diff mbox series

[net-next] rxrpc: Simplify the allocation of slab caches

Message ID 20240201100924.210298-1-chentao@kylinos.cn (mailing list archive)
State Not Applicable
Delegated to: Netdev Maintainers
Series [net-next] rxrpc: Simplify the allocation of slab caches

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1048 this patch: 1048
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 0 of 0 maintainers
netdev/build_clang success Errors and warnings before: 1065 this patch: 1065
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1065 this patch: 1065
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 10 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-02-02--15-00 (tests: 720)

Commit Message

Kunwu Feb. 1, 2024, 10:09 a.m. UTC
Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
to simplify the creation of SLAB caches.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
---
 net/rxrpc/af_rxrpc.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Jiri Pirko Feb. 1, 2024, 12:47 p.m. UTC | #1
Thu, Feb 01, 2024 at 11:09:24AM CET, chentao@kylinos.cn wrote:
>Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
>to simplify the creation of SLAB caches.
>
>Signed-off-by: Kunwu Chan <chentao@kylinos.cn>

Reviewed-by: Jiri Pirko <jiri@nvidia.com>

btw, why don't you bulk these changes into patchsets of 15 patches? Or,
given the low complexity of the patch, just merge multiple patches
that change similar locations together.
Markus Elfring Feb. 1, 2024, 3:38 p.m. UTC | #2
> Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
> to simplify the creation of SLAB caches.

* Please replace the word “new” with a reference to commit 0a31bd5f2bbb6473ef9d24f0063ca91cfa678b64
  ("KMEM_CACHE(): simplify slab cache creation").

  See also related background information from 2007-05-06.

* Would you like to take another look at possibilities to group
  similar source code transformations into patch series?


Regards,
Markus
Kunwu Feb. 2, 2024, 9:46 a.m. UTC | #3
On 2024/2/1 20:47, Jiri Pirko wrote:
> Thu, Feb 01, 2024 at 11:09:24AM CET, chentao@kylinos.cn wrote:
>> Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
>> to simplify the creation of SLAB caches.
>>
>> Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
> 
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> 
> btw, why don't you bulk these changes into patchsets of 15 patches? Or,
> given the low complexity of the patch, just merge multiple patches
> that change similar locations together.
Sorry, I haven't sent a patchset before; I'm worried about messing things up.
I'll try to handle similar issues the way you recommended in the future.
Thank you for the reminder.
Jiri Pirko Feb. 2, 2024, 10:28 a.m. UTC | #4
Fri, Feb 02, 2024 at 10:46:33AM CET, chentao@kylinos.cn wrote:
>On 2024/2/1 20:47, Jiri Pirko wrote:
>> Thu, Feb 01, 2024 at 11:09:24AM CET, chentao@kylinos.cn wrote:
>> > Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
>> > to simplify the creation of SLAB caches.
>> > 
>> > Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
>> 
>> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
>> 
>> btw, why don't you bulk these changes into patchsets of 15 patches? Or,
>> given the low complexity of the patch, just merge multiple patches
>> that change similar locations together.
>Sorry, I haven't sent a patchset before; I'm worried about messing things up.
>I'll try to handle similar issues the way you recommended in the future.
>Thank you for the reminder.

Also, please fix your email client. It breaks threads.

>-- 
>Thanks,
>  Kunwu
>
Kunwu Feb. 4, 2024, 3 a.m. UTC | #5
On 2024/2/2 18:28, Jiri Pirko wrote:
> Fri, Feb 02, 2024 at 10:46:33AM CET, chentao@kylinos.cn wrote:
>> On 2024/2/1 20:47, Jiri Pirko wrote:
>>> Thu, Feb 01, 2024 at 11:09:24AM CET, chentao@kylinos.cn wrote:
>>>> Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
>>>> to simplify the creation of SLAB caches.
>>>>
>>>> Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
>>>
>>> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
>>>
>>> btw, why don't you bulk these changes into patchsets of 15 patches? Or,
>>> given the low complexity of the patch, just merge multiple patches
>>> that change similar locations together.
>> Sorry, I haven't sent a patchset before; I'm worried about messing things up.
>> I'll try to handle similar issues the way you recommended in the future.
>> Thank you for the reminder.
> 
> Also, please fix your email client. It breaks threads.
Thanks for the reminder. It may be my company's email gateway doing
something bad to the mail.
The last email was quarantined, and this one was too.
I asked the administrator to release it temporarily, but it looks like
there is still a problem with my email gateway.

I'll try using a new email address.

> 
>> -- 
>> Thanks,
>>   Kunwu
>>

Patch

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 465bfe5eb061..1326a1bff2d7 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -1026,9 +1026,7 @@  static int __init af_rxrpc_init(void)
 
 	ret = -ENOMEM;
 	rxrpc_gen_version_string();
-	rxrpc_call_jar = kmem_cache_create(
-		"rxrpc_call_jar", sizeof(struct rxrpc_call), 0,
-		SLAB_HWCACHE_ALIGN, NULL);
+	rxrpc_call_jar = KMEM_CACHE(rxrpc_call, SLAB_HWCACHE_ALIGN);
 	if (!rxrpc_call_jar) {
 		pr_notice("Failed to allocate call jar\n");
 		goto error_call_jar;