
[v1] mm/slab: Optimize the code logic in find_mergeable()

Message ID: 20240904074037.710692-1-xavier_qy@163.com (mailing list archive)
State: New

Commit Message

Xavier Sept. 4, 2024, 7:40 a.m. UTC
We can assess the flags first; if the cache is unmergeable, there is
no need to calculate the size and align.

Signed-off-by: Xavier <xavier_qy@163.com>
---
 mm/slab_common.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

Comments

Vlastimil Babka Sept. 5, 2024, 12:47 p.m. UTC | #1
On 9/4/24 9:40 AM, Xavier wrote:
> We can assess the flags first; if the cache is unmergeable, there is
> no need to calculate the size and align.
> 
> Signed-off-by: Xavier <xavier_qy@163.com>

OK, applied to slab/for-next. Thanks.

Note this is not a hotpath, so there is no need to micro-optimize, but
the change makes the code cleaner - calculate_alignment() no longer uses
flags before we adjust them with kmem_cache_flags(). This currently
doesn't matter, as calculate_alignment() only cares about
SLAB_HWCACHE_ALIGN, which is not affected, but the new ordering will be
more robust in case of future changes.
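
[Editor's note: for context, a simplified sketch of calculate_alignment(),
paraphrased rather than the verbatim mm/slab_common.c code (the real
function also clamps against the architecture's minimum slab alignment).
The only creation flag it consults today is SLAB_HWCACHE_ALIGN, which
kmem_cache_flags() leaves untouched, which is why the reordering changes
no behavior at present:]

static unsigned int calculate_alignment(slab_flags_t flags,
					unsigned int align, unsigned int size)
{
	/*
	 * Honor hardware cache alignment if requested, but do not
	 * waste space on small objects: halve the alignment while
	 * the object still fits at least twice per cache line.
	 */
	if (flags & SLAB_HWCACHE_ALIGN) {
		unsigned int ralign = cache_line_size();

		while (size <= ralign / 2)
			ralign /= 2;
		align = max(align, ralign);
	}

	/* Alignment is always at least pointer-sized. */
	return ALIGN(align, sizeof(void *));
}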

> ---
>  mm/slab_common.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 40b582a014b8..aaf5989f7ffe 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -169,14 +169,15 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
>  	if (ctor)
>  		return NULL;
>  
> -	size = ALIGN(size, sizeof(void *));
> -	align = calculate_alignment(flags, align, size);
> -	size = ALIGN(size, align);
>  	flags = kmem_cache_flags(flags, name);
>  
>  	if (flags & SLAB_NEVER_MERGE)
>  		return NULL;
>  
> +	size = ALIGN(size, sizeof(void *));
> +	align = calculate_alignment(flags, align, size);
> +	size = ALIGN(size, align);
> +
>  	list_for_each_entry_reverse(s, &slab_caches, list) {
>  		if (slab_unmergeable(s))
>  			continue;

Patch

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 40b582a014b8..aaf5989f7ffe 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -169,14 +169,15 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	if (ctor)
 		return NULL;
 
-	size = ALIGN(size, sizeof(void *));
-	align = calculate_alignment(flags, align, size);
-	size = ALIGN(size, align);
 	flags = kmem_cache_flags(flags, name);
 
 	if (flags & SLAB_NEVER_MERGE)
 		return NULL;
 
+	size = ALIGN(size, sizeof(void *));
+	align = calculate_alignment(flags, align, size);
+	size = ALIGN(size, align);
+
 	list_for_each_entry_reverse(s, &slab_caches, list) {
 		if (slab_unmergeable(s))
 			continue;
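
[Editor's note: for illustration, not from the thread: find_mergeable()
backs slab cache merging at creation time. A minimal, hypothetical sketch
of the effect, assuming a kernel-module context and made-up cache names -
with merging enabled (i.e. without slab_nomerge on the kernel command
line), the second create may return an alias of the first cache rather
than a brand-new one:]

/*
 * Hypothetical example: same object size, default alignment,
 * compatible flags, and no ctor - the preconditions that
 * find_mergeable() checks before reusing an existing cache.
 */
struct kmem_cache *a = kmem_cache_create("demo_a", 128, 0, 0, NULL);
struct kmem_cache *b = kmem_cache_create("demo_b", 128, 0, 0, NULL);
/* a and b may now share one underlying kmem_cache. */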