mm/slub: remove useless kmem_cache_debug

Message ID: 20200810080758.940-1-wuyun.wu@huawei.com
State: New
Series: mm/slub: remove useless kmem_cache_debug

Commit Message

Abel Wu Aug. 10, 2020, 8:07 a.m. UTC
From: Abel Wu <wuyun.wu@huawei.com>

The commit below is incomplete, as it didn't handle the add_full() part.
commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")

Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--
1.8.3.1
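
For context, kmem_cache_debug() at the time reduced to a flags test under
CONFIG_SLUB_DEBUG and to a constant zero otherwise, roughly the following
(a paraphrased sketch, not the exact tree source):

static inline int kmem_cache_debug(struct kmem_cache *s)
{
#ifdef CONFIG_SLUB_DEBUG
	/* True only when some SLAB_DEBUG_FLAGS bit is set on the cache. */
	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
#else
	return 0;	/* Lets debug-only branches compile away. */
#endif
}

Note that replacing the call with a bare #ifdef, as the patch does, makes
the branch unconditional whenever CONFIG_SLUB_DEBUG is built in, even for
caches that have no debug flags set; this is the locking overhead the
discussion below turns to.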

Comments

David Rientjes Aug. 10, 2020, 7:44 p.m. UTC | #1
On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:

> From: Abel Wu <wuyun.wu@huawei.com>
> 
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
> 
> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index fe81773..0b021b7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  		}
>  	} else {
>  		m = M_FULL;
> -		if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> +		if (!lock) {
>  			lock = 1;
>  			/*
>  			 * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  			 */
>  			spin_lock(&n->list_lock);
>  		}
> +#endif
>  	}
> 
>  	if (l != m) {

This should be functionally safe. I wonder, however, if it would make
sense to check only for SLAB_STORE_USER here instead of
kmem_cache_debug(), since that should be the only context in which we
need the list_lock for add_full(). It seems more explicit.
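
A minimal sketch of the variant David suggests, checking SLAB_STORE_USER
directly since add_full() bails out early unless that flag is set on the
cache (illustrative only, not the posted patch):

	} else {
		m = M_FULL;
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * list_lock is only needed here for add_full(),
			 * and add_full() is a no-op unless the cache
			 * has SLAB_STORE_USER set.
			 */
			spin_lock(&n->list_lock);
		}
	}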
Abel Wu Aug. 11, 2020, 1:29 a.m. UTC | #2
On 2020/8/11 3:44, David Rientjes wrote:
> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
> 
>> From: Abel Wu <wuyun.wu@huawei.com>
>>
>> The commit below is incomplete, as it didn't handle the add_full() part.
>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>
>> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
>> ---
>>  mm/slub.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index fe81773..0b021b7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  		}
>>  	} else {
>>  		m = M_FULL;
>> -		if (kmem_cache_debug(s) && !lock) {
>> +#ifdef CONFIG_SLUB_DEBUG
>> +		if (!lock) {
>>  			lock = 1;
>>  			/*
>>  			 * This also ensures that the scanning of full
>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  			 */
>>  			spin_lock(&n->list_lock);
>>  		}
>> +#endif
>>  	}
>>
>>  	if (l != m) {
> 
> This should be functionally safe. I wonder, however, if it would make
> sense to check only for SLAB_STORE_USER here instead of
> kmem_cache_debug(), since that should be the only context in which we
> need the list_lock for add_full(). It seems more explicit.
> 
Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macros.
I will resend the patch later.

Thanks,
	Abel
Abel Wu Aug. 11, 2020, 1:50 a.m. UTC | #3
On 2020/8/11 9:29, Abel Wu wrote:
> 
> 
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>>
>>> From: Abel Wu <wuyun.wu@huawei.com>
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
>>> ---
>>>  mm/slub.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  		}
>>>  	} else {
>>>  		m = M_FULL;
>>> -		if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> +		if (!lock) {
>>>  			lock = 1;
>>>  			/*
>>>  			 * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  			 */
>>>  			spin_lock(&n->list_lock);
>>>  		}
>>> +#endif
>>>  	}
>>>
>>>  	if (l != m) {
>>
>> This should be functionally safe. I wonder, however, if it would make
>> sense to check only for SLAB_STORE_USER here instead of
>> kmem_cache_debug(), since that should be the only context in which we
>> need the list_lock for add_full(). It seems more explicit.
>>
> Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macros.
> I will resend the patch later.
> 
> Thanks,
> 	Abel
> 
Wait... it still needs the CONFIG_SLUB_DEBUG wrapper, but checking the
flag does avoid the locking overhead when SLAB_STORE_USER is not set (as
you said). I will keep CONFIG_SLUB_DEBUG in my new patch.
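
A sketch of what the reworked hunk might look like, keeping the
CONFIG_SLUB_DEBUG guard while making the flag check explicit (an
assumption about the follow-up, not the v2 patch itself):

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}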

Patch

diff --git a/mm/slub.c b/mm/slub.c
index fe81773..0b021b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}

 	if (l != m) {