
[v2,1/3] mm/slub: fix the race between validate_slab and slab_free

Message ID 20220712022807.44113-1-rongwei.wang@linux.alibaba.com (mailing list archive)
State New
Series [v2,1/3] mm/slub: fix the race between validate_slab and slab_free

Commit Message

Rongwei Wang July 12, 2022, 2:28 a.m. UTC
In use cases where slab objects are allocated and freed frequently,
error messages such as "Left Redzone overwritten" and "First byte
0xbb instead of 0xcc" can be printed while validating slabs. That
happens because an object has already been poisoned with
SLUB_RED_INACTIVE but has not yet been added back to the slab's
freelist, and slab validation can run in between these two steps.

This does not mean the slab cannot work correctly, but these
confusing messages make slab debugging harder than it needs to be.
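
To make the race window concrete, here is an illustrative interleaving
under the pre-patch locking (validation is typically triggered by
writing 1 to /sys/kernel/slab/<cache>/validate). This is a simplified
sketch, not a captured trace:

  CPU A: __slab_free()                  CPU B: validate_slab_cache()
  --------------------                  ----------------------------
  free_debug_processing()
    take list_lock + slab_lock
    init_object(s, object,
                SLUB_RED_INACTIVE)
    drop slab_lock + list_lock
                                        take list_lock
                                        validate_slab()
                                          object is poisoned as free but
                                          not yet on slab->freelist, so
                                          it is checked as if allocated
                                          -> "Left Redzone overwritten",
                                          "First byte 0xbb instead of
                                          0xcc"
                                        drop list_lock
  cmpxchg_double_slab() finally links
  the object back into slab->freelist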

Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 mm/slub.c | 43 +++++++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 18 deletions(-)

Comments

Rongwei Wang July 12, 2022, 2:57 a.m. UTC | #1
Hi

Following everyone's comments on PATCH v1 [1], I have rewritten the
first patch, "mm/slub: fix the race between validate_slab and
slab_free". These changes now only take effect when SLUB debugging is
enabled. Performance test results can be found in [2] (thanks to
Christoph for the suggestion).

changelog
v1->v2:
* mm/slub: fix the race between validate_slab and slab_free
  make the changes take effect only when SLUB debugging is enabled.

* mm/slub: improve consistency of nr_slabs count
  no changes.

* mm/slub: delete confusing pr_err when debugging slub
  only delete the confusing pr_err().

For convenience, the latest test data is shown here (copied from [2]):

testcase used: https://github.com/netoptimizer/prototype-kernel.git
(slab_test)
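
For reference, the kind of measurement slab_test performs per object
size is roughly the following. This is a simplified sketch in the
spirit of that module, not the actual prototype-kernel code; it only
illustrates how the cycles-per-operation numbers below are obtained.

#include <linux/slab.h>
#include <linux/timex.h>
#include <linux/printk.h>

/* Time 'loops' kmalloc() calls followed by 'loops' kfree() calls. */
static int time_kmalloc(size_t size, int loops)
{
	void **objs;
	cycles_t start, end;
	int i;

	objs = kmalloc_array(loops, sizeof(void *), GFP_KERNEL);
	if (!objs)
		return -ENOMEM;

	start = get_cycles();
	for (i = 0; i < loops; i++)
		objs[i] = kmalloc(size, GFP_KERNEL);
	end = get_cycles();
	pr_info("kmalloc(%zu): %llu cycles per alloc\n",
		size, (unsigned long long)(end - start) / loops);

	start = get_cycles();
	for (i = 0; i < loops; i++)
		kfree(objs[i]);
	end = get_cycles();
	pr_info("kfree(%zu): %llu cycles per free\n",
		size, (unsigned long long)(end - start) / loops);

	kfree(objs);
	return 0;
}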

Single thread testing
1. Kmalloc: Repeatedly allocate then free test
                   before                   fix
                   kmalloc     kfree        kmalloc     kfree
10000 times 8      4 cycles    5 cycles     4 cycles    5 cycles
10000 times 16     3 cycles    5 cycles     3 cycles    5 cycles
10000 times 32     3 cycles    5 cycles     3 cycles    5 cycles
10000 times 64     3 cycles    5 cycles     3 cycles    5 cycles
10000 times 128    3 cycles    5 cycles     3 cycles    5 cycles
10000 times 256    14 cycles   9 cycles     6 cycles    8 cycles
10000 times 512    9 cycles    8 cycles     9 cycles    10 cycles
10000 times 1024   48 cycles   10 cycles    6 cycles    10 cycles
10000 times 2048   31 cycles   12 cycles    35 cycles   13 cycles
10000 times 4096   96 cycles   17 cycles    96 cycles   18 cycles
10000 times 8192   188 cycles  27 cycles    190 cycles  27 cycles
10000 times 16384  117 cycles  38 cycles    115 cycles  38 cycles

2. Kmalloc: alloc/free test
                                    before        fix
10000 times kmalloc(8)/kfree      3 cycles      3 cycles
10000 times kmalloc(16)/kfree     3 cycles      3 cycles
10000 times kmalloc(32)/kfree     3 cycles      3 cycles
10000 times kmalloc(64)/kfree     3 cycles      3 cycles
10000 times kmalloc(128)/kfree    3 cycles      3 cycles
10000 times kmalloc(256)/kfree    3 cycles      3 cycles
10000 times kmalloc(512)/kfree    3 cycles      3 cycles
10000 times kmalloc(1024)/kfree   3 cycles      3 cycles
10000 times kmalloc(2048)/kfree   3 cycles      3 cycles
10000 times kmalloc(4096)/kfree   3 cycles      3 cycles
10000 times kmalloc(8192)/kfree   3 cycles      3 cycles
10000 times kmalloc(16384)/kfree  33 cycles     33 cycles

Concurrent allocs
                                  before            fix
Kmalloc N*alloc N*free(8)       Average=13/14     Average=14/15
Kmalloc N*alloc N*free(16)      Average=13/15     Average=13/15
Kmalloc N*alloc N*free(32)      Average=13/15     Average=13/15
Kmalloc N*alloc N*free(64)      Average=13/15     Average=13/15
Kmalloc N*alloc N*free(128)     Average=13/15     Average=13/15
Kmalloc N*alloc N*free(256)     Average=137/29    Average=134/39
Kmalloc N*alloc N*free(512)     Average=61/29     Average=64/28
Kmalloc N*alloc N*free(1024)    Average=465/50    Average=656/55
Kmalloc N*alloc N*free(2048)    Average=503/97    Average=422/97
Kmalloc N*alloc N*free(4096)    Average=1592/206  Average=1624/207
		
Kmalloc N*(alloc free)(8)       Average=3         Average=3
Kmalloc N*(alloc free)(16)      Average=3         Average=3
Kmalloc N*(alloc free)(32)      Average=3         Average=3
Kmalloc N*(alloc free)(64)      Average=3         Average=3
Kmalloc N*(alloc free)(128)     Average=3         Average=3
Kmalloc N*(alloc free)(256)     Average=3         Average=3
Kmalloc N*(alloc free)(512)     Average=3         Average=3
Kmalloc N*(alloc free)(1024)    Average=3         Average=3
Kmalloc N*(alloc free)(2048)    Average=3         Average=3
Kmalloc N*(alloc free)(4096)    Average=3         Average=3

The above data seems to indicate that this modification (which only
takes effect when kmem_cache_debug(s) is true) does not introduce a
significant performance impact. If you have suggestions for a better
test case, please let me know. Thanks!

[1] 
https://lore.kernel.org/linux-mm/alpine.DEB.2.22.394.2206081417370.465021@gentwo.de/T/#m2832b1983a229183aabfd6eb71a2eb39ecd0d08a

[2] 
https://lore.kernel.org/linux-mm/alpine.DEB.2.22.394.2206081417370.465021@gentwo.de/T/#m75f1f32ad590fb13ac9e771030fafd15c7db8cb1

Thanks for your time!

On 7/12/22 10:28 AM, Rongwei Wang wrote:
> In use cases where slab objects are allocated and freed frequently,
> error messages such as "Left Redzone overwritten" and "First byte
> 0xbb instead of 0xcc" can be printed while validating slabs. That
> happens because an object has already been poisoned with
> SLUB_RED_INACTIVE but has not yet been added back to the slab's
> freelist, and slab validation can run in between these two steps.
> 
> This does not mean the slab cannot work correctly, but these
> confusing messages make slab debugging harder than it needs to be.
> 
> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> ---
>   mm/slub.c | 43 +++++++++++++++++++++++++------------------
>   1 file changed, 25 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index b1281b8654bd..e950d8df8380 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1391,18 +1391,16 @@ static noinline int free_debug_processing(
>   	void *head, void *tail, int bulk_cnt,
>   	unsigned long addr)
>   {
> -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>   	void *object = head;
>   	int cnt = 0;
> -	unsigned long flags, flags2;
> +	unsigned long flags;
>   	int ret = 0;
>   	depot_stack_handle_t handle = 0;
>   
>   	if (s->flags & SLAB_STORE_USER)
>   		handle = set_track_prepare();
>   
> -	spin_lock_irqsave(&n->list_lock, flags);
> -	slab_lock(slab, &flags2);
> +	slab_lock(slab, &flags);
>   
>   	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>   		if (!check_slab(s, slab))
> @@ -1435,8 +1433,7 @@ static noinline int free_debug_processing(
>   		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
>   			 bulk_cnt, cnt);
>   
> -	slab_unlock(slab, &flags2);
> -	spin_unlock_irqrestore(&n->list_lock, flags);
> +	slab_unlock(slab, &flags);
>   	if (!ret)
>   		slab_fix(s, "Object at 0x%p not freed", object);
>   	return ret;
> @@ -3330,7 +3327,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   
>   {
>   	void *prior;
> -	int was_frozen;
> +	int was_frozen, to_take_off = 0;
>   	struct slab new;
>   	unsigned long counters;
>   	struct kmem_cache_node *n = NULL;
> @@ -3341,14 +3338,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   	if (kfence_free(head))
>   		return;
>   
> -	if (kmem_cache_debug(s) &&
> -	    !free_debug_processing(s, slab, head, tail, cnt, addr))
> -		return;
> +	n = get_node(s, slab_nid(slab));
> +	if (kmem_cache_debug(s)) {
> +		int ret;
>   
> -	do {
> -		if (unlikely(n)) {
> +		spin_lock_irqsave(&n->list_lock, flags);
> +		ret = free_debug_processing(s, slab, head, tail, cnt, addr);
> +		if (!ret) {
>   			spin_unlock_irqrestore(&n->list_lock, flags);
> -			n = NULL;
> +			return;
> +		}
> +	}
> +
> +	do {
> +		if (unlikely(to_take_off)) {
> +			if (!kmem_cache_debug(s))
> +				spin_unlock_irqrestore(&n->list_lock, flags);
> +			to_take_off = 0;
>   		}
>   		prior = slab->freelist;
>   		counters = slab->counters;
> @@ -3369,8 +3375,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   				new.frozen = 1;
>   
>   			} else { /* Needs to be taken off a list */
> -
> -				n = get_node(s, slab_nid(slab));
>   				/*
>   				 * Speculatively acquire the list_lock.
>   				 * If the cmpxchg does not succeed then we may
> @@ -3379,8 +3383,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   				 * Otherwise the list_lock will synchronize with
>   				 * other processors updating the list of slabs.
>   				 */
> -				spin_lock_irqsave(&n->list_lock, flags);
> +				if (!kmem_cache_debug(s))
> +					spin_lock_irqsave(&n->list_lock, flags);
>   
> +				to_take_off = 1;
>   			}
>   		}
>   
> @@ -3389,8 +3395,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>   		head, new.counters,
>   		"__slab_free"));
>   
> -	if (likely(!n)) {
> -
> +	if (likely(!to_take_off)) {
> +		if (kmem_cache_debug(s))
> +			spin_unlock_irqrestore(&n->list_lock, flags);
>   		if (likely(was_frozen)) {
>   			/*
>   			 * The list lock was not taken therefore no list
Hyeonggon Yoo July 13, 2022, 10:22 a.m. UTC | #2
On Tue, Jul 12, 2022 at 10:28:05AM +0800, Rongwei Wang wrote:
> In use cases where slab objects are allocated and freed frequently,
> error messages such as "Left Redzone overwritten" and "First byte
> 0xbb instead of 0xcc" can be printed while validating slabs. That
> happens because an object has already been poisoned with
> SLUB_RED_INACTIVE but has not yet been added back to the slab's
> freelist, and slab validation can run in between these two steps.
> 
> This does not mean the slab cannot work correctly, but these
> confusing messages make slab debugging harder than it needs to be.
> 
> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> ---
>  mm/slub.c | 43 +++++++++++++++++++++++++------------------
>  1 file changed, 25 insertions(+), 18 deletions(-)
>

This makes the code more complex.

A part of me says it may be more pleasant to split out the
implementation for allocating from caches with debugging enabled.
That would make it simpler.

something like:

__slab_alloc() {
	if (kmem_cache_debug(s))
		slab_alloc_debug()
	else
		___slab_alloc()
}

slab_free() {
	if (kmem_cache_debug(s))
		slab_free_debug()
	else
		__do_slab_free()
}

See also:
	https://lore.kernel.org/lkml/faf416b9-f46c-8534-7fb7-557c046a564d@suse.cz/
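
To make that a bit more concrete, below is a rough sketch of what the
freeing side of such a split could look like. slab_free_debug() is a
hypothetical name and the body is heavily simplified (partial/full
list handling and discarding of empty slabs are omitted); it is not
existing mm/slub.c code:

static void slab_free_debug(struct kmem_cache *s, struct slab *slab,
			    void *head, void *tail, int cnt,
			    unsigned long addr)
{
	unsigned long flags;
	struct kmem_cache_node *n = get_node(s, slab_nid(slab));

	/*
	 * Hold list_lock across both the consistency checks and the
	 * freelist update, so validate_slab() (which also runs under
	 * list_lock) can never observe an object that is poisoned but
	 * not yet on the freelist.
	 */
	spin_lock_irqsave(&n->list_lock, flags);

	if (!free_debug_processing(s, slab, head, tail, cnt, addr)) {
		spin_unlock_irqrestore(&n->list_lock, flags);
		return;
	}

	/* Link the freed objects back into the slab's freelist. */
	set_freepointer(s, tail, slab->freelist);
	slab->freelist = head;
	slab->inuse -= cnt;

	spin_unlock_irqrestore(&n->list_lock, flags);
}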

> diff --git a/mm/slub.c b/mm/slub.c
> index b1281b8654bd..e950d8df8380 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1391,18 +1391,16 @@ static noinline int free_debug_processing(
>  	void *head, void *tail, int bulk_cnt,
>  	unsigned long addr)
>  {
> -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>  	void *object = head;
>  	int cnt = 0;
> -	unsigned long flags, flags2;
> +	unsigned long flags;
>  	int ret = 0;
>  	depot_stack_handle_t handle = 0;
>  
>  	if (s->flags & SLAB_STORE_USER)
>  		handle = set_track_prepare();
>  
> -	spin_lock_irqsave(&n->list_lock, flags);
> -	slab_lock(slab, &flags2);
> +	slab_lock(slab, &flags);
>  
>  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>  		if (!check_slab(s, slab))
> @@ -1435,8 +1433,7 @@ static noinline int free_debug_processing(
>  		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
>  			 bulk_cnt, cnt);
>  
> -	slab_unlock(slab, &flags2);
> -	spin_unlock_irqrestore(&n->list_lock, flags);
> +	slab_unlock(slab, &flags);
>  	if (!ret)
>  		slab_fix(s, "Object at 0x%p not freed", object);
>  	return ret;
> @@ -3330,7 +3327,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  
>  {
>  	void *prior;
> -	int was_frozen;
> +	int was_frozen, to_take_off = 0;
>  	struct slab new;
>  	unsigned long counters;
>  	struct kmem_cache_node *n = NULL;
> @@ -3341,14 +3338,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  	if (kfence_free(head))
>  		return;
>  
> -	if (kmem_cache_debug(s) &&
> -	    !free_debug_processing(s, slab, head, tail, cnt, addr))
> -		return;
> +	n = get_node(s, slab_nid(slab));
> +	if (kmem_cache_debug(s)) {
> +		int ret;
>  
> -	do {
> -		if (unlikely(n)) {
> +		spin_lock_irqsave(&n->list_lock, flags);
> +		ret = free_debug_processing(s, slab, head, tail, cnt, addr);
> +		if (!ret) {
>  			spin_unlock_irqrestore(&n->list_lock, flags);
> -			n = NULL;
> +			return;
> +		}
> +	}
> +
> +	do {
> +		if (unlikely(to_take_off)) {
> +			if (!kmem_cache_debug(s))
> +				spin_unlock_irqrestore(&n->list_lock, flags);
> +			to_take_off = 0;
>  		}
>  		prior = slab->freelist;
>  		counters = slab->counters;
> @@ -3369,8 +3375,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  				new.frozen = 1;
>  
>  			} else { /* Needs to be taken off a list */
> -
> -				n = get_node(s, slab_nid(slab));
>  				/*
>  				 * Speculatively acquire the list_lock.
>  				 * If the cmpxchg does not succeed then we may
> @@ -3379,8 +3383,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  				 * Otherwise the list_lock will synchronize with
>  				 * other processors updating the list of slabs.
>  				 */
> -				spin_lock_irqsave(&n->list_lock, flags);
> +				if (!kmem_cache_debug(s))
> +					spin_lock_irqsave(&n->list_lock, flags);
>  
> +				to_take_off = 1;
>  			}
>  		}
>  
> @@ -3389,8 +3395,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>  		head, new.counters,
>  		"__slab_free"));
>  
> -	if (likely(!n)) {
> -
> +	if (likely(!to_take_off)) {
> +		if (kmem_cache_debug(s))
> +			spin_unlock_irqrestore(&n->list_lock, flags);
>  		if (likely(was_frozen)) {
>  			/*
>  			 * The list lock was not taken therefore no list
> -- 
> 2.27.0
>
Rongwei Wang July 13, 2022, 12:10 p.m. UTC | #3
On 7/13/22 6:22 PM, Hyeonggon Yoo wrote:
> On Tue, Jul 12, 2022 at 10:28:05AM +0800, Rongwei Wang wrote:
>> In use cases where slab objects are allocated and freed frequently,
>> error messages such as "Left Redzone overwritten" and "First byte
>> 0xbb instead of 0xcc" can be printed while validating slabs. That
>> happens because an object has already been poisoned with
>> SLUB_RED_INACTIVE but has not yet been added back to the slab's
>> freelist, and slab validation can run in between these two steps.
>>
>> This does not mean the slab cannot work correctly, but these
>> confusing messages make slab debugging harder than it needs to be.
>>
>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
>> ---
>>   mm/slub.c | 43 +++++++++++++++++++++++++------------------
>>   1 file changed, 25 insertions(+), 18 deletions(-)
>>
> 
> This makes the code more complex.
> 
> A part of me says it may be more pleasant to split implementation
> allocating from caches for debugging. That would make it simpler.
> 
> something like:
> 
> __slab_alloc() {
> 	if (kmem_cache_debug(s))
> 		slab_alloc_debug()
> 	else
> 		___slab_alloc()
> }
> 
> slab_free() {
> 	if (kmem_cache_debug(s))
> 		slab_free_debug()
> 	else
> 		__do_slab_free()
> }
Oh, I had the same idea, but I was not sure whether it would be
accepted because it needs more changes than the current approach.
Since you agree with this way, I can rewrite this code.

Thanks.
> 
> See also:
> 	https://lore.kernel.org/lkml/faf416b9-f46c-8534-7fb7-557c046a564d@suse.cz/
Thanks, it seems that I had missed it.
> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index b1281b8654bd..e950d8df8380 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1391,18 +1391,16 @@ static noinline int free_debug_processing(
>>   	void *head, void *tail, int bulk_cnt,
>>   	unsigned long addr)
>>   {
>> -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>>   	void *object = head;
>>   	int cnt = 0;
>> -	unsigned long flags, flags2;
>> +	unsigned long flags;
>>   	int ret = 0;
>>   	depot_stack_handle_t handle = 0;
>>   
>>   	if (s->flags & SLAB_STORE_USER)
>>   		handle = set_track_prepare();
>>   
>> -	spin_lock_irqsave(&n->list_lock, flags);
>> -	slab_lock(slab, &flags2);
>> +	slab_lock(slab, &flags);
>>   
>>   	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>>   		if (!check_slab(s, slab))
>> @@ -1435,8 +1433,7 @@ static noinline int free_debug_processing(
>>   		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
>>   			 bulk_cnt, cnt);
>>   
>> -	slab_unlock(slab, &flags2);
>> -	spin_unlock_irqrestore(&n->list_lock, flags);
>> +	slab_unlock(slab, &flags);
>>   	if (!ret)
>>   		slab_fix(s, "Object at 0x%p not freed", object);
>>   	return ret;
>> @@ -3330,7 +3327,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>   
>>   {
>>   	void *prior;
>> -	int was_frozen;
>> +	int was_frozen, to_take_off = 0;
>>   	struct slab new;
>>   	unsigned long counters;
>>   	struct kmem_cache_node *n = NULL;
>> @@ -3341,14 +3338,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>   	if (kfence_free(head))
>>   		return;
>>   
>> -	if (kmem_cache_debug(s) &&
>> -	    !free_debug_processing(s, slab, head, tail, cnt, addr))
>> -		return;
>> +	n = get_node(s, slab_nid(slab));
>> +	if (kmem_cache_debug(s)) {
>> +		int ret;
>>   
>> -	do {
>> -		if (unlikely(n)) {
>> +		spin_lock_irqsave(&n->list_lock, flags);
>> +		ret = free_debug_processing(s, slab, head, tail, cnt, addr);
>> +		if (!ret) {
>>   			spin_unlock_irqrestore(&n->list_lock, flags);
>> -			n = NULL;
>> +			return;
>> +		}
>> +	}
>> +
>> +	do {
>> +		if (unlikely(to_take_off)) {
>> +			if (!kmem_cache_debug(s))
>> +				spin_unlock_irqrestore(&n->list_lock, flags);
>> +			to_take_off = 0;
>>   		}
>>   		prior = slab->freelist;
>>   		counters = slab->counters;
>> @@ -3369,8 +3375,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>   				new.frozen = 1;
>>   
>>   			} else { /* Needs to be taken off a list */
>> -
>> -				n = get_node(s, slab_nid(slab));
>>   				/*
>>   				 * Speculatively acquire the list_lock.
>>   				 * If the cmpxchg does not succeed then we may
>> @@ -3379,8 +3383,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>   				 * Otherwise the list_lock will synchronize with
>>   				 * other processors updating the list of slabs.
>>   				 */
>> -				spin_lock_irqsave(&n->list_lock, flags);
>> +				if (!kmem_cache_debug(s))
>> +					spin_lock_irqsave(&n->list_lock, flags);
>>   
>> +				to_take_off = 1;
>>   			}
>>   		}
>>   
>> @@ -3389,8 +3395,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
>>   		head, new.counters,
>>   		"__slab_free"));
>>   
>> -	if (likely(!n)) {
>> -
>> +	if (likely(!to_take_off)) {
>> +		if (kmem_cache_debug(s))
>> +			spin_unlock_irqrestore(&n->list_lock, flags);
>>   		if (likely(was_frozen)) {
>>   			/*
>>   			 * The list lock was not taken therefore no list
>> -- 
>> 2.27.0
>>

Patch

diff --git a/mm/slub.c b/mm/slub.c
index b1281b8654bd..e950d8df8380 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1391,18 +1391,16 @@  static noinline int free_debug_processing(
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr)
 {
-	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	void *object = head;
 	int cnt = 0;
-	unsigned long flags, flags2;
+	unsigned long flags;
 	int ret = 0;
 	depot_stack_handle_t handle = 0;
 
 	if (s->flags & SLAB_STORE_USER)
 		handle = set_track_prepare();
 
-	spin_lock_irqsave(&n->list_lock, flags);
-	slab_lock(slab, &flags2);
+	slab_lock(slab, &flags);
 
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
 		if (!check_slab(s, slab))
@@ -1435,8 +1433,7 @@  static noinline int free_debug_processing(
 		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
 			 bulk_cnt, cnt);
 
-	slab_unlock(slab, &flags2);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	slab_unlock(slab, &flags);
 	if (!ret)
 		slab_fix(s, "Object at 0x%p not freed", object);
 	return ret;
@@ -3330,7 +3327,7 @@  static void __slab_free(struct kmem_cache *s, struct slab *slab,
 
 {
 	void *prior;
-	int was_frozen;
+	int was_frozen, to_take_off = 0;
 	struct slab new;
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
@@ -3341,14 +3338,23 @@  static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	if (kfence_free(head))
 		return;
 
-	if (kmem_cache_debug(s) &&
-	    !free_debug_processing(s, slab, head, tail, cnt, addr))
-		return;
+	n = get_node(s, slab_nid(slab));
+	if (kmem_cache_debug(s)) {
+		int ret;
 
-	do {
-		if (unlikely(n)) {
+		spin_lock_irqsave(&n->list_lock, flags);
+		ret = free_debug_processing(s, slab, head, tail, cnt, addr);
+		if (!ret) {
 			spin_unlock_irqrestore(&n->list_lock, flags);
-			n = NULL;
+			return;
+		}
+	}
+
+	do {
+		if (unlikely(to_take_off)) {
+			if (!kmem_cache_debug(s))
+				spin_unlock_irqrestore(&n->list_lock, flags);
+			to_take_off = 0;
 		}
 		prior = slab->freelist;
 		counters = slab->counters;
@@ -3369,8 +3375,6 @@  static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				new.frozen = 1;
 
 			} else { /* Needs to be taken off a list */
-
-				n = get_node(s, slab_nid(slab));
 				/*
 				 * Speculatively acquire the list_lock.
 				 * If the cmpxchg does not succeed then we may
@@ -3379,8 +3383,10 @@  static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 * Otherwise the list_lock will synchronize with
 				 * other processors updating the list of slabs.
 				 */
-				spin_lock_irqsave(&n->list_lock, flags);
+				if (!kmem_cache_debug(s))
+					spin_lock_irqsave(&n->list_lock, flags);
 
+				to_take_off = 1;
 			}
 		}
 
@@ -3389,8 +3395,9 @@  static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		head, new.counters,
 		"__slab_free"));
 
-	if (likely(!n)) {
-
+	if (likely(!to_take_off)) {
+		if (kmem_cache_debug(s))
+			spin_unlock_irqrestore(&n->list_lock, flags);
 		if (likely(was_frozen)) {
 			/*
 			 * The list lock was not taken therefore no list