
[v2] mm/slab.c: add node spinlock protection in __cache_free_alien

Message ID 20200730025226.10350-1-qiang.zhang@windriver.com (mailing list archive)
State New, archived

Commit Message

Zhang, Qiang July 30, 2020, 2:52 a.m. UTC
From: Zhang Qiang <qiang.zhang@windriver.com>

When a CPU hotplug operation fails, cpuup_canceled() is called. It
detaches and frees the alien caches of the canceled CPU's node, and
that node may be the same node whose alien cache __cache_free_alien()
is concurrently using. cpuup_canceled() clears n->alien under
n->list_lock, but __cache_free_alien() reads n->alien and
n->alien[page_node] with no lock held, so it can observe a pointer
that is about to be freed and then operate on freed memory. Take
n->list_lock around the alien-cache lookup in __cache_free_alien() to
close this race.
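
The teardown side looks roughly like this (paraphrased from the
cpuup_canceled() path in mm/slab.c; a simplified sketch for
illustration, not verbatim kernel code):

	/* cpuup_canceled(), simplified sketch */
	n = get_node(cachep, node);
	spin_lock_irq(&n->list_lock);
	...
	alien = n->alien;
	n->alien = NULL;		/* detach under n->list_lock */
	spin_unlock_irq(&n->list_lock);
	...
	if (alien) {
		drain_alien_cache(cachep, alien);
		free_alien_cache(alien);	/* alien caches are gone */
	}

If __cache_free_alien() samples n->alien[page_node] just before the
detach and only then takes alien->lock, free_alien_cache() may already
have run and the freeing path touches freed memory. Holding
n->list_lock across the lookup orders the read against the detach.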

Fixes: 6731d4f12315 ("slab: Convert to hotplug state machine")
Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
---
 v1->v2:
 update submission information and the Fixes tag.

 mm/slab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Patch

diff --git a/mm/slab.c b/mm/slab.c
index a89633603b2d..290523c90b4e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -759,8 +759,10 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 
 	n = get_node(cachep, node);
 	STATS_INC_NODEFREES(cachep);
+	spin_lock(&n->list_lock);
 	if (n->alien && n->alien[page_node]) {
 		alien = n->alien[page_node];
+		spin_unlock(&n->list_lock);
 		ac = &alien->ac;
 		spin_lock(&alien->lock);
 		if (unlikely(ac->avail == ac->limit)) {
@@ -769,14 +771,15 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 		}
 		ac->entry[ac->avail++] = objp;
 		spin_unlock(&alien->lock);
-		slabs_destroy(cachep, &list);
 	} else {
+		spin_unlock(&n->list_lock);
 		n = get_node(cachep, page_node);
 		spin_lock(&n->list_lock);
 		free_block(cachep, &objp, 1, page_node, &list);
 		spin_unlock(&n->list_lock);
-		slabs_destroy(cachep, &list);
 	}
+
+	slabs_destroy(cachep, &list);
 	return 1;
 }
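
For reference, with this patch applied the function reads roughly as
follows. This is a reconstruction from the hunks above plus the
surrounding mainline code; the declarations and the context elided
between the hunks are paraphrased, not quoted verbatim:

static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
			      int node, int page_node)
{
	struct kmem_cache_node *n;
	struct alien_cache *alien = NULL;
	struct array_cache *ac;
	LIST_HEAD(list);

	n = get_node(cachep, node);
	STATS_INC_NODEFREES(cachep);
	/* Hold n->list_lock across the n->alien lookup so it cannot race
	 * with cpuup_canceled() detaching and freeing the alien caches. */
	spin_lock(&n->list_lock);
	if (n->alien && n->alien[page_node]) {
		alien = n->alien[page_node];
		spin_unlock(&n->list_lock);
		ac = &alien->ac;
		spin_lock(&alien->lock);
		if (unlikely(ac->avail == ac->limit)) {
			STATS_INC_ACOVERFLOW(cachep);
			__drain_alien_cache(cachep, ac, page_node, &list);
		}
		ac->entry[ac->avail++] = objp;
		spin_unlock(&alien->lock);
	} else {
		spin_unlock(&n->list_lock);
		n = get_node(cachep, page_node);
		spin_lock(&n->list_lock);
		free_block(cachep, &objp, 1, page_node, &list);
		spin_unlock(&n->list_lock);
	}

	/* Both branches may queue pages on &list; destroy them once here. */
	slabs_destroy(cachep, &list);
	return 1;
}

Hoisting slabs_destroy() out of both branches is a small cleanup the
patch folds in; behavior is unchanged, since each branch previously
made the same call as its final step.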