
[027/147] mm, slab: split out the cpu offline variant of flush_slab()

Message ID 20210908025423.Cch2XKUmy%akpm@linux-foundation.org (mailing list archive)
State New
Series [001/147] mm, slub: don't call flush_all() from slab_debug_trace_open()

Commit Message

Andrew Morton Sept. 8, 2021, 2:54 a.m. UTC
From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, slab: split out the cpu offline variant of flush_slab()

flush_slab() is called either as part of an IPI handler on a given live cpu, or
as a cleanup on behalf of another cpu that went offline.  The first case
needs to protect updating the kmem_cache_cpu fields with disabled irqs.
Currently the whole call happens with irqs disabled by the IPI handler,
but the following patch will change from IPI to workqueue, and
flush_slab() will have to disable irqs (to be replaced with a local lock
later) in the critical part.

To prepare for this change, replace the call to flush_slab() for the dead
cpu handling with an open-coded variant that will neither disable irqs nor
take a local lock.

Link: https://lkml.kernel.org/r/20210904105003.11688-28-vbabka@suse.cz
Suggested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <quic_qiancai@quicinc.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

Patch

--- a/mm/slub.c~mm-slab-split-out-the-cpu-offline-variant-of-flush_slab
+++ a/mm/slub.c
@@ -2511,9 +2511,17 @@  static inline void flush_slab(struct kme
 static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+	void *freelist = c->freelist;
+	struct page *page = c->page;
 
-	if (c->page)
-		flush_slab(s, c);
+	c->page = NULL;
+	c->freelist = NULL;
+	c->tid = next_tid(c->tid);
+
+	if (page) {
+		deactivate_slab(s, page, freelist);
+		stat(s, CPUSLAB_FLUSH);
+	}
 
 	unfreeze_partials_cpu(s, c);
 }