Message ID | 20200504033407.2385-1-haokexin@gmail.com (mailing list archive)
---|---
State | New, archived
Series | [v5.6-rt] mm: slub: Always flush the delayed empty slubs in flush_all()
On 2020-05-04 11:34:07 [+0800], Kevin Hao wrote:
> After commit f0b231101c94 ("mm/SLUB: delay giving back empty slubs to …
> Fixes: f0b231101c94 ("mm/SLUB: delay giving back empty slubs to IRQ enabled regions")
> Signed-off-by: Kevin Hao <haokexin@gmail.com>

Applied, thanks.

Sebastian
On Mon, 4 May 2020, Kevin Hao wrote:

> After commit f0b231101c94 ("mm/SLUB: delay giving back empty slubs to
> IRQ enabled regions"), when free_slab() is invoked with IRQs disabled,
> the empty slubs are moved to a per-CPU list and freed later, once IRQs
> are enabled again. But in the current code, there is a check for
> whether the CPU actually has a cpu slab before its delayed empty slubs
> are flushed; this can leave a reference to an already released
> kmem_cache in a scenario like the one below:
>
>            cpu 0                          cpu 1
>   kmem_cache_destroy()
>     flush_all()
>      --->IPI                        flush_cpu_slab()
>                                       flush_slab()
>                                         deactivate_slab()
>                                           discard_slab()
>                                             free_slab()
>                                     c->page = NULL;
>     for_each_online_cpu(cpu)
>       if (!has_cpu_slab(1, s))
>         continue
>       this skips flushing the delayed
>       empty slubs released by cpu 1
>     kmem_cache_free(kmem_cache, s)
>
>                                     kmalloc()
>                                       __slab_alloc()
>                                         free_delayed()
>                                           __free_slab()
>                                             reference to released kmem_cache
>
> Fixes: f0b231101c94 ("mm/SLUB: delay giving back empty slubs to IRQ enabled regions")
> Signed-off-by: Kevin Hao <haokexin@gmail.com>

Acked-by: David Rientjes <rientjes@google.com>
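For readers without the RT tree at hand, the last step of the cpu 1 column above is where the stale pointer is actually touched: free_delayed() drains the per-CPU list of deferred empty slubs and hands each page back through __free_slab(), which dereferences the page's slab_cache pointer. Below is a minimal sketch of that helper, assuming the shape it has in the RT commit that introduced the delayed-free scheme; the exact body is an approximation, not a quote of the tree:

static void free_delayed(struct list_head *h)
{
	while (!list_empty(h)) {
		struct page *page = list_first_entry(h, struct page, lru);

		list_del(&page->lru);
		/*
		 * page->slab_cache still points at the kmem_cache the page
		 * came from; if kmem_cache_destroy() has already freed that
		 * cache, this dereference is the use-after-free described in
		 * the scenario above.
		 */
		__free_slab(page->slab_cache, page);
	}
}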
diff --git a/mm/slub.c b/mm/slub.c
index 15c194ff16e6..83b29bf71fd0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2402,9 +2402,6 @@ static void flush_all(struct kmem_cache *s)
 	for_each_online_cpu(cpu) {
 		struct slub_free_list *f;
 
-		if (!has_cpu_slab(cpu, s))
-			continue;
-
 		f = &per_cpu(slub_free_list, cpu);
 		raw_spin_lock_irq(&f->lock);
 		list_splice_init(&f->list, &tofree);
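To see the effect of the removal in context, the sketch below shows roughly what the flush_all() loop looks like with the patch applied. The lines down to list_splice_init() come straight from the hunk context above; the tofree declaration, the IPI step, the unlock and the free_delayed() call are assumed from the surrounding RT code and are not visible in the hunk:

static void flush_all(struct kmem_cache *s)
{
	LIST_HEAD(tofree);	/* assumed: collects the delayed empty slubs */
	int cpu;

	/* ... IPI each CPU to flush its cpu slab (unchanged by this patch) ... */

	for_each_online_cpu(cpu) {
		struct slub_free_list *f;

		/*
		 * The has_cpu_slab() early-continue is gone: the delayed
		 * empty slubs are now drained for every online CPU, even one
		 * whose c->page was already set to NULL as in the race above.
		 */
		f = &per_cpu(slub_free_list, cpu);
		raw_spin_lock_irq(&f->lock);
		list_splice_init(&f->list, &tofree);
		raw_spin_unlock_irq(&f->lock);	/* assumed */
		free_delayed(&tofree);		/* assumed */
	}
}

With the check gone, every per-CPU slub_free_list is emptied before kmem_cache_destroy() goes on to free the kmem_cache itself, so no deferred page can later be freed against a destroyed cache.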
After commit f0b231101c94 ("mm/SLUB: delay giving back empty slubs to
IRQ enabled regions"), when free_slab() is invoked with IRQs disabled,
the empty slubs are moved to a per-CPU list and freed later, once IRQs
are enabled again. But in the current code, there is a check for whether
the CPU actually has a cpu slab before its delayed empty slubs are
flushed; this can leave a reference to an already released kmem_cache in
a scenario like the one below:

           cpu 0                          cpu 1
  kmem_cache_destroy()
    flush_all()
     --->IPI                        flush_cpu_slab()
                                      flush_slab()
                                        deactivate_slab()
                                          discard_slab()
                                            free_slab()
                                    c->page = NULL;
    for_each_online_cpu(cpu)
      if (!has_cpu_slab(1, s))
        continue
      this skips flushing the delayed
      empty slubs released by cpu 1
    kmem_cache_free(kmem_cache, s)

                                    kmalloc()
                                      __slab_alloc()
                                        free_delayed()
                                          __free_slab()
                                            reference to released kmem_cache

Fixes: f0b231101c94 ("mm/SLUB: delay giving back empty slubs to IRQ enabled regions")
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 mm/slub.c | 3 ---
 1 file changed, 3 deletions(-)