| Message ID | 20240425205516.work.220-kees@kernel.org (mailing list archive) |
|---|---|
| State | Mainlined |
| Commit | 3b89ec41747a6b6b8c7b6ad4fe13e063cb6dfe7f |
| Series | mm/slub: Avoid recursive loop with kmemleak |
On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> The system will immediate fill up stack and crash when both
> CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> Avoid allocation tagging of kmemleak caches, otherwise recursive
> allocation tracking occurs.
>
> Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Kent Overstreet <kent.overstreet@linux.dev>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: linux-mm@kvack.org
> ---
>  mm/kmemleak.c | 4 ++--
>  mm/slub.c     | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index c55c2cbb6837..fdcf01f62202 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
>
>  	/* try the slab allocator first */
>  	if (object_cache) {
> -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));

What do these get accounted to, or does this now pop a warning with
CONFIG_MEM_ALLOC_PROFILING_DEBUG?

>  		if (object)
>  			return object;
>  	}
> @@ -947,7 +947,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
>  	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
>
>  	if (scan_area_cache)
> -		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
> +		area = kmem_cache_alloc_noprof(scan_area_cache, gfp_kmemleak_mask(gfp));
>
>  	raw_spin_lock_irqsave(&object->lock, flags);
>  	if (!area) {
> diff --git a/mm/slub.c b/mm/slub.c
> index a94a0507e19c..9ae032ed17ed 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2016,7 +2016,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
>  	if (!p)
>  		return NULL;
>
> -	if (s->flags & SLAB_NO_OBJ_EXT)
> +	if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
>  		return NULL;
>
>  	if (flags & __GFP_NO_OBJ_EXT)
> --
> 2.34.1
>
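For context on the question above: with CONFIG_MEM_ALLOC_PROFILING, `kmem_cache_alloc()` is not a plain function but a macro that wraps the underlying `kmem_cache_alloc_noprof()` in a per-callsite tagging hook. The sketch below paraphrases the shape of that machinery from memory (the real definitions live in include/linux/slab.h and include/linux/alloc_tag.h; the exact macro bodies here are simplified assumptions, not quotes):

```c
/*
 * Paraphrased sketch of the allocation-profiling wrappers; simplified,
 * not copied from the kernel sources.
 */
#define kmem_cache_alloc(...)	alloc_hooks(kmem_cache_alloc_noprof(__VA_ARGS__))

#define alloc_hooks(_do_alloc)						\
({									\
	DEFINE_ALLOC_TAG(_tag);		/* static per-callsite counter */ \
	struct alloc_tag *_old = alloc_tag_save(&_tag);			\
	typeof(_do_alloc) _res = _do_alloc; /* charged to _tag */	\
	alloc_tag_restore(&_tag, _old);					\
	_res;								\
})
```

Calling `kmem_cache_alloc_noprof()` directly therefore skips tagging entirely: kmemleak's metadata allocations show up in no profile, and whether CONFIG_MEM_ALLOC_PROFILING_DEBUG warns about such untagged objects is exactly what Kent's question probes.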
On Thu, Apr 25, 2024 at 2:09 PM Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> > The system will immediate fill up stack and crash when both
> > CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> > Avoid allocation tagging of kmemleak caches, otherwise recursive
> > allocation tracking occurs.
> >
> > Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> > Cc: Suren Baghdasaryan <surenb@google.com>
> > Cc: Kent Overstreet <kent.overstreet@linux.dev>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Cc: linux-mm@kvack.org
> > ---
> >  mm/kmemleak.c | 4 ++--
> >  mm/slub.c     | 2 +-
> >  2 files changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> > index c55c2cbb6837..fdcf01f62202 100644
> > --- a/mm/kmemleak.c
> > +++ b/mm/kmemleak.c
> > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> >
> >  	/* try the slab allocator first */
> >  	if (object_cache) {
> > -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
>
> What do these get accounted to, or does this now pop a warning with
> CONFIG_MEM_ALLOC_PROFILING_DEBUG?

Thanks for the fix, Kees!
I'll look into this recursion more closely to see if there is a better
way to break it. As a stopgap measure seems ok to me. I also think
it's unlikely that one would use both tracking mechanisms on the same
system.

> >  		if (object)
> >  			return object;
> >  	}
> > @@ -947,7 +947,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
> >  	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
> >
> >  	if (scan_area_cache)
> > -		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
> > +		area = kmem_cache_alloc_noprof(scan_area_cache, gfp_kmemleak_mask(gfp));
> >
> >  	raw_spin_lock_irqsave(&object->lock, flags);
> >  	if (!area) {
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a94a0507e19c..9ae032ed17ed 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2016,7 +2016,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
> >  	if (!p)
> >  		return NULL;
> >
> > -	if (s->flags & SLAB_NO_OBJ_EXT)
> > +	if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
> >  		return NULL;
> >
> >  	if (flags & __GFP_NO_OBJ_EXT)
> > --
> > 2.34.1
> >
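The recursion under discussion can be modeled outside the kernel. Below is a minimal, self-contained userspace C sketch; every name is a stand-in loosely mirroring its kernel counterpart, not real kernel code. Kmemleak tracks each allocation by allocating a metadata object, and allocation profiling tracks each allocation by allocating extension metadata, so each tracker's bookkeeping re-enters the allocator and trips the other tracker unless caches marked SLAB_NOLEAKTRACE are exempted. (The real code also passes __GFP_NO_OBJ_EXT so profiling does not track its own metadata; that edge is omitted here.)

```c
/*
 * Standalone userspace sketch of the recursive loop; NOT kernel code.
 * All names are stand-ins loosely mirroring their kernel counterparts.
 */
#include <stdio.h>
#include <stdlib.h>

#define SLAB_NOLEAKTRACE (1U << 0)	/* plays the role of the kernel flag */

struct kmem_cache {
	unsigned int flags;
	const char *name;
};

static void *cache_alloc(struct kmem_cache *s);

/* kmemleak's metadata cache; created with SLAB_NOLEAKTRACE in the kernel */
static struct kmem_cache object_cache = { SLAB_NOLEAKTRACE, "kmemleak_object" };

/* allocation-profiling hook: needs metadata, so it re-enters the allocator */
static void prepare_slab_obj_exts_hook(struct kmem_cache *s)
{
	/* the fix: do not tag kmemleak's own caches */
	if (s->flags & SLAB_NOLEAKTRACE)
		return;
	free(cache_alloc(&object_cache));	/* stand-in for the obj_exts alloc */
}

/* kmemleak hook: records the allocation in a kmemleak_object */
static void kmemleak_alloc_hook(struct kmem_cache *s)
{
	if (s->flags & SLAB_NOLEAKTRACE)	/* kmemleak already skips itself */
		return;
	free(cache_alloc(&object_cache));	/* stand-in for mem_pool_alloc() */
}

static void *cache_alloc(struct kmem_cache *s)
{
	void *p = malloc(16);

	prepare_slab_obj_exts_hook(s);	/* without its check: endless mutual recursion */
	kmemleak_alloc_hook(s);
	return p;
}

int main(void)
{
	struct kmem_cache user_cache = { 0, "user_cache" };

	free(cache_alloc(&user_cache));
	printf("allocation completed without recursing\n");
	return 0;
}
```

Compiled with `cc sketch.c && ./a.out`, this terminates only because both hooks skip the NOLEAKTRACE-flagged cache; deleting the check in `prepare_slab_obj_exts_hook()` reproduces the unbounded recursion the patch fixes.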
On Thu, 25 Apr 2024 14:30:55 -0700 Suren Baghdasaryan <surenb@google.com> wrote:

> > > --- a/mm/kmemleak.c
> > > +++ b/mm/kmemleak.c
> > > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> > >
> > >  	/* try the slab allocator first */
> > >  	if (object_cache) {
> > > -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > > +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
> >
> > What do these get accounted to, or does this now pop a warning with
> > CONFIG_MEM_ALLOC_PROFILING_DEBUG?
>
> Thanks for the fix, Kees!
> I'll look into this recursion more closely to see if there is a better
> way to break it. As a stopgap measure seems ok to me. I also think
> it's unlikely that one would use both tracking mechanisms on the same
> system.

I'd really like to start building mm-stable without having to route
around memprofiling.  How about I include Kees's patch in that for now?
On Thu, Apr 25, 2024 at 04:49:17PM -0700, Andrew Morton wrote:
> On Thu, 25 Apr 2024 14:30:55 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
>
> > > > --- a/mm/kmemleak.c
> > > > +++ b/mm/kmemleak.c
> > > > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> > > >
> > > >  	/* try the slab allocator first */
> > > >  	if (object_cache) {
> > > > -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > > > +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
> > >
> > > What do these get accounted to, or does this now pop a warning with
> > > CONFIG_MEM_ALLOC_PROFILING_DEBUG?
> >
> > Thanks for the fix, Kees!
> > I'll look into this recursion more closely to see if there is a better
> > way to break it. As a stopgap measure seems ok to me. I also think
> > it's unlikely that one would use both tracking mechanisms on the same
> > system.
>
> I'd really like to start building mm-stable without having to route
> around memprofiling.  How about I include Kees's patch in that for now?

Agreed
On Thu, Apr 25, 2024 at 5:19 PM Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> On Thu, Apr 25, 2024 at 04:49:17PM -0700, Andrew Morton wrote:
> > On Thu, 25 Apr 2024 14:30:55 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
> >
> > > > > --- a/mm/kmemleak.c
> > > > > +++ b/mm/kmemleak.c
> > > > > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> > > > >
> > > > >  	/* try the slab allocator first */
> > > > >  	if (object_cache) {
> > > > > -		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > > > > +		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
> > > >
> > > > What do these get accounted to, or does this now pop a warning with
> > > > CONFIG_MEM_ALLOC_PROFILING_DEBUG?
> > >
> > > Thanks for the fix, Kees!
> > > I'll look into this recursion more closely to see if there is a better
> > > way to break it. As a stopgap measure seems ok to me. I also think
> > > it's unlikely that one would use both tracking mechanisms on the same
> > > system.
> >
> > I'd really like to start building mm-stable without having to route
> > around memprofiling.  How about I include Kees's patch in that for now?
>
> Agreed

Yes, please. When I figure out a better way, I'll post a separate patch.
Thanks!
On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> The system will immediate fill up stack and crash when both
> CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> Avoid allocation tagging of kmemleak caches, otherwise recursive
> allocation tracking occurs.
>
> Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> Signed-off-by: Kees Cook <keescook@chromium.org>

For the kmemleak bits:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On Fri, Apr 26, 2024 at 03:52:24PM +0100, Catalin Marinas wrote:
> On Thu, Apr 25, 2024 at 01:55:23PM -0700, Kees Cook wrote:
> > The system will immediate fill up stack and crash when both

Oops, typo from me: "immediately". You'd never guess I'm a native
English speaker! :)

> > CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
> > Avoid allocation tagging of kmemleak caches, otherwise recursive
> > allocation tracking occurs.
> >
> > Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
> > Signed-off-by: Kees Cook <keescook@chromium.org>
>
> For the kmemleak bits:
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

-Kees
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index c55c2cbb6837..fdcf01f62202 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
 
 	/* try the slab allocator first */
 	if (object_cache) {
-		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
+		object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
 		if (object)
 			return object;
 	}
@@ -947,7 +947,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
 	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
 
 	if (scan_area_cache)
-		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
+		area = kmem_cache_alloc_noprof(scan_area_cache, gfp_kmemleak_mask(gfp));
 
 	raw_spin_lock_irqsave(&object->lock, flags);
 	if (!area) {
diff --git a/mm/slub.c b/mm/slub.c
index a94a0507e19c..9ae032ed17ed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2016,7 +2016,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
 	if (!p)
 		return NULL;
 
-	if (s->flags & SLAB_NO_OBJ_EXT)
+	if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
 		return NULL;
 
 	if (flags & __GFP_NO_OBJ_EXT)
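The patch cuts two separate edges of the cycle: the mm/kmemleak.c hunks keep allocation profiling out of kmemleak's own allocation call sites, while the mm/slub.c hunk stops the profiling hook from attaching tags to objects of any cache flagged SLAB_NOLEAKTRACE. Testing that flag identifies kmemleak's caches because kmemleak already creates them with it so as not to track its own metadata, roughly as in the fragment below (paraphrased from mm/kmemleak.c's init path; treat the exact call sites as an assumption, not a quote):

```c
/* Sketch of kmemleak's cache creation: SLAB_NOLEAKTRACE is the flag that
 * both kmemleak itself and, after this patch, the profiling hook key off. */
object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
```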
The system will immediately fill up the stack and crash when both
CONFIG_DEBUG_KMEMLEAK and CONFIG_MEM_ALLOC_PROFILING are enabled.
Avoid allocation tagging of kmemleak caches, otherwise recursive
allocation tracking occurs.

Fixes: 279bb991b4d9 ("mm/slab: add allocation accounting into slab allocation and free paths")
Signed-off-by: Kees Cook <keescook@chromium.org>
---
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 mm/kmemleak.c | 4 ++--
 mm/slub.c     | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)