Message ID | 20240109221234.90929-1-andrey.konovalov@linux.dev |
---|---|
State | New |
Series | [mm] kasan: avoid resetting aux_lock |
On Tue, 9 Jan 2024 at 23:12, <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@gmail.com>
>
> With commit 63b85ac56a64 ("kasan: stop leaking stack trace handles"),
> KASAN zeroes out alloc meta when an object is freed. The zeroed out data
> purposefully includes alloc and auxiliary stack traces but also
> accidentally includes aux_lock.
>
> As aux_lock is only initialized for each object slot during slab
> creation, when the freed slot is reallocated, saving auxiliary stack
> traces for the new object leads to lockdep reports when taking the
> zeroed out aux_lock.
>
> Arguably, we could reinitialize aux_lock when the object is reallocated,
> but a simpler solution is to avoid zeroing out aux_lock when an object
> gets freed.
>
> Reported-by: Paul E. McKenney <paulmck@kernel.org>
> Closes: https://lore.kernel.org/linux-next/5cc0f83c-e1d6-45c5-be89-9b86746fe731@paulmck-laptop/
> Fixes: 63b85ac56a64 ("kasan: stop leaking stack trace handles")
> Signed-off-by: Andrey Konovalov <andreyknvl@gmail.com>

Reviewed-by: Marco Elver <elver@google.com>

> ---
>  mm/kasan/generic.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 24c13dfb1e94..df6627f62402 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -487,6 +487,7 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
>  		__memset(alloc_meta, 0, sizeof(*alloc_meta));
>
>  		/*
> +		 * Prepare the lock for saving auxiliary stack traces.
>  		 * Temporarily disable KASAN bug reporting to allow instrumented
>  		 * raw_spin_lock_init to access aux_lock, which resides inside
>  		 * of a redzone.
> @@ -510,8 +511,13 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
>  	stack_depot_put(meta->aux_stack[0]);
>  	stack_depot_put(meta->aux_stack[1]);
>
> -	/* Zero out alloc meta to mark it as invalid. */
> -	__memset(meta, 0, sizeof(*meta));
> +	/*
> +	 * Zero out alloc meta to mark it as invalid but keep aux_lock
> +	 * initialized to avoid having to reinitialize it when another object
> +	 * is allocated in the same slot.
> +	 */
> +	__memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
> +	__memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
>  }
>
>  static void release_free_meta(const void *object, struct kasan_free_meta *meta)
> --
> 2.25.1
>
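For context, the failure mode described in the commit message is generic to any lock that lives inside a structure wiped by memset(): the debug state set up once by raw_spin_lock_init() (lockdep class, magic) is destroyed, so the next acquisition operates on an uninitialized lock. Below is a minimal userspace sketch of the same pattern; the types and the `magic` field are illustrative stand-ins, not the kernel's definitions.

```c
#include <stdio.h>
#include <string.h>

/*
 * Stand-in for a lock whose debug state (here, `magic`) is written
 * once by its init function, the way lockdep state is written by
 * raw_spin_lock_init().
 */
struct fake_lock {
	unsigned int magic;	/* set by init, checked on acquire */
};

#define LOCK_MAGIC 0xdead4eadu

struct fake_meta {
	unsigned long alloc_track;	/* safe to clear on free */
	unsigned long aux_stack[2];	/* safe to clear on free */
	struct fake_lock aux_lock;	/* must survive the free path */
};

static void fake_lock_init(struct fake_lock *l)
{
	l->magic = LOCK_MAGIC;
}

static int fake_lock_acquire(struct fake_lock *l)
{
	/* A zeroed-out lock fails this check, as lockdep would report. */
	return l->magic == LOCK_MAGIC ? 0 : -1;
}

int main(void)
{
	struct fake_meta meta;

	fake_lock_init(&meta.aux_lock);	/* done once, at "slab creation" */

	/* Buggy free path: wipes the lock along with the stack traces. */
	memset(&meta, 0, sizeof(meta));

	/* "Reallocation" then tries to take the clobbered lock. */
	if (fake_lock_acquire(&meta.aux_lock))
		printf("acquire of zeroed lock: init state was destroyed\n");
	return 0;
}
```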
On Tue, Jan 09, 2024 at 11:12:34PM +0100, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@gmail.com>
>
> With commit 63b85ac56a64 ("kasan: stop leaking stack trace handles"),
> KASAN zeroes out alloc meta when an object is freed. The zeroed out data
> purposefully includes alloc and auxiliary stack traces but also
> accidentally includes aux_lock.
>
> As aux_lock is only initialized for each object slot during slab
> creation, when the freed slot is reallocated, saving auxiliary stack
> traces for the new object leads to lockdep reports when taking the
> zeroed out aux_lock.
>
> Arguably, we could reinitialize aux_lock when the object is reallocated,
> but a simpler solution is to avoid zeroing out aux_lock when an object
> gets freed.
>
> Reported-by: Paul E. McKenney <paulmck@kernel.org>
> Closes: https://lore.kernel.org/linux-next/5cc0f83c-e1d6-45c5-be89-9b86746fe731@paulmck-laptop/
> Fixes: 63b85ac56a64 ("kasan: stop leaking stack trace handles")
> Signed-off-by: Andrey Konovalov <andreyknvl@gmail.com>

Very good!

Tested-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  mm/kasan/generic.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 24c13dfb1e94..df6627f62402 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -487,6 +487,7 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
>  		__memset(alloc_meta, 0, sizeof(*alloc_meta));
>
>  		/*
> +		 * Prepare the lock for saving auxiliary stack traces.
> 		 * Temporarily disable KASAN bug reporting to allow instrumented
>  		 * raw_spin_lock_init to access aux_lock, which resides inside
>  		 * of a redzone.
> @@ -510,8 +511,13 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
>  	stack_depot_put(meta->aux_stack[0]);
>  	stack_depot_put(meta->aux_stack[1]);
>
> -	/* Zero out alloc meta to mark it as invalid. */
> -	__memset(meta, 0, sizeof(*meta));
> +	/*
> +	 * Zero out alloc meta to mark it as invalid but keep aux_lock
> +	 * initialized to avoid having to reinitialize it when another object
> +	 * is allocated in the same slot.
> +	 */
> +	__memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
> +	__memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
>  }
>
>  static void release_free_meta(const void *object, struct kasan_free_meta *meta)
> --
> 2.25.1
>
```diff
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 24c13dfb1e94..df6627f62402 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -487,6 +487,7 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 		__memset(alloc_meta, 0, sizeof(*alloc_meta));
 
 		/*
+		 * Prepare the lock for saving auxiliary stack traces.
 		 * Temporarily disable KASAN bug reporting to allow instrumented
 		 * raw_spin_lock_init to access aux_lock, which resides inside
 		 * of a redzone.
@@ -510,8 +511,13 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
 	stack_depot_put(meta->aux_stack[0]);
 	stack_depot_put(meta->aux_stack[1]);
 
-	/* Zero out alloc meta to mark it as invalid. */
-	__memset(meta, 0, sizeof(*meta));
+	/*
+	 * Zero out alloc meta to mark it as invalid but keep aux_lock
+	 * initialized to avoid having to reinitialize it when another object
+	 * is allocated in the same slot.
+	 */
+	__memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
+	__memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
 }
 
 static void release_free_meta(const void *object, struct kasan_free_meta *meta)
```
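A note on the design choice: the merged fix clears the invalidating fields individually instead of reinitializing the lock on reallocation, so the allocation path never needs a raw_spin_lock_init() call (nor the kasan_disable_current()/kasan_enable_current() wrapping that call requires, since aux_lock sits inside a redzone). The following standalone sketch shows the same selective-clear pattern with simplified stand-in types; these are not the kernel's definitions.

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the fields touched in the diff above. */
struct track_stub { unsigned long pid, stack_handle; };
struct lock_stub { unsigned int magic; };	/* init-once debug state */

struct alloc_meta_stub {
	struct track_stub alloc_track;
	unsigned long aux_stack[2];
	struct lock_stub aux_lock;	/* initialized once per slot */
};

/*
 * Fixed free path: clear only the fields that mark the meta as valid,
 * deliberately leaving aux_lock untouched so its one-time
 * initialization survives until the slot is reused.
 */
static void release_alloc_meta_stub(struct alloc_meta_stub *meta)
{
	memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
	memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
}

int main(void)
{
	struct alloc_meta_stub meta = {
		.alloc_track = { 42, 0x1234 },
		.aux_stack = { 0x5678, 0x9abc },
		.aux_lock = { 0xdead4eadu },
	};

	release_alloc_meta_stub(&meta);
	/* Stack traces are gone, but the lock's init state survives. */
	printf("aux_lock.magic after release: %#x\n", meta.aux_lock.magic);
	return 0;
}
```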