Message ID | ZjCxZfD1d36zfq-R@archlinux (mailing list archive)
---|---
State | New
Series | [v2] slub: Fixes freepointer encoding for single free
On 4/30/24 10:52 AM, Nicolas Bouchinet wrote:
> From: Nicolas Bouchinet <nicolas.bouchinet@ssi.gouv.fr>
>
> Commit 284f17ac13fe ("mm/slub: handle bulk and single object freeing
> separately") splits single and bulk object freeing into two functions,
> slab_free() and slab_free_bulk(), which leads slab_free() to call
> slab_free_hook() directly instead of slab_free_freelist_hook().
>
> If `init_on_free` is set, slab_free_hook() zeroes the object.
> Afterward, if `slub_debug=F` and `CONFIG_SLAB_FREELIST_HARDENED` are
> set, the do_slab_free() slowpath executes freelist consistency
> checks and tries to decode a zeroed freepointer, which leads to a
> "Freepointer corrupt" detection in check_object().

To make this explanation complete, we should also say that with
slab_free_freelist_hook() this doesn't happen, as it always sets the
freepointer to a valid value after the zeroing.

> The object's freepointer thus needs to be avoided when it is stored
> outside the object and init_on_free is set.

It would be good to add more reasoning about why we're not just doing
the same freepointer re-init as slab_free_freelist_hook(): we decided
instead to allow check_object() to actually catch any overwrite by the
user of the allocated object, which would mean a buffer overflow
happened. And for that we need to stop wiping or re-initing the
outside-object freepointer ourselves...

> To reproduce, set `slub_debug=FU init_on_free=1 log_level=7` on the
> command line of a kernel build with `CONFIG_SLAB_FREELIST_HARDENED=y`.
>
> dmesg sample log:
> [ 10.708715] =============================================================================
> [ 10.710323] BUG kmalloc-rnd-05-32 (Tainted: G B T ): Freepointer corrupt
> [ 10.712695] -----------------------------------------------------------------------------
> [ 10.712695]
> [ 10.712695] Slab 0xffffd8bdc400d580 objects=32 used=4 fp=0xffff9d9a80356f80 flags=0x200000000000a00(workingset|slab|node=0|zone=2)
> [ 10.716698] Object 0xffff9d9a80356600 @offset=1536 fp=0x7ee4f480ce0ecd7c
> [ 10.716698]
> [ 10.716698] Bytes b4 ffff9d9a803565f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [ 10.720703] Object ffff9d9a80356600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [ 10.720703] Object ffff9d9a80356610: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [ 10.724696] Padding ffff9d9a8035666c: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [ 10.724696] Padding ffff9d9a8035667c: 00 00 00 00 ....
> [ 10.724696] FIX kmalloc-rnd-05-32: Object at 0xffff9d9a80356600 not freed
>
> Co-authored-by: Chengming Zhou <chengming.zhou@linux.dev>

So per Documentation/process/submitting-patches.rst the canonical name
is Co-developed-by:, and Chengming Zhou should respond with his
Signed-off-by:

> Signed-off-by: Nicolas Bouchinet <nicolas.bouchinet@ssi.gouv.fr>

Otherwise seems correct, thanks! So if you could just resend with an
updated changelog, that would be great.

> ---
> Changes since v1:
> https://lore.kernel.org/all/Zij_fGjRS_rK-65r@archlinux/
>
> * Skip the out-of-object freepointer if init_on_free is set instead
>   of initializing it with set_freepointer(), as suggested by
>   Vlastimil Babka.
>
> * Adapt maybe_wipe_obj_freeptr() to avoid wiping the out-of-object
>   freepointer on alloc, as suggested by Chengming Zhou.
>
> * Reword commit message.
> ---
>  mm/slub.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3aa12b9b323d..173c340ec1d3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2102,15 +2102,20 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>   *
>   * The initialization memset's clear the object and the metadata,
>   * but don't touch the SLAB redzone.
> + *
> + * The object's freepointer is also avoided if stored outside the
> + * object.
>   */
>  if (unlikely(init)) {
>          int rsize;
> +        unsigned int inuse;
>
> +        inuse = get_info_end(s);
>          if (!kasan_has_integrated_init())
>                  memset(kasan_reset_tag(x), 0, s->object_size);
>          rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
> -        memset((char *)kasan_reset_tag(x) + s->inuse, 0,
> -               s->size - s->inuse - rsize);
> +        memset((char *)kasan_reset_tag(x) + inuse, 0,
> +               s->size - inuse - rsize);
>  }
>  /* KASAN might put x into memory quarantine, delaying its reuse. */
>  return !kasan_slab_free(s, x, init);
> @@ -3789,7 +3794,7 @@ static void *__slab_alloc_node(struct kmem_cache *s,
>  static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>                                                     void *obj)
>  {
> -        if (unlikely(slab_want_init_on_free(s)) && obj)
> +        if (unlikely(slab_want_init_on_free(s)) && obj && !freeptr_outside_object(s))
>                  memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
>                         0, sizeof(void *));
>  }
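To make the failure mode described above concrete: with CONFIG_SLAB_FREELIST_HARDENED the stored freepointer is kept XORed with a per-cache random value and the byte-swapped address of its storage slot, so a memset() that zeroes the slot no longer decodes to a pointer into the slab, and the debug check reports "Freepointer corrupt". Below is a minimal user-space sketch of that obfuscation; the cache structure, random value and slot offset are invented for illustration, and only the object addresses are borrowed from the dmesg log above.

```c
/*
 * Minimal user-space model of the CONFIG_SLAB_FREELIST_HARDENED
 * freepointer obfuscation.  Names, the per-cache random value and the
 * freepointer offset are illustrative assumptions; only the object
 * addresses are taken from the dmesg log above.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_cache {
	uint64_t random;		/* models the per-cache s->random */
};

/* The kernel byte-swaps the storage address before XORing it in. */
static uint64_t swab64(uint64_t x)
{
	return __builtin_bswap64(x);
}

/* Obfuscate the next-free pointer before storing it in the slot. */
static uint64_t encode_fp(const struct fake_cache *s, uint64_t ptr, uint64_t addr)
{
	return ptr ^ s->random ^ swab64(addr);
}

/* Reverse the obfuscation when the freelist is walked or checked. */
static uint64_t decode_fp(const struct fake_cache *s, uint64_t stored, uint64_t addr)
{
	return stored ^ s->random ^ swab64(addr);
}

int main(void)
{
	struct fake_cache s = { .random = 0x8cafe1234abcd567ULL };
	uint64_t obj  = 0xffff9d9a80356600ULL;	/* freed object (from the log) */
	uint64_t next = 0xffff9d9a80356f80ULL;	/* next free object (from the log) */
	uint64_t addr = obj + 0x20;		/* hypothetical out-of-object slot */

	uint64_t stored = encode_fp(&s, next, addr);
	printf("decoded before wipe: 0x%016llx\n",
	       (unsigned long long)decode_fp(&s, stored, addr));

	/* init_on_free wiping the metadata area also clears this slot... */
	stored = 0;

	/*
	 * ...so the decode now yields random ^ swab64(addr), which is not
	 * a pointer into the slab; the kernel's validity check then
	 * reports "Freepointer corrupt".
	 */
	printf("decoded after wipe:  0x%016llx\n",
	       (unsigned long long)decode_fp(&s, stored, addr));
	return 0;
}
```

This also illustrates why re-encoding a valid pointer after the wipe (as slab_free_freelist_hook() effectively does) silences the check, and why leaving the slot untouched instead preserves check_object()'s ability to catch a real overwrite by the object's user.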
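For reference, a standalone sketch of the layout arithmetic behind the fix: freeptr_outside_object() is true when s->offset >= s->inuse, and in that case get_info_end() returns s->inuse + sizeof(void *), so starting the metadata wipe there (and skipping the alloc-side wipe in maybe_wipe_obj_freeptr()) leaves the encoded freepointer untouched. The struct and the kmalloc-32-style numbers below are hypothetical; only the helper logic mirrors mm/slub.c.

```c
/*
 * Standalone sketch of the wipe-range arithmetic the fix relies on.
 * Field names mirror struct kmem_cache (offset, inuse, size) and the
 * helpers mirror freeptr_outside_object()/get_info_end(), but the
 * struct and the numbers are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct cache_layout {
	unsigned int offset;	/* where the freepointer is stored */
	unsigned int inuse;	/* end of the object proper */
	unsigned int size;	/* full slot size, metadata included */
};

/* True when debugging/poisoning pushes the freepointer past the object. */
static bool fp_outside_object(const struct cache_layout *s)
{
	return s->offset >= s->inuse;
}

/* Start of the tracking metadata, i.e. just past an out-of-object
 * freepointer when there is one. */
static unsigned int info_end(const struct cache_layout *s)
{
	return fp_outside_object(s) ? s->inuse + sizeof(void *) : s->inuse;
}

int main(void)
{
	/* Hypothetical 32-byte objects with slub_debug=F: object at [0, 32),
	 * freepointer right after it, tracking metadata after that. */
	struct cache_layout s = { .offset = 32, .inuse = 32, .size = 64 };
	unsigned int start = info_end(&s);

	printf("object wipe:    [0, %u)\n", s.inuse);
	printf("skipped word:   [%u, %u)  <- encoded freepointer survives\n",
	       s.inuse, start);
	printf("metadata wipe:  [%u, %u)\n", start, s.size);
	return 0;
}
```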