Message ID | 20221025233421.you.825-kees@kernel.org (mailing list archive)
---|---
State | New |
Series | [v3] mempool: Do not use ksize() for poisoning
On 10/26/22 01:36, Kees Cook wrote:
> Nothing appears to be using ksize() within the kmalloc-backed mempools
> except the mempool poisoning logic. Use the actual pool size instead
> of the ksize() to avoid needing any special handling of the memory as
> needed by KASAN, UBSAN_BOUNDS, nor FORTIFY_SOURCE.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Link: https://lore.kernel.org/lkml/f4fc52c4-7c18-1d76-0c7a-4058ea2486b9@suse.cz/
> Cc: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Marco Elver <elver@google.com>
> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
> v3: remove ksize() calls instead of adding kmalloc_roundup_size() calls (vbabka)
> v2: https://lore.kernel.org/lkml/20221018090323.never.897-kees@kernel.org/
> v1: https://lore.kernel.org/lkml/20220923202822.2667581-14-keescook@chromium.org/
> ---
>  mm/mempool.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> [quoted diff snipped; identical to the patch shown below]
On 10/26/22 12:02, Vlastimil Babka wrote:
> On 10/26/22 01:36, Kees Cook wrote:
>> Nothing appears to be using ksize() within the kmalloc-backed mempools
>> except the mempool poisoning logic. Use the actual pool size instead
>> of the ksize() to avoid needing any special handling of the memory as
>> needed by KASAN, UBSAN_BOUNDS, nor FORTIFY_SOURCE.
>>
>> [quoted tags snipped]
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Ah, and since the subject was updated too, note this is supposed to
replace/fix up the patch in mm-unstable:
mempool-use-kmalloc_size_roundup-to-match-ksize-usage.patch

>> [quoted changelog and diff snipped; identical to the patch shown below]
Greeting,

FYI, we noticed WARNING:at_kernel/locking/lockdep.c:#__lock_acquire due to commit (built with clang-14):

commit: 216fd003b5aca1c27d62df679cd722730b886919 ("[PATCH v3] mempool: Do not use ksize() for poisoning")
url: https://github.com/intel-lab-lkp/linux/commits/Kees-Cook/mempool-Do-not-use-ksize-for-poisoning/20221026-073834
base: https://git.kernel.org/cgit/linux/kernel/git/kees/linux.git for-next/pstore
patch link: https://lore.kernel.org/linux-mm/20221025233421.you.825-kees@kernel.org
patch subject: [PATCH v3] mempool: Do not use ksize() for poisoning

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):

If you fix the issue, kindly add following tag
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Link: https://lore.kernel.org/oe-lkp/202210271449.445fce6-oliver.sang@intel.com

[ 4.181184][ T1] ------------[ cut here ]------------
[ 4.181613][ T1] DEBUG_LOCKS_WARN_ON(1)
[ 4.181613][ T1] WARNING: CPU: 1 PID: 1 at kernel/locking/lockdep.c:231 __lock_acquire (lockdep.c:?)
[ 4.181613][ T1] Modules linked in:
[ 4.181613][ T1] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 6.1.0-rc1-00012-g216fd003b5ac #1
[ 4.181613][ T1] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 4.181613][ T1] EIP: __lock_acquire (lockdep.c:?)
[ 4.181613][ T1] Code: bd 96 00 85 c0 0f 84 b4 02 00 00 83 3d b8 c0 4c 45 00 0f 85 a7 02 00 00 68 c3 07 29 44 68 44 7b 13 44 e8 a7 53 f8 ff 83 c4 08 <0f> 0b 8b 4d f0 eb 09 6b c0 64 8d b0 10 38 71 45 8a 5e 60 b8 ff 1f

All code
========
   0:	bd 96 00 85 c0       	mov    $0xc0850096,%ebp
   5:	0f 84 b4 02 00 00    	je     0x2bf
   b:	83 3d b8 c0 4c 45 00 	cmpl   $0x0,0x454cc0b8(%rip)        # 0x454cc0ca
  12:	0f 85 a7 02 00 00    	jne    0x2bf
  18:	68 c3 07 29 44       	pushq  $0x442907c3
  1d:	68 44 7b 13 44       	pushq  $0x44137b44
  22:	e8 a7 53 f8 ff       	callq  0xfffffffffff853ce
  27:	83 c4 08             	add    $0x8,%esp
  2a:*	0f 0b                	ud2		<-- trapping instruction
  2c:	8b 4d f0             	mov    -0x10(%rbp),%ecx
  2f:	eb 09                	jmp    0x3a
  31:	6b c0 64             	imul   $0x64,%eax,%eax
  34:	8d b0 10 38 71 45    	lea    0x45713810(%rax),%esi
  3a:	8a 5e 60             	mov    0x60(%rsi),%bl
  3d:	b8                   	.byte 0xb8
  3e:	ff 1f                	lcall  *(%rdi)

Code starting with the faulting instruction
===========================================
   0:	0f 0b                	ud2
   2:	8b 4d f0             	mov    -0x10(%rbp),%ecx
   5:	eb 09                	jmp    0x10
   7:	6b c0 64             	imul   $0x64,%eax,%eax
   a:	8d b0 10 38 71 45    	lea    0x45713810(%rax),%esi
  10:	8a 5e 60             	mov    0x60(%rsi),%bl
  13:	b8                   	.byte 0xb8
  14:	ff 1f                	lcall  *(%rdi)

[ 4.181613][ T1] EAX: 00000016 EBX: 00080000 ECX: 00000000 EDX: 00000000
[ 4.181613][ T1] ESI: 00000000 EDI: 403e0610 EBP: 40397ad0 ESP: 40397a30
[ 4.181613][ T1] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00010016
[ 4.181613][ T1] CR0: 80050033 CR2: 00000000 CR3: 05654000 CR4: 00000690
[ 4.181613][ T1] Call Trace:
[ 4.181613][ T1]  ? __lock_acquire (lockdep.c:?)
[ 4.181613][ T1]  ? update_cfs_rq_load_avg (fair.c:?)
[ 4.181613][ T1]  ? lock_acquire (??:?)
[ 4.181613][ T1]  ? ___slab_alloc (slub.c:?)
[ 4.181613][ T1]  ? _raw_spin_lock_irqsave (??:?)
[ 4.181613][ T1]  ? ___slab_alloc (slub.c:?)
[ 4.181613][ T1]  ? __cond_resched (??:?)
[ 4.181613][ T1]  ? slab_pre_alloc_hook (slub.c:?)
[ 4.181613][ T1]  ? kmem_cache_alloc (??:?)
[ 4.181613][ T1]  ? mempool_alloc_slab (??:?)
[ 4.181613][ T1]  ? mempool_init_node (??:?)
[ 4.181613][ T1]  ? mempool_init (??:?)
[ 4.181613][ T1]  ? mempool_alloc_slab (??:?)
[ 4.181613][ T1]  ? bioset_init (??:?)
[ 4.181613][ T1]  ? mempool_alloc_slab (??:?)
[ 4.181613][ T1]  ? init_bio (bio.c:?)
[ 4.181613][ T1]  ? blkdev_init (bio.c:?)
[ 4.181613][ T1]  ? do_one_initcall (??:?)
[ 4.181613][ T1]  ? blkdev_init (bio.c:?)
[ 4.181613][ T1]  ? do_initcall_level (main.c:?)
[ 4.181613][ T1]  ? do_initcalls (main.c:?)
[ 4.181613][ T1]  ? do_basic_setup (main.c:?)
[ 4.181613][ T1]  ? kernel_init_freeable (main.c:?)
[ 4.181613][ T1]  ? rest_init (main.c:?)
[ 4.181613][ T1]  ? kernel_init (main.c:?)
[ 4.181613][ T1]  ? rest_init (main.c:?)
[ 4.181613][ T1]  ? ret_from_fork (??:?)
[ 4.181613][ T1] irq event stamp: 52976
[ 4.181613][ T1] hardirqs last enabled at (52975): __schedule (core.c:?)
[ 4.181613][ T1] hardirqs last disabled at (52976): _raw_spin_lock_irqsave (??:?)
[ 4.181613][ T1] softirqs last enabled at (52954): __do_softirq (??:?)
[ 4.181613][ T1] softirqs last disabled at (52949): do_softirq_own_stack (??:?)
[ 4.181613][ T1] ---[ end trace 0000000000000000 ]---

To reproduce:

        # build kernel
        cd linux
        cp config-6.1.0-rc1-00012-g216fd003b5ac .config
        make HOSTCC=clang-14 CC=clang-14 ARCH=i386 olddefconfig prepare modules_prepare bzImage modules
        make HOSTCC=clang-14 CC=clang-14 ARCH=i386 INSTALL_MOD_PATH=<mod-install-dir> modules_install
        cd <mod-install-dir>
        find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email

        # if come across any failure that blocks the test,
        # please remove ~/.lkp and /lkp dir to run from a clean state.
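A note on the likely mechanism, which the report itself does not spell out: the
call trace runs through bioset_init() -> mempool_init() -> mempool_alloc_slab(),
i.e. a kmem_cache-backed pool. For those pools, pool_data holds a struct
kmem_cache pointer, not an element size, so the (size_t)pool->pool_data cast in
the mempool_free_slab / mempool_alloc_slab branches of the patch poisons a bogus
length. For reference, the two pool_data conventions, paraphrased from
mm/mempool.c (exact bodies may differ by kernel version):

	/*
	 * Slab-backed pools: pool_data is the cache pointer, so it carries
	 * no usable length for poisoning.
	 */
	void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data)
	{
		struct kmem_cache *mem = pool_data;
		return kmem_cache_alloc(mem, gfp_mask);
	}

	/* kmalloc-backed pools: pool_data really is the allocation size. */
	void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
	{
		size_t size = (size_t)pool_data;
		return kmalloc(size, gfp_mask);
	}

If that reading is right, the slab-backed branches would need something like
kmem_cache_size(pool->pool_data) rather than the raw cast, while the cast stays
correct for the kmalloc-backed branches.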
On Wed, Oct 26, 2022 at 1:36 AM Kees Cook <keescook@chromium.org> wrote:
>
> Nothing appears to be using ksize() within the kmalloc-backed mempools
> except the mempool poisoning logic. Use the actual pool size instead
> of the ksize() to avoid needing any special handling of the memory as
> needed by KASAN, UBSAN_BOUNDS, nor FORTIFY_SOURCE.
>
> [quoted tags, changelog, and diff snipped; identical to the patch shown below]

For the KASAN change:

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Thanks!
diff --git a/mm/mempool.c b/mm/mempool.c
index 96488b13a1ef..54204065037d 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -58,7 +58,7 @@ static void check_element(mempool_t *pool, void *element)
 {
 	/* Mempools backed by slab allocator */
 	if (pool->free == mempool_free_slab || pool->free == mempool_kfree) {
-		__check_element(pool, element, ksize(element));
+		__check_element(pool, element, (size_t)pool->pool_data);
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
@@ -81,7 +81,7 @@ static void poison_element(mempool_t *pool, void *element)
 {
 	/* Mempools backed by slab allocator */
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) {
-		__poison_element(element, ksize(element));
+		__poison_element(element, (size_t)pool->pool_data);
 	} else if (pool->alloc == mempool_alloc_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
@@ -112,7 +112,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 static void kasan_unpoison_element(mempool_t *pool, void *element)
 {
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
-		kasan_unpoison_range(element, __ksize(element));
+		kasan_unpoison_range(element, (size_t)pool->pool_data);
 	else if (pool->alloc == mempool_alloc_pages)
 		kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
 				     false);
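For reference, the size passed in these hunks feeds straight into the mempool
poison/verify helpers, which is why an over-large length such as ksize() walks
into bytes the sanitizers have redzoned. Roughly what those helpers do,
paraphrased from the CONFIG_DEBUG_SLAB / CONFIG_SLUB_DEBUG_ON section of
mm/mempool.c (details may vary by version):

	static void __poison_element(void *element, size_t size)
	{
		u8 *obj = element;

		/* Fill the element with POISON_FREE, ending in POISON_END. */
		memset(obj, POISON_FREE, size - 1);
		obj[size - 1] = POISON_END;
	}

	static void __check_element(mempool_t *pool, void *element, size_t size)
	{
		u8 *obj = element;
		size_t i;

		/* Verify the pattern is intact, then mark the element in use. */
		for (i = 0; i < size; i++) {
			u8 exp = (i < size - 1) ? POISON_FREE : POISON_END;

			if (obj[i] != exp) {
				poison_error(pool, element, size, i);
				return;
			}
		}
		memset(obj, POISON_INUSE, size);
	}

With ksize(), both the memset() and the read-back loop cover the whole
rounded-up slab object; with the pool's own size they stay within the region
the allocation actually requested.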
Nothing appears to be using ksize() within the kmalloc-backed mempools
except the mempool poisoning logic. Use the actual pool size instead
of the ksize() to avoid needing any special handling of the memory as
needed by KASAN, UBSAN_BOUNDS, nor FORTIFY_SOURCE.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/lkml/f4fc52c4-7c18-1d76-0c7a-4058ea2486b9@suse.cz/
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v3: remove ksize() calls instead of adding kmalloc_roundup_size() calls (vbabka)
v2: https://lore.kernel.org/lkml/20221018090323.never.897-kees@kernel.org/
v1: https://lore.kernel.org/lkml/20220923202822.2667581-14-keescook@chromium.org/
---
 mm/mempool.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
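Background on why the ksize() calls were a problem in the first place: kmalloc()
may round a request up to the next slab bucket, but KASAN only unpoisons the
size that was actually requested, and FORTIFY_SOURCE / UBSAN_BOUNDS likewise
reason about the requested size. A hedged sketch of that gap (the sizes are
illustrative; kmalloc_size_roundup() is the real kernel API this series
introduced):

	/* Illustrative only: requested size vs. slab bucket size. */
	static void ksize_gap_demo(void)
	{
		size_t want = 100;
		size_t bucket = kmalloc_size_roundup(want);	/* e.g. 128 */
		u8 *p = kmalloc(want, GFP_KERNEL);

		if (!p)
			return;

		pr_info("requested %zu, bucket %zu\n", want, bucket);
		memset(p, POISON_FREE, want);	/* fine: within the request */
		/*
		 * memset(p, POISON_FREE, bucket) would trip KASAN here,
		 * because only 'want' bytes were unpoisoned at allocation
		 * time -- the same trap the old ksize()-based poisoning
		 * had to special-case.
		 */
		kfree(p);
	}

Callers that genuinely want to use the slack are expected to allocate with the
rounded-up size in the first place, which is what the v2 kmalloc_size_roundup()
approach did before v3 dropped the ksize() calls entirely.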