Message ID: 20211123143220.134361-1-wangkefeng.wang@huawei.com (mailing list archive)
State: New
Series: [v2] mm: Delay kmemleak object creation of module_alloc()
On Tue, Nov 23, 2021 at 10:32:20PM +0800, Kefeng Wang wrote:
> Yongqiang reports a kmemleak panic during module insmod/rmmod with KASAN
> enabled on x86[1].
>
> When a module allocates memory, its kmemleak_object is created successfully,
> but the KASAN shadow memory for the module allocation is not ready, so when
> kmemleak scans the module's pointers, it panics because there is no shadow
> memory with KASAN.
>
> module_alloc
>   __vmalloc_node_range
>     kmemleak_vmalloc
>   kmemleak_scan
>     update_checksum
>   kasan_module_alloc
>     kmemleak_ignore

Can you share the .config and the stack trace you get on arm64?

I have a suspicion there is no problem if KASAN_VMALLOC is enabled.

> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 4a4929b29a23..2ade2f484562 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
>
>  #else /* CONFIG_KASAN_VMALLOC */
>
> -int kasan_module_alloc(void *addr, size_t size)
> +int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
>  {
>  	void *ret;
>  	size_t scaled_size;
> @@ -520,9 +520,14 @@ int kasan_module_alloc(void *addr, size_t size)
>  					__builtin_return_address(0));
>
>  	if (ret) {
> +		struct vm_struct *vm = find_vm_area(addr);
>  		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
> -		find_vm_area(addr)->flags |= VM_KASAN;
> +		vm->flags |= VM_KASAN;
>  		kmemleak_ignore(ret);
> +
> +		if (vm->flags & VM_DELAY_KMEMLEAK)
> +			kmemleak_vmalloc(vm, size, gfp_mask);
> +
>  		return 0;
>  	}

This function only exists if CONFIG_KASAN_VMALLOC=n.

> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..23c595b15839 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3074,7 +3074,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>  	clear_vm_uninitialized_flag(area);
>
>  	size = PAGE_ALIGN(size);
> -	kmemleak_vmalloc(area, size, gfp_mask);
> +	if (!(vm_flags & VM_DELAY_KMEMLEAK))
> +		kmemleak_vmalloc(area, size, gfp_mask);

So with KASAN_VMALLOC enabled, we'll miss the kmemleak allocation. You
could add an IS_ENABLED(CONFIG_KASAN_VMALLOC) check but I'm not
particularly fond of the delay approach (also think DEFER is probably a
better name).

A quick fix would be to make KMEMLEAK depend on !KASAN || KASAN_VMALLOC.
We'll miss KASAN_SW_TAGS with kmemleak but I think vmalloc support could
be enabled for this as well.

What does KASAN do with other vmalloc() allocations when !KASAN_VMALLOC?
Can we not have a similar approach? I don't fully understand why the
module vmalloc() is a special case.
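For reference, the IS_ENABLED(CONFIG_KASAN_VMALLOC) check mentioned above could
look roughly like this in __vmalloc_node_range(); this is a hypothetical sketch,
not part of the posted patch:

	/*
	 * Hypothetical variant of the mm/vmalloc.c hunk above: only skip the
	 * immediate kmemleak registration when the shadow really is populated
	 * later by kasan_module_alloc(), i.e. when CONFIG_KASAN_VMALLOC is
	 * disabled.
	 */
	size = PAGE_ALIGN(size);
	if (IS_ENABLED(CONFIG_KASAN_VMALLOC) || !(vm_flags & VM_DELAY_KMEMLEAK))
		kmemleak_vmalloc(area, size, gfp_mask);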
On 2021/11/24 3:44, Catalin Marinas wrote:
> On Tue, Nov 23, 2021 at 10:32:20PM +0800, Kefeng Wang wrote:
>> Yongqiang reports a kmemleak panic during module insmod/rmmod with KASAN
>> enabled on x86[1].
>>
>> When a module allocates memory, its kmemleak_object is created successfully,
>> but the KASAN shadow memory for the module allocation is not ready, so when
>> kmemleak scans the module's pointers, it panics because there is no shadow
>> memory with KASAN.
>>
>> module_alloc
>>   __vmalloc_node_range
>>     kmemleak_vmalloc
>>   kmemleak_scan
>>     update_checksum
>>   kasan_module_alloc
>>     kmemleak_ignore
> Can you share the .config and the stack trace you get on arm64?

My gcc does not support CC_HAS_KASAN_SW_TAGS, so I could not reproduce this
on arm64.

> I have a suspicion there is no problem if KASAN_VMALLOC is enabled.

Yes, if KASAN_VMALLOC is enabled, the vmalloc shadow memory has already been
populated in the arch's kasan_init(), so there is no issue. But x86/arm64/s390
allocate the module area's shadow dynamically on each module load via
kasan_module_alloc(), and this leads to the concurrency issue above.

>> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
>> index 4a4929b29a23..2ade2f484562 100644
>> --- a/mm/kasan/shadow.c
>> +++ b/mm/kasan/shadow.c
>> @@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
>>
>>  #else /* CONFIG_KASAN_VMALLOC */
>>
>> -int kasan_module_alloc(void *addr, size_t size)
>> +int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
>>  {
>>  	void *ret;
>>  	size_t scaled_size;
>> @@ -520,9 +520,14 @@ int kasan_module_alloc(void *addr, size_t size)
>>  					__builtin_return_address(0));
>>
>>  	if (ret) {
>> +		struct vm_struct *vm = find_vm_area(addr);
>>  		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
>> -		find_vm_area(addr)->flags |= VM_KASAN;
>> +		vm->flags |= VM_KASAN;
>>  		kmemleak_ignore(ret);
>> +
>> +		if (vm->flags & VM_DELAY_KMEMLEAK)
>> +			kmemleak_vmalloc(vm, size, gfp_mask);
>> +
>>  		return 0;
>>  	}
> This function only exists if CONFIG_KASAN_VMALLOC=n.

Yes.

>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index d2a00ad4e1dd..23c595b15839 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3074,7 +3074,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>>  	clear_vm_uninitialized_flag(area);
>>
>>  	size = PAGE_ALIGN(size);
>> -	kmemleak_vmalloc(area, size, gfp_mask);
>> +	if (!(vm_flags & VM_DELAY_KMEMLEAK))
>> +		kmemleak_vmalloc(area, size, gfp_mask);
> So with KASAN_VMALLOC enabled, we'll miss the kmemleak allocation.

See the definition: if KASAN_VMALLOC is enabled, VM_DELAY_KMEMLEAK is 0, so
the kmemleak allocation still works.

+#if defined(CONFIG_KASAN) && (defined(CONFIG_KASAN_GENERIC) || \
+	defined(CONFIG_KASAN_SW_TAGS)) && !defined(CONFIG_KASAN_VMALLOC)
+#define VM_DELAY_KMEMLEAK	0x00000800	/* delay kmemleak object create */
+#else
+#define VM_DELAY_KMEMLEAK	0
+#endif
+

> You could add an IS_ENABLED(CONFIG_KASAN_VMALLOC) check but I'm not
> particularly fond of the delay approach (also think DEFER is probably a
> better name).

Will use DEFER instead.

> A quick fix would be to make KMEMLEAK depend on !KASAN || KASAN_VMALLOC.
> We'll miss KASAN_SW_TAGS with kmemleak but I think vmalloc support could
> be enabled for this as well.
>
> What does KASAN do with other vmalloc() allocations when !KASAN_VMALLOC?
> Can we not have a similar approach? I don't fully understand why the
> module vmalloc() is a special case.

Only the module area's shadow is allocated dynamically; this exists on arm64
too. Without KASAN_VMALLOC there is no shadow area for vmalloc at all, so
other vmalloc allocations are not a problem. Correct me if I'm wrong. Thanks.
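To make the intended ordering concrete, here is a condensed sketch of how an
arch module_alloc() looks after this change; the names mirror the arch diffs
below, but this is an illustration rather than the exact arch code:

void *module_alloc(unsigned long size)
{
	gfp_t gfp_mask = GFP_KERNEL;
	void *p;

	/* Ask vmalloc not to register the area with kmemleak yet. */
	p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
				 gfp_mask, PAGE_KERNEL, VM_DELAY_KMEMLEAK,
				 NUMA_NO_NODE, __builtin_return_address(0));
	if (!p)
		return NULL;

	/*
	 * kasan_module_alloc() populates the shadow for the module area and,
	 * once scanning it is safe, registers it with kmemleak via
	 * kmemleak_vmalloc() (only when VM_DELAY_KMEMLEAK is set).
	 */
	if (kasan_module_alloc(p, size, gfp_mask) < 0) {
		vfree(p);
		return NULL;
	}

	return p;
}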
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index b5ec010c481f..e6da010716d0 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -36,7 +36,7 @@ void *module_alloc(unsigned long size)
 		module_alloc_end = MODULES_END;
 
 	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
-				module_alloc_end, gfp_mask, PAGE_KERNEL, 0,
+				module_alloc_end, gfp_mask, PAGE_KERNEL, VM_DELAY_KMEMLEAK,
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
@@ -58,7 +58,7 @@ void *module_alloc(unsigned long size)
 				PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 
-	if (p && (kasan_module_alloc(p, size) < 0)) {
+	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
 		vfree(p);
 		return NULL;
 	}
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index b01ba460b7ca..8d66a93562ca 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -37,14 +37,15 @@
 
 void *module_alloc(unsigned long size)
 {
+	gfp_t gfp_mask = GFP_KERNEL;
 	void *p;
 
 	if (PAGE_ALIGN(size) > MODULES_LEN)
 		return NULL;
 	p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR, MODULES_END,
-				 GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				 gfp_mask, PAGE_KERNEL_EXEC, VM_DELAY_KMEMLEAK, NUMA_NO_NODE,
 				 __builtin_return_address(0));
-	if (p && (kasan_module_alloc(p, size) < 0)) {
+	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
 		vfree(p);
 		return NULL;
 	}
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 169fb6f4cd2e..ff134d0f1ca1 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -67,6 +67,7 @@ static unsigned long int get_module_load_offset(void)
 
 void *module_alloc(unsigned long size)
 {
+	gfp_t gfp_mask = GFP_KERNEL;
 	void *p;
 
 	if (PAGE_ALIGN(size) > MODULES_LEN)
@@ -74,10 +75,10 @@ void *module_alloc(unsigned long size)
 
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
-				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL, 0, NUMA_NO_NODE,
+				    MODULES_END, gfp_mask,
+				    PAGE_KERNEL, VM_DELAY_KMEMLEAK, NUMA_NO_NODE,
 				    __builtin_return_address(0));
-	if (p && (kasan_module_alloc(p, size) < 0)) {
+	if (p && (kasan_module_alloc(p, size, gfp_mask) < 0)) {
 		vfree(p);
 		return NULL;
 	}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d8783b682669..89c99e5e67de 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -474,12 +474,12 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
  * allocations with real shadow memory. With KASAN vmalloc, the special
  * case is unnecessary, as the work is handled in the generic case.
  */
-int kasan_module_alloc(void *addr, size_t size);
+int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask);
 void kasan_free_shadow(const struct vm_struct *vm);
 
 #else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */
 
-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
+static inline int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask) { return 0; }
 static inline void kasan_free_shadow(const struct vm_struct *vm) {}
 
 #endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 6e022cc712e6..56d2b7828b31 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -28,6 +28,13 @@ struct notifier_block;		/* in notifier.h */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
 #define VM_NO_HUGE_VMAP		0x00000400	/* force PAGE_SIZE pte mapping */
 
+#if defined(CONFIG_KASAN) && (defined(CONFIG_KASAN_GENERIC) || \
+	defined(CONFIG_KASAN_SW_TAGS)) && !defined(CONFIG_KASAN_VMALLOC)
+#define VM_DELAY_KMEMLEAK	0x00000800	/* delay kmemleak object create */
+#else
+#define VM_DELAY_KMEMLEAK	0
+#endif
+
 /*
  * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC.
  *
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 4a4929b29a23..2ade2f484562 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -498,7 +498,7 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 
 #else /* CONFIG_KASAN_VMALLOC */
 
-int kasan_module_alloc(void *addr, size_t size)
+int kasan_module_alloc(void *addr, size_t size, gfp_t gfp_mask)
 {
 	void *ret;
 	size_t scaled_size;
@@ -520,9 +520,14 @@ int kasan_module_alloc(void *addr, size_t size)
 					__builtin_return_address(0));
 
 	if (ret) {
+		struct vm_struct *vm = find_vm_area(addr);
 		__memset(ret, KASAN_SHADOW_INIT, shadow_size);
-		find_vm_area(addr)->flags |= VM_KASAN;
+		vm->flags |= VM_KASAN;
 		kmemleak_ignore(ret);
+
+		if (vm->flags & VM_DELAY_KMEMLEAK)
+			kmemleak_vmalloc(vm, size, gfp_mask);
+
 		return 0;
 	}
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..23c595b15839 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3074,7 +3074,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	clear_vm_uninitialized_flag(area);
 
 	size = PAGE_ALIGN(size);
-	kmemleak_vmalloc(area, size, gfp_mask);
+	if (!(vm_flags & VM_DELAY_KMEMLEAK))
+		kmemleak_vmalloc(area, size, gfp_mask);
 
 	return addr;
Yongqiang reports a kmemleak panic during module insmod/rmmod with KASAN
enabled on x86[1].

When a module allocates memory, its kmemleak_object is created successfully,
but the KASAN shadow memory for the module allocation is not ready, so when
kmemleak scans the module's pointers, it panics because there is no shadow
memory with KASAN.

module_alloc
  __vmalloc_node_range
    kmemleak_vmalloc
  kmemleak_scan
    update_checksum
  kasan_module_alloc
    kmemleak_ignore

The bug should exist on ARM64/S390 too. Add a VM_DELAY_KMEMLEAK flag and
delay the kmemleak registration of the vmalloc'ed object until
module_alloc() has set up the shadow.

[1] https://lore.kernel.org/all/6d41e2b9-4692-5ec4-b1cd-cbe29ae89739@huawei.com/

Reported-by: Yongqiang Liu <liuyongqiang13@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2:
- fix type error in changelog and kasan_module_alloc()

 arch/arm64/kernel/module.c | 4 ++--
 arch/s390/kernel/module.c  | 5 +++--
 arch/x86/kernel/module.c   | 7 ++++---
 include/linux/kasan.h      | 4 ++--
 include/linux/vmalloc.h    | 7 +++++++
 mm/kasan/shadow.c          | 9 +++++++--
 mm/vmalloc.c               | 3 ++-
 7 files changed, 27 insertions(+), 12 deletions(-)
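For context on why the scan faults: kmemleak's checksum pass reads every byte
of a registered object, roughly as in the sketch below (a simplified, assumed
shape, not the exact kmemleak code), and with KASAN each of those reads is
instrumented and consults the shadow for that address.

#include <linux/crc32.h>

/*
 * Simplified illustration of what update_checksum() amounts to: crc32()
 * dereferences ptr[0..size-1], and under KASAN every read first checks the
 * shadow byte for that address. If kasan_module_alloc() has not mapped the
 * shadow yet, the check itself faults.
 */
static u32 object_checksum(const void *ptr, size_t size)
{
	return crc32(0, ptr, size);
}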