
kfence: fix stack trace pruning

Message ID 20221118152216.3914899-1-elver@google.com (mailing list archive)
State New

Commit Message

Marco Elver Nov. 18, 2022, 3:22 p.m. UTC
Commit b14051352465 ("mm/sl[au]b: generalize kmalloc subsystem")
refactored large parts of the kmalloc subsystem, causing the stack
trace pruning logic done by KFENCE to no longer work.

While b14051352465 attempted to fix the situation by including
'__kmem_cache_free' in the list of functions KFENCE should skip through,
this only works when the compiler actually optimizes the tail call from
kfree() to __kmem_cache_free() into a jump (so that kfree() does _not_
appear in the full stack trace to begin with).
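
To make the tail-call distinction concrete, here is a hypothetical
userspace sketch (the stub body and counter are illustration-only
assumptions, not kernel code) of the call shape involved; whether
kfree() shows up in a stack trace captured inside __kmem_cache_free()
depends entirely on whether the compiler turned the call into a jump:

```c
#include <assert.h>

static int freed_count;	/* illustration only, not in the kernel */

/* Stub standing in for the real slab-internal __kmem_cache_free().
 * A stack trace captured in here includes kfree() only when the
 * compiler did NOT turn the tail call below into a plain jump. */
static void __kmem_cache_free(void *objp)
{
	freed_count++;
}

/* kfree() ends in a tail call: with optimization enabled the compiler
 * may emit "jmp __kmem_cache_free", in which case kfree() leaves no
 * frame on the stack at all. */
void kfree(void *objp)
{
	if (!objp)
		return;
	__kmem_cache_free(objp);
}
```

Which of the two shapes is emitted varies with optimization level and
configuration (e.g. -foptimize-sibling-calls), which is exactly why the
pruning logic cannot rely on only one of them.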

In some configurations, the compiler no longer optimizes the tail call
into a jump, and __kmem_cache_free() appears in the stack trace. This
means that the pruned stack trace shown by KFENCE would include kfree(),
which is not intended - for example:

 | BUG: KFENCE: invalid free in kfree+0x7c/0x120
 |
 | Invalid free of 0xffff8883ed8fefe0 (in kfence-#126):
 |  kfree+0x7c/0x120
 |  test_double_free+0x116/0x1a9
 |  kunit_try_run_case+0x90/0xd0
 | [...]

Fix it by moving __kmem_cache_free() to the list of functions that may
be tail called by an allocator entry function, making the pruning logic
work in both the optimized and unoptimized tail call cases.
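
The effect of the move can be sketched as a userspace simulation of
get_stack_skipnr()'s matching (the has_prefix() helper and simplified
frame walk are assumptions for illustration; the real function
symbolizes addresses and handles ARCH_FUNC_PREFIX):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the kernel's str_has_prefix(). */
static int has_prefix(const char *s, const char *prefix)
{
	return strncmp(s, prefix, strlen(prefix)) == 0;
}

/* Simplified model of get_stack_skipnr(): walk symbolized frames and
 * return the index of the first frame worth reporting. */
static int skipnr(const char *frames[], int n)
{
	int fallback = 0;

	for (int i = 0; i < n; i++) {
		const char *f = frames[i];

		/* Internals that an entry point may tail call into:
		 * only remember a fallback, in case the entry point
		 * itself was optimized out of the trace. */
		if (has_prefix(f, "kfence_") || has_prefix(f, "__kfence_") ||
		    has_prefix(f, "__kmem_cache_free") ||
		    has_prefix(f, "__slab_free"))
			fallback = i + 1;

		/* Initial entry points into the slab allocators: skip
		 * everything up to and including this frame. */
		if (has_prefix(f, "kfree") ||
		    has_prefix(f, "kmem_cache_free") ||
		    has_prefix(f, "__kmalloc") ||
		    has_prefix(f, "kmem_cache_alloc"))
			return i + 1 < n ? i + 1 : 0;
	}
	return fallback < n ? fallback : 0;
}
```

With __kmem_cache_free in the fallback set, both the optimized trace
{__kfence_free, kfree, caller} and the unoptimized trace {__kfence_free,
__kmem_cache_free, kfree, caller} prune down to the caller frame.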

Fixes: b14051352465 ("mm/sl[au]b: generalize kmalloc subsystem")
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Feng Tang <feng.tang@intel.com>
Signed-off-by: Marco Elver <elver@google.com>
---
 mm/kfence/report.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

Comments

Alexander Potapenko Nov. 18, 2022, 5:09 p.m. UTC | #1
On Fri, Nov 18, 2022 at 4:22 PM Marco Elver <elver@google.com> wrote:
>
> Commit b14051352465 ("mm/sl[au]b: generalize kmalloc subsystem")
> refactored large parts of the kmalloc subsystem, causing the stack
> trace pruning logic done by KFENCE to no longer work.
>
> While b14051352465 attempted to fix the situation by including
> '__kmem_cache_free' in the list of functions KFENCE should skip through,
> this only works when the compiler actually optimizes the tail call from
> kfree() to __kmem_cache_free() into a jump (so that kfree() does _not_
> appear in the full stack trace to begin with).
>
> In some configurations, the compiler no longer optimizes the tail call
> into a jump, and __kmem_cache_free() appears in the stack trace. This
> means that the pruned stack trace shown by KFENCE would include kfree(),
> which is not intended - for example:
>
>  | BUG: KFENCE: invalid free in kfree+0x7c/0x120
>  |
>  | Invalid free of 0xffff8883ed8fefe0 (in kfence-#126):
>  |  kfree+0x7c/0x120
>  |  test_double_free+0x116/0x1a9
>  |  kunit_try_run_case+0x90/0xd0
>  | [...]
>
> Fix it by moving __kmem_cache_free() to the list of functions that may
> be tail called by an allocator entry function, making the pruning logic
> work in both the optimized and unoptimized tail call cases.
>
> Fixes: b14051352465 ("mm/sl[au]b: generalize kmalloc subsystem")
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: Feng Tang <feng.tang@intel.com>
> Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

> ---
>  mm/kfence/report.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/kfence/report.c b/mm/kfence/report.c
> index 7e496856c2eb..46ecea18c4ca 100644
> --- a/mm/kfence/report.c
> +++ b/mm/kfence/report.c
> @@ -75,18 +75,23 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
>
>                 if (str_has_prefix(buf, ARCH_FUNC_PREFIX "kfence_") ||
>                     str_has_prefix(buf, ARCH_FUNC_PREFIX "__kfence_") ||
> +                   str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmem_cache_free") ||
>                     !strncmp(buf, ARCH_FUNC_PREFIX "__slab_free", len)) {
>                         /*
> -                        * In case of tail calls from any of the below
> -                        * to any of the above.
> +                        * In case of tail calls from any of the below to any of
> +                        * the above, optimized by the compiler such that the
> +                        * stack trace would omit the initial entry point below.
>                          */
>                         fallback = skipnr + 1;
>                 }
>
> -               /* Also the *_bulk() variants by only checking prefixes. */
> +               /*
> +                * The below list should only include the initial entry points
> +                * into the slab allocators. Includes the *_bulk() variants by
> +                * checking prefixes.
> +                */
>                 if (str_has_prefix(buf, ARCH_FUNC_PREFIX "kfree") ||
>                     str_has_prefix(buf, ARCH_FUNC_PREFIX "kmem_cache_free") ||
> -                   str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmem_cache_free") ||
>                     str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmalloc") ||
>                     str_has_prefix(buf, ARCH_FUNC_PREFIX "kmem_cache_alloc"))
>                         goto found;
> --
> 2.38.1.584.g0f3c55d4c2-goog
>

Patch

diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 7e496856c2eb..46ecea18c4ca 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -75,18 +75,23 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries
 
 		if (str_has_prefix(buf, ARCH_FUNC_PREFIX "kfence_") ||
 		    str_has_prefix(buf, ARCH_FUNC_PREFIX "__kfence_") ||
+		    str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmem_cache_free") ||
 		    !strncmp(buf, ARCH_FUNC_PREFIX "__slab_free", len)) {
 			/*
-			 * In case of tail calls from any of the below
-			 * to any of the above.
+			 * In case of tail calls from any of the below to any of
+			 * the above, optimized by the compiler such that the
+			 * stack trace would omit the initial entry point below.
 			 */
 			fallback = skipnr + 1;
 		}
 
-		/* Also the *_bulk() variants by only checking prefixes. */
+		/*
+		 * The below list should only include the initial entry points
+		 * into the slab allocators. Includes the *_bulk() variants by
+		 * checking prefixes.
+		 */
 		if (str_has_prefix(buf, ARCH_FUNC_PREFIX "kfree") ||
 		    str_has_prefix(buf, ARCH_FUNC_PREFIX "kmem_cache_free") ||
-		    str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmem_cache_free") ||
 		    str_has_prefix(buf, ARCH_FUNC_PREFIX "__kmalloc") ||
 		    str_has_prefix(buf, ARCH_FUNC_PREFIX "kmem_cache_alloc"))
 			goto found;