[00/32] kasan: switch tag-based modes to stack ring from per-object metadata

Message ID: cover.1655150842.git.andreyknvl@google.com

andrey.konovalov@linux.dev June 13, 2022, 8:13 p.m. UTC
From: Andrey Konovalov <andreyknvl@google.com>

This series makes the tag-based KASAN modes use a ring buffer for storing
stack depot handles for alloc/free stack traces for slab objects instead
of per-object metadata. This ring buffer is referred to as the stack ring.

On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.

On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.

The ring buffer is lock-free.
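
The record/scan scheme described above can be sketched as follows. This is a
userspace model using C11 atomics, not the actual kernel code; the names
(`stack_ring_save`, `stack_ring_find`, `RING_SIZE`) and the entry layout are
illustrative assumptions:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 4096          /* illustrative; must be a power of two */

struct ring_entry {
	uintptr_t addr;         /* tagged address of the slab object */
	uint32_t stack_handle;  /* stack depot handle for the trace */
	bool is_free;           /* alloc event or free event */
};

static struct ring_entry stack_ring[RING_SIZE];
static atomic_size_t ring_pos;  /* single global index: the contention point */

/* On each alloc/free: claim a slot with one atomic RMW, then fill it in.
 * A real implementation must also handle a reader racing with a
 * half-written slot (e.g. via a sequence lock); this sketch ignores that. */
void stack_ring_save(uintptr_t tagged_addr, uint32_t handle, bool is_free)
{
	size_t pos = atomic_fetch_add(&ring_pos, 1) % RING_SIZE;

	stack_ring[pos].addr = tagged_addr;
	stack_ring[pos].stack_handle = handle;
	stack_ring[pos].is_free = is_free;
}

/* On a bug report: scan backwards from the newest entry so that the
 * first match found is the most recent alloc (or free) of the object. */
bool stack_ring_find(uintptr_t tagged_addr, bool want_free, uint32_t *handle)
{
	size_t newest = atomic_load(&ring_pos);

	for (size_t i = 0; i < RING_SIZE; i++) {
		struct ring_entry *e =
			&stack_ring[(newest - 1 - i) % RING_SIZE];

		if (e->addr == tagged_addr && e->is_free == want_free) {
			*handle = e->stack_handle;
			return true;
		}
	}
	return false;
}
```

Claiming a slot is a single `atomic_fetch_add`, which is what makes the ring
lock-free on the write path; the trade-off is that every CPU bounces the same
index cache line.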

The advantages of this approach over storing stack trace handles in
per-object metadata with the tag-based KASAN modes:

- Allows finding relevant stack traces for use-after-free bugs without
  using a quarantine for freed memory. (Currently, if the object was
  reallocated multiple times, the report contains the latest alloc/free
  stack traces, not necessarily the ones relevant to the buggy allocation.)
- Allows better identification and marking of use-after-free bugs,
  effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
- Has a fixed memory overhead.

The disadvantage:

- If the affected object was allocated/freed long before the bug happened
  and the stack trace events were purged from the stack ring, the report
  will have no stack traces.

Discussion
==========

The current implementation of the stack ring uses a single ring buffer for
the whole kernel. This might lead to contention due to atomic accesses to
the ring buffer index on multicore systems.

It is unclear to me whether the performance impact from this contention
is significant compared to the slowdown introduced by collecting stack
traces.

While these patches are being reviewed, I will do some tests on the arm64
hardware that I have. However, I do not have a large multicore arm64
system to do proper measurements.

A considered alternative is to keep a separate ring buffer for each CPU
and then iterate over all of them when printing a bug report. This approach
requires somehow figuring out which of the stack rings has the freshest
stack traces for an object if multiple stack rings have them.
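
One hypothetical way to resolve that freshness question (not part of this
series) is to stamp every entry with a globally ordered sequence number and
pick the matching entry with the highest stamp across all per-CPU rings. Note
the shared sequence counter would itself reintroduce some contention unless
replaced with a cheap monotonic clock; all names below are illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NR_CPUS   4             /* illustrative */
#define RING_SIZE 1024          /* illustrative; power of two */

struct pc_entry {
	uintptr_t addr;
	uint32_t stack_handle;
	uint64_t seq;           /* global order despite per-CPU storage */
};

static struct pc_entry rings[NR_CPUS][RING_SIZE];
static size_t pos[NR_CPUS];     /* per-CPU index: no cross-CPU sharing
				 * (in the kernel: percpu variables used
				 * with preemption disabled) */
static atomic_uint_fast64_t global_seq = 1; /* the only shared state */

void pc_ring_save(int cpu, uintptr_t addr, uint32_t handle)
{
	struct pc_entry *e = &rings[cpu][pos[cpu]++ % RING_SIZE];

	e->addr = addr;
	e->stack_handle = handle;
	e->seq = atomic_fetch_add(&global_seq, 1);
}

/* Scan all rings; the entry with the largest seq is the freshest. */
bool pc_ring_find(uintptr_t addr, uint32_t *handle)
{
	uint64_t best_seq = 0;
	bool found = false;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		for (size_t i = 0; i < RING_SIZE; i++) {
			struct pc_entry *e = &rings[cpu][i];

			if (e->addr == addr && e->seq > best_seq) {
				best_seq = e->seq;
				*handle = e->stack_handle;
				found = true;
			}
		}
	return found;
}
```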

Further plans
=============

This series is a part of an effort to make KASAN stack trace collection
suitable for production. This requires stack trace collection to be fast
and memory-bounded.

The planned steps are:

1. Speed up stack trace collection (potentially, by using SCS;
   patches on-hold until steps #2 and #3 are completed).
2. Keep stack trace handles in the stack ring (this series).
3. Add a memory-bounded mode to stack depot or provide an alternative
   memory-bounded stack storage.
4. Potentially, implement stack trace collection sampling to minimize
   the performance impact.
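
As a rough illustration of what step #4 could look like (hypothetical; this
series implements no sampling), a global modular counter would trade trace
coverage for speed by saving stack traces for only every Nth object:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define KASAN_SAMPLE_INTERVAL 8 /* illustrative: trace every 8th object */

static atomic_uint sample_counter;

/* Decide whether to collect a stack trace for this alloc/free event. */
bool kasan_sample(void)
{
	return atomic_fetch_add(&sample_counter, 1)
		% KASAN_SAMPLE_INTERVAL == 0;
}
```

A skipped object would then simply have no entry in the stack ring, which the
report path already has to tolerate.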

Thanks!

Andrey Konovalov (32):
  kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
  kasan: rename kasan_set_*_info to kasan_save_*_info
  kasan: move is_kmalloc check out of save_alloc_info
  kasan: split save_alloc_info implementations
  kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
  kasan: introduce kasan_print_aux_stacks
  kasan: introduce kasan_get_alloc_track
  kasan: introduce kasan_init_object_meta
  kasan: clear metadata functions for tag-based modes
  kasan: move kasan_get_*_meta to generic.c
  kasan: introduce kasan_requires_meta
  kasan: introduce kasan_init_cache_meta
  kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
  kasan: only define kasan_metadata_size for Generic mode
  kasan: only define kasan_never_merge for Generic mode
  kasan: only define metadata offsets for Generic mode
  kasan: only define metadata structs for Generic mode
  kasan: only define kasan_cache_create for Generic mode
  kasan: pass tagged pointers to kasan_save_alloc/free_info
  kasan: move kasan_get_alloc/free_track definitions
  kasan: simplify invalid-free reporting
  kasan: cosmetic changes in report.c
  kasan: use kasan_addr_to_slab in print_address_description
  kasan: move kasan_addr_to_slab to common.c
  kasan: make kasan_addr_to_page static
  kasan: simplify print_report
  kasan: introduce complete_report_info
  kasan: fill in cache and object in complete_report_info
  kasan: rework function arguments in report.c
  kasan: introduce kasan_complete_mode_report_info
  kasan: implement stack ring for tag-based modes
  kasan: better identify bug types for tag-based modes

 include/linux/kasan.h     |  55 +++++-------
 include/linux/slab.h      |   2 +-
 lib/Kconfig.kasan         |   8 --
 mm/kasan/common.c         | 173 ++++----------------------------------
 mm/kasan/generic.c        | 154 ++++++++++++++++++++++++++++++---
 mm/kasan/kasan.h          | 138 ++++++++++++++++++++----------
 mm/kasan/report.c         | 130 +++++++++++++---------------
 mm/kasan/report_generic.c |  45 +++++++++-
 mm/kasan/report_tags.c    | 114 ++++++++++++++++++-------
 mm/kasan/tags.c           |  61 +++++++-------
 10 files changed, 491 insertions(+), 389 deletions(-)

Comments

Marco Elver June 17, 2022, 9:32 a.m. UTC | #1
On Mon, Jun 13, 2022 at 10:13PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov <andreyknvl@google.com>
> 
> This series makes the tag-based KASAN modes use a ring buffer for storing
> stack depot handles for alloc/free stack traces for slab objects instead
> of per-object metadata. This ring buffer is referred to as the stack ring.
> 
> On each alloc/free of a slab object, the tagged address of the object and
> the current stack trace are recorded in the stack ring.
> 
> On each bug report, if the accessed address belongs to a slab object, the
> stack ring is scanned for matching entries. The newest entries are used to
> print the alloc/free stack traces in the report: one entry for alloc and
> one for free.
> 
> The ring buffer is lock-free.
> 
> The advantages of this approach over storing stack trace handles in
> per-object metadata with the tag-based KASAN modes:
> 
> - Allows finding relevant stack traces for use-after-free bugs without
>   using a quarantine for freed memory. (Currently, if the object was
>   reallocated multiple times, the report contains the latest alloc/free
>   stack traces, not necessarily the ones relevant to the buggy allocation.)
> - Allows better identification and marking of use-after-free bugs,
>   effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
> - Has a fixed memory overhead.
> 
> The disadvantage:
> 
> - If the affected object was allocated/freed long before the bug happened
>   and the stack trace events were purged from the stack ring, the report
>   will have no stack traces.

Do you have statistics on how likely this is? Maybe through
identifying what the average lifetime of an entry in the stack ring is?

How bad is this for very long lived objects (e.g. pagecache)?

> Discussion
> ==========
> 
> The current implementation of the stack ring uses a single ring buffer for
> the whole kernel. This might lead to contention due to atomic accesses to
> the ring buffer index on multicore systems.
> 
> It is unclear to me whether the performance impact from this contention
> is significant compared to the slowdown introduced by collecting stack
> traces.

I agree, but once stack trace collection becomes faster (per your future
plans below), this might need to be revisited.

> While these patches are being reviewed, I will do some tests on the arm64
> hardware that I have. However, I do not have a large multicore arm64
> system to do proper measurements.
> 
> A considered alternative is to keep a separate ring buffer for each CPU
> and then iterate over all of them when printing a bug report. This approach
> requires somehow figuring out which of the stack rings has the freshest
> stack traces for an object if multiple stack rings have them.
> 
> Further plans
> =============
> 
> This series is a part of an effort to make KASAN stack trace collection
> suitable for production. This requires stack trace collection to be fast
> and memory-bounded.
> 
> The planned steps are:
> 
> 1. Speed up stack trace collection (potentially, by using SCS;
>    patches on-hold until steps #2 and #3 are completed).
> 2. Keep stack trace handles in the stack ring (this series).
> 3. Add a memory-bounded mode to stack depot or provide an alternative
>    memory-bounded stack storage.
> 4. Potentially, implement stack trace collection sampling to minimize
>    the performance impact.
> 
> Thanks!
> 
> Andrey Konovalov (32):
>   kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
>   kasan: rename kasan_set_*_info to kasan_save_*_info
>   kasan: move is_kmalloc check out of save_alloc_info
>   kasan: split save_alloc_info implementations
>   kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
>   kasan: introduce kasan_print_aux_stacks
>   kasan: introduce kasan_get_alloc_track
>   kasan: introduce kasan_init_object_meta
>   kasan: clear metadata functions for tag-based modes
>   kasan: move kasan_get_*_meta to generic.c
>   kasan: introduce kasan_requires_meta
>   kasan: introduce kasan_init_cache_meta
>   kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
>   kasan: only define kasan_metadata_size for Generic mode
>   kasan: only define kasan_never_merge for Generic mode
>   kasan: only define metadata offsets for Generic mode
>   kasan: only define metadata structs for Generic mode
>   kasan: only define kasan_cache_create for Generic mode
>   kasan: pass tagged pointers to kasan_save_alloc/free_info
>   kasan: move kasan_get_alloc/free_track definitions
>   kasan: simplify invalid-free reporting
>   kasan: cosmetic changes in report.c
>   kasan: use kasan_addr_to_slab in print_address_description
>   kasan: move kasan_addr_to_slab to common.c
>   kasan: make kasan_addr_to_page static
>   kasan: simplify print_report
>   kasan: introduce complete_report_info
>   kasan: fill in cache and object in complete_report_info
>   kasan: rework function arguments in report.c
>   kasan: introduce kasan_complete_mode_report_info
>   kasan: implement stack ring for tag-based modes
>   kasan: better identify bug types for tag-based modes

Let me go and review the patches now.

Thanks,
-- Marco

Andrey Konovalov July 18, 2022, 10:41 p.m. UTC | #2
On Fri, Jun 17, 2022 at 11:32 AM Marco Elver <elver@google.com> wrote:
>
> > The disadvantage:
> >
> > - If the affected object was allocated/freed long before the bug happened
> >   and the stack trace events were purged from the stack ring, the report
> >   will have no stack traces.
>
> Do you have statistics on how likely this is? Maybe through
> identifying what the average lifetime of an entry in the stack ring is?
>
> How bad is this for very long lived objects (e.g. pagecache)?

I ran a test on Pixel 6: a stack ring of size (32 << 10) gets fully
rewritten every ~2.7 seconds during boot. Any buggy object that is
allocated/freed and then accessed after a longer time span than that
will have no stack traces in the report.

This can be dealt with by increasing the stack ring size, but this
comes down to how much memory one is willing to allocate for the stack
ring. If we decide to use sampling (saving stack traces only for every
Nth object), that will affect this too.
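
For a sense of scale, the ~2.7-second figure implies an event rate that can
be turned into a ring-size estimate for any target retention window
(back-of-the-envelope arithmetic, assuming the boot-time rate is
representative; the helper name is made up):

```c
/* Entries needed to retain target_s seconds of alloc/free events, given
 * that a ring of observed_entries entries was observed to be fully
 * rewritten every rewrite_s seconds. */
double ring_entries_for(double observed_entries, double rewrite_s,
			double target_s)
{
	double rate = observed_entries / rewrite_s; /* ~12k events/s here */

	return rate * target_s;
}
```

With the Pixel 6 numbers, a 60-second window needs roughly 730k entries, on
the order of 10+ MB assuming ~16 bytes per entry, which shows why the memory
budget drives this choice.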

But any object that is allocated only once during boot will sooner or
later be purged out of the stack ring. One could argue that such objects
are usually allocated at a single known place, so having a stack trace
won't considerably improve the report.

I would say that we need to deploy some solution, study the reports,
and adjust the implementation based on that.

> > Discussion
> > ==========
> >
> > The current implementation of the stack ring uses a single ring buffer for
> > the whole kernel. This might lead to contention due to atomic accesses to
> > the ring buffer index on multicore systems.
> >
> > It is unclear to me whether the performance impact from this contention
> > is significant compared to the slowdown introduced by collecting stack
> > traces.
>
> I agree, but once stack trace collection becomes faster (per your future
> plans below), this might need to be revisited.

Ack.

Thanks!