
[v6,6/9] kfence, kasan: make KFENCE compatible with KASAN

Message ID 20201029131649.182037-7-elver@google.com (mailing list archive)
State New, archived
Series KFENCE: A low-overhead sampling-based memory safety error detector

Commit Message

Marco Elver Oct. 29, 2020, 1:16 p.m. UTC
From: Alexander Potapenko <glider@google.com>

We make KFENCE compatible with KASAN for testing KFENCE itself. In
particular, KASAN helps to catch any potential corruptions to KFENCE
state, or other corruptions that may be a result of freepointer
corruptions in the main allocators.

To indicate that the combination of the two is generally discouraged,
CONFIG_EXPERT=y should be set. It also gives us the nice property that
KFENCE will be build-tested by allyesconfig builds.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v5:
* Also guard kasan_unpoison_shadow with is_kfence_address(), as it may
  be called from SL*B internals, currently ksize() (see the sketch below).
* Make kasan_record_aux_stack() compatible with KFENCE, which may be
  called from outside KASAN runtime.
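
For reference, the ksize() path that motivates this guard looks roughly as
follows (a simplified sketch, not verbatim mm/slab_common.c): the whole
allocated area is unconditionally unpoisoned, so the call can reach
kasan_unpoison_shadow() with a KFENCE address and a size that is not a
multiple of the machine-word size.

/*
 * Simplified sketch (not verbatim) of ksize(): callers may use the whole
 * allocated area, so the entire object is unpoisoned, which is why
 * kasan_unpoison_shadow() now bails out early for KFENCE addresses.
 */
size_t ksize(const void *objp)
{
	size_t size;

	if (unlikely(ZERO_OR_NULL_PTR(objp)) ||
	    !__kasan_check_read(objp, 1))
		return 0;

	size = __ksize(objp);
	kasan_unpoison_shadow(objp, size);
	return size;
}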
---
 lib/Kconfig.kfence |  2 +-
 mm/kasan/common.c  | 15 +++++++++++++++
 mm/kasan/generic.c |  3 ++-
 3 files changed, 18 insertions(+), 2 deletions(-)
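
For readers jumping into this patch on its own: is_kfence_address(), the
guard used throughout, is (roughly, as introduced earlier in this series) a
single range check against the dedicated KFENCE object pool, which is what
makes it cheap enough to add to the KASAN paths. A simplified sketch:

/*
 * Rough sketch (not verbatim) of the guard used below, as introduced
 * earlier in this series: a KFENCE address is simply one that falls within
 * the contiguous, statically sized KFENCE object pool.
 */
extern char *__kfence_pool;

static __always_inline bool is_kfence_address(const void *addr)
{
	/*
	 * The unsigned subtraction makes this a single comparison: addresses
	 * below __kfence_pool wrap around and also fail the check.
	 */
	return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE);
}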

Comments

Jann Horn Oct. 30, 2020, 2:49 a.m. UTC | #1
On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@google.com> wrote:
> We make KFENCE compatible with KASAN for testing KFENCE itself. In
> particular, KASAN helps to catch any potential corruptions to KFENCE
> state, or other corruptions that may be a result of freepointer
> corruptions in the main allocators.
>
> To indicate that the combination of the two is generally discouraged,
> CONFIG_EXPERT=y should be set. It also gives us the nice property that
> KFENCE will be build-tested by allyesconfig builds.
>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> Co-developed-by: Marco Elver <elver@google.com>
> Signed-off-by: Marco Elver <elver@google.com>
> Signed-off-by: Alexander Potapenko <glider@google.com>

Reviewed-by: Jann Horn <jannh@google.com>

with one nit:

[...]
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
[...]
> @@ -141,6 +142,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
>          */
>         address = reset_tag(address);
>
> +       /*
> +        * We may be called from SL*B internals, such as ksize(): with a size
> +        * not a multiple of machine-word size, avoid poisoning the invalid
> +        * portion of the word for KFENCE memory.
> +        */
> +       if (is_kfence_address(address))
> +               return;

It might be helpful if you could add a comment that explains that
kasan_poison_object_data() does not need a similar guard because
kasan_poison_object_data() is always paired with
kasan_unpoison_object_data() - that threw me off a bit at first.
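
For context, the pairing referred to above appears in the slab internals
roughly as follows (a simplified sketch, not verbatim mm/slub.c): an object
with a constructor is unpoisoned only for the duration of the ctor call and
then poisoned again right away, so the two calls stay balanced.

/*
 * Simplified sketch (not verbatim) of how SLUB pairs the two calls when
 * setting up a new object: the object is unpoisoned just long enough to
 * run its constructor, then poisoned again.
 */
static void setup_object(struct kmem_cache *s, struct page *page,
			 void *object)
{
	setup_object_debug(s, page, object);
	if (unlikely(s->ctor)) {
		kasan_unpoison_object_data(s, object);
		s->ctor(object);
		kasan_poison_object_data(s, object);
	}
}
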
Marco Elver Oct. 30, 2020, 1:46 p.m. UTC | #2
On Fri, 30 Oct 2020 at 03:50, Jann Horn <jannh@google.com> wrote:
>
> On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@google.com> wrote:
> > We make KFENCE compatible with KASAN for testing KFENCE itself. In
> > particular, KASAN helps to catch any potential corruptions to KFENCE
> > state, or other corruptions that may be a result of freepointer
> > corruptions in the main allocators.
> >
> > To indicate that the combination of the two is generally discouraged,
> > CONFIG_EXPERT=y should be set. It also gives us the nice property that
> > KFENCE will be build-tested by allyesconfig builds.
> >
> > Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> > Co-developed-by: Marco Elver <elver@google.com>
> > Signed-off-by: Marco Elver <elver@google.com>
> > Signed-off-by: Alexander Potapenko <glider@google.com>
>
> Reviewed-by: Jann Horn <jannh@google.com>

Thanks!

> with one nit:
>
> [...]
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> [...]
> > @@ -141,6 +142,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
> >          */
> >         address = reset_tag(address);
> >
> > +       /*
> > +        * We may be called from SL*B internals, such as ksize(): with a size
> > +        * not a multiple of machine-word size, avoid poisoning the invalid
> > +        * portion of the word for KFENCE memory.
> > +        */
> > +       if (is_kfence_address(address))
> > +               return;
>
> It might be helpful if you could add a comment that explains that
> kasan_poison_object_data() does not need a similar guard because
> kasan_poison_object_data() is always paired with
> kasan_unpoison_object_data() - that threw me off a bit at first.

Well, KFENCE objects should never be poisoned/unpoisoned because the
kasan_alloc and free hooks have a kfence guard, and none of the code
in sl*b.c that does kasan_{poison,unpoison}_object_data() should be
executed for KFENCE objects.

But I just noticed that kernel/scs.c seems to kasan_poison and
unpoison objects, and keeps them poisoned for most of the object
lifetime. I think we better add a kfence guard to
kasan_poison_shadow() as well.

Thanks,
-- Marco
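
The kernel/scs.c usage mentioned above follows roughly this pattern (a
simplified sketch, not the exact upstream code): the shadow call stack
object is poisoned right after allocation and only unpoisoned again on
free, so it stays poisoned for almost its entire lifetime.

/*
 * Simplified sketch (not verbatim) of the kernel/scs.c pattern referred to
 * above: the object is kept KASAN-poisoned for most of its lifetime to
 * catch stray accesses to the shadow call stack.
 */
static void *scs_alloc(int node)
{
	void *s = kmem_cache_alloc_node(scs_cache, GFP_KERNEL | __GFP_ZERO, node);

	if (!s)
		return NULL;

	/* Poisoned here; stays poisoned until scs_free(). */
	kasan_poison_object_data(scs_cache, s);
	return s;
}

static void scs_free(void *s)
{
	kasan_unpoison_object_data(scs_cache, s);
	kmem_cache_free(scs_cache, s);
}
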
Jann Horn Oct. 30, 2020, 3:08 p.m. UTC | #3
On Fri, Oct 30, 2020 at 2:46 PM Marco Elver <elver@google.com> wrote:
> On Fri, 30 Oct 2020 at 03:50, Jann Horn <jannh@google.com> wrote:
> > On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@google.com> wrote:
> > > We make KFENCE compatible with KASAN for testing KFENCE itself. In
> > > particular, KASAN helps to catch any potential corruptions to KFENCE
> > > state, or other corruptions that may be a result of freepointer
> > > corruptions in the main allocators.
> > >
> > > To indicate that the combination of the two is generally discouraged,
> > > CONFIG_EXPERT=y should be set. It also gives us the nice property that
> > > KFENCE will be build-tested by allyesconfig builds.
> > >
> > > Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> > > Co-developed-by: Marco Elver <elver@google.com>
> > > Signed-off-by: Marco Elver <elver@google.com>
> > > Signed-off-by: Alexander Potapenko <glider@google.com>
> >
> > Reviewed-by: Jann Horn <jannh@google.com>
>
> Thanks!
>
> > with one nit:
> >
> > [...]
> > > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > [...]
> > > @@ -141,6 +142,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
> > >          */
> > >         address = reset_tag(address);
> > >
> > > +       /*
> > > +        * We may be called from SL*B internals, such as ksize(): with a size
> > > +        * not a multiple of machine-word size, avoid poisoning the invalid
> > > +        * portion of the word for KFENCE memory.
> > > +        */
> > > +       if (is_kfence_address(address))
> > > +               return;
> >
> > It might be helpful if you could add a comment that explains that
> > kasan_poison_object_data() does not need a similar guard because
> > kasan_poison_object_data() is always paired with
> > kasan_unpoison_object_data() - that threw me off a bit at first.
>
> Well, KFENCE objects should never be poisoned/unpoisoned because the
> kasan_alloc and free hooks have a kfence guard, and none of the code
> in sl*b.c that does kasan_{poison,unpoison}_object_data() should be
> executed for KFENCE objects.
>
> But I just noticed that kernel/scs.c seems to kasan_poison and
> unpoison objects, and keeps them poisoned for most of the object
> lifetime.

FWIW, I wouldn't be surprised if other parts of the kernel also ended
up wanting to have in-object redzones eventually - e.g. inside skb
buffers, which have a struct skb_shared_info at the end. AFAIU at the
moment, KASAN can't catch small OOB accesses from these buffers
because of the structure that follows them.
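
For illustration, the layout in question is roughly the following
(simplified, illustrative only):

/*
 * Simplified view of an skb's data allocation:
 *
 *   skb->head                                skb_end_pointer(skb)
 *   |                                        |
 *   [ headroom | packet data | tailroom ] [ struct skb_shared_info ]
 *
 * A small out-of-bounds access past the packet data still lands in valid,
 * addressable memory (tailroom or skb_shared_info), so KASAN's shadow
 * check does not trip without an in-object redzone.
 */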

> I think we better add a kfence guard to
> kasan_poison_shadow() as well.

Sounds good.
Marco Elver Oct. 30, 2020, 3:19 p.m. UTC | #4
On Fri, 30 Oct 2020 at 16:09, Jann Horn <jannh@google.com> wrote:
>
> On Fri, Oct 30, 2020 at 2:46 PM Marco Elver <elver@google.com> wrote:
> > On Fri, 30 Oct 2020 at 03:50, Jann Horn <jannh@google.com> wrote:
> > > On Thu, Oct 29, 2020 at 2:17 PM Marco Elver <elver@google.com> wrote:
> > > > We make KFENCE compatible with KASAN for testing KFENCE itself. In
> > > > particular, KASAN helps to catch any potential corruptions to KFENCE
> > > > state, or other corruptions that may be a result of freepointer
> > > > corruptions in the main allocators.
> > > >
> > > > To indicate that the combination of the two is generally discouraged,
> > > > CONFIG_EXPERT=y should be set. It also gives us the nice property that
> > > > KFENCE will be build-tested by allyesconfig builds.
> > > >
> > > > Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
> > > > Co-developed-by: Marco Elver <elver@google.com>
> > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > Signed-off-by: Alexander Potapenko <glider@google.com>
> > >
> > > Reviewed-by: Jann Horn <jannh@google.com>
> >
> > Thanks!
> >
> > > with one nit:
> > >
> > > [...]
> > > > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > > [...]
> > > > @@ -141,6 +142,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
> > > >          */
> > > >         address = reset_tag(address);
> > > >
> > > > +       /*
> > > > +        * We may be called from SL*B internals, such as ksize(): with a size
> > > > +        * not a multiple of machine-word size, avoid poisoning the invalid
> > > > +        * portion of the word for KFENCE memory.
> > > > +        */
> > > > +       if (is_kfence_address(address))
> > > > +               return;
> > >
> > > It might be helpful if you could add a comment that explains that
> > > kasan_poison_object_data() does not need a similar guard because
> > > kasan_poison_object_data() is always paired with
> > > kasan_unpoison_object_data() - that threw me off a bit at first.
> >
> > Well, KFENCE objects should never be poisoned/unpoisoned because the
> > kasan_alloc and free hooks have a kfence guard, and none of the code
> > in sl*b.c that does kasan_{poison,unpoison}_object_data() should be
> > executed for KFENCE objects.
> >
> > But I just noticed that kernel/scs.c seems to kasan_poison and
> > unpoison objects, and keeps them poisoned for most of the object
> > lifetime.
>
> FWIW, I wouldn't be surprised if other parts of the kernel also ended
> up wanting to have in-object redzones eventually - e.g. inside skb
> buffers, which have a struct skb_shared_info at the end. AFAIU at the
> moment, KASAN can't catch small OOB accesses from these buffers
> because of the structure that follows them.

Sure, and it might also become more interesting with MTE-based KASAN.

But currently we recommend not enabling generic KASAN+KFENCE together,
because it'd be redundant once the instrumentation price for generic (or
SW-tag) KASAN is already being paid. The changes here are also mostly for
testing KFENCE itself.

That may change with MTE-based KASAN, however, which may have modes where
stack traces aren't collected; in those modes, having KFENCE around to get
actionable debug info across a fleet of machines may still be wanted. But
that story is still evolving. The code here is only for the generic and
SW-tag-based KASAN modes; MTE will have its own kasan_{un,}poison_shadow
(AFAIK it's being renamed to kasan_{un,}poison_memory), which works just
fine with KFENCE.

> > I think we better add a kfence guard to
> > kasan_poison_shadow() as well.
>
> Sounds good.

Patch

diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
index d24baa3bce4a..639b48cc75d4 100644
--- a/lib/Kconfig.kfence
+++ b/lib/Kconfig.kfence
@@ -5,7 +5,7 @@  config HAVE_ARCH_KFENCE
 
 menuconfig KFENCE
 	bool "KFENCE: low-overhead sampling-based memory safety error detector"
-	depends on HAVE_ARCH_KFENCE && !KASAN && (SLAB || SLUB)
+	depends on HAVE_ARCH_KFENCE && (!KASAN || EXPERT) && (SLAB || SLUB)
 	depends on JUMP_LABEL # To ensure performance, require jump labels
 	select STACKTRACE
 	help
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 950fd372a07e..ac1d404fb41e 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -18,6 +18,7 @@ 
 #include <linux/init.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
+#include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/linkage.h>
 #include <linux/memblock.h>
@@ -141,6 +142,14 @@  void kasan_unpoison_shadow(const void *address, size_t size)
 	 */
 	address = reset_tag(address);
 
+	/*
+	 * We may be called from SL*B internals, such as ksize(): with a size
+	 * not a multiple of machine-word size, avoid poisoning the invalid
+	 * portion of the word for KFENCE memory.
+	 */
+	if (is_kfence_address(address))
+		return;
+
 	kasan_poison_shadow(address, size, tag);
 
 	if (size & KASAN_SHADOW_MASK) {
@@ -396,6 +405,9 @@  static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 	tagged_object = object;
 	object = reset_tag(object);
 
+	if (is_kfence_address(object))
+		return false;
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
 		kasan_report_invalid_free(tagged_object, ip);
@@ -444,6 +456,9 @@  static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	if (unlikely(object == NULL))
 		return NULL;
 
+	if (is_kfence_address(object))
+		return (void *)object;
+
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
 	redzone_end = round_up((unsigned long)object + cache->object_size,
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 248264b9cb76..1069ecd1cd55 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -21,6 +21,7 @@ 
 #include <linux/init.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
+#include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/linkage.h>
 #include <linux/memblock.h>
@@ -332,7 +333,7 @@  void kasan_record_aux_stack(void *addr)
 	struct kasan_alloc_meta *alloc_info;
 	void *object;
 
-	if (!(page && PageSlab(page)))
+	if (is_kfence_address(addr) || !(page && PageSlab(page)))
 		return;
 
 	cache = page->slab_cache;