
kasan: Fix tag for large allocations when using CONFIG_SLAB

Message ID 20211001024105.3217339-1-willy@infradead.org
State New
Series kasan: Fix tag for large allocations when using CONFIG_SLAB

Commit Message

Matthew Wilcox Oct. 1, 2021, 2:41 a.m. UTC
If an object is allocated on a tail page of a multi-page slab, kasan
will get the wrong tag because page->s_mem is NULL for tail pages.
I'm not quite sure what the user-visible effect of this might be.

Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/kasan/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
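
For context, the helper at issue derives the object index from page->s_mem,
which alloc_slabmgmt() initialises on the head page only. A paraphrase of the
CONFIG_SLAB obj_to_index() from include/linux/slab_def.h of this era (not a
verbatim copy):

	static inline unsigned int obj_to_index(const struct kmem_cache *cache,
						const struct page *page, void *obj)
	{
		/* s_mem points at the first object in the slab; it is set on
		 * the head page only, so for a tail page it is NULL and the
		 * subtraction yields the raw virtual address rather than an
		 * offset into the slab. */
		u32 offset = (obj - page->s_mem);

		return reciprocal_divide(offset, cache->reciprocal_buffer_size);
	}

With s_mem == NULL, the computed index, and hence the tag truncated from it,
no longer matches what the head-page calculation would produce.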

Comments

Marco Elver Oct. 1, 2021, 10:30 a.m. UTC | #1
On Fri, 1 Oct 2021 at 04:42, Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
> If an object is allocated on a tail page of a multi-page slab, kasan
> will get the wrong tag because page->s_mem is NULL for tail pages.
> I'm not quite sure what the user-visible effect of this might be.
>
> Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Marco Elver <elver@google.com>

Indeed this looks wrong. I don't know how much this code is even
tested, because it depends on CONFIG_KASAN_SW_TAGS && CONFIG_SLAB, and
the cache having a constructor or SLAB_TYPESAFE_BY_RCU. HW_TAGS isn't
affected because it doesn't work with SLAB.

And to run SW_TAGS, one needs an arm64 CPU with TBI. And the instances
of KASAN_SW_TAGS I'm aware of use SLUB.

With the eventual availability of Intel LAM, though, I expect KASAN_SW_TAGS
to become more widely used, including its SLAB support.
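
A minimal configuration to exercise the affected path, on an arm64 kernel
with TBI, would look something like this (illustrative fragment; per the
above, the bug additionally needs a cache with a constructor or
SLAB_TYPESAFE_BY_RCU):

	CONFIG_SLAB=y
	CONFIG_KASAN=y
	CONFIG_KASAN_SW_TAGS=y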

> ---
>  mm/kasan/common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2baf121fb8c5..41779ad109cd 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
>         /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
>  #ifdef CONFIG_SLAB
>         /* For SLAB assign tags based on the object index in the freelist. */
> -       return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
> +       return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
>  #else
>         /*
>          * For SLUB assign a random tag during slab creation, otherwise reuse
> --
> 2.32.0

Andrey Konovalov Oct. 1, 2021, 1:29 p.m. UTC | #2
On Fri, Oct 1, 2021 at 4:42 AM Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
>
> If an object is allocated on a tail page of a multi-page slab, kasan
> > will get the wrong tag because page->s_mem is NULL for tail pages.

Interesting. Is this a known property of tail pages? Why does this
happen? I failed to find this exception in the code.

The tag value won't really be "wrong", just unexpected. But if s_mem
is indeed NULL for tail pages, your fix makes sense.

> I'm not quite sure what the user-visible effect of this might be.

Everything should work, as long as tag values are assigned
consistently based on the object address.

>
> Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/kasan/common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2baf121fb8c5..41779ad109cd 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
>         /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
>  #ifdef CONFIG_SLAB
>         /* For SLAB assign tags based on the object index in the freelist. */
> -       return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
> +       return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
>  #else
>         /*
>          * For SLUB assign a random tag during slab creation, otherwise reuse
> --
> 2.32.0
>
Matthew Wilcox Oct. 1, 2021, 2:05 p.m. UTC | #3
On Fri, Oct 01, 2021 at 03:29:29PM +0200, Andrey Konovalov wrote:
> On Fri, Oct 1, 2021 at 4:42 AM Matthew Wilcox (Oracle)
> <willy@infradead.org> wrote:
> >
> > If an object is allocated on a tail page of a multi-page slab, kasan
> > will get the wrong tagbecause page->s_mem is NULL for tail pages.
> > > will get the wrong tag because page->s_mem is NULL for tail pages.
> Interesting. Is this a known property of tail pages? Why does this
> happen? I failed to find this exception in the code.

Yes, it's a known property of tail pages.  kmem_getpages() calls
__alloc_pages_node() which returns a pointer to the head page.
All the tail pages are initialised to point to the head page.
Then in alloc_slabmgmt(), we set ->s_mem of the head page, but
we never set ->s_mem of the tail pages.  Instead, we rely on
people always passing in the head page.  I have a patch in the works
to change the type from struct page to struct slab so you can't
make this mistake.  That was how I noticed this problem.
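
For reference, virt_to_head_page() resolves the compound head before the
index is computed; paraphrased from include/linux/mm.h:

	static inline struct page *virt_to_head_page(const void *x)
	{
		struct page *page = virt_to_page(x);

		/* compound_head() follows a tail page's encoded
		 * compound_head pointer back to the head page, and is
		 * a no-op when given the head page itself. */
		return compound_head(page);
	}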

> The tag value won't really be "wrong", just unexpected. But if s_mem
> is indeed NULL for tail pages, your fix makes sense.
> 
> > I'm not quite sure what the user-visible effect of this might be.
> 
> Everything should work, as long as tag values are assigned
> consistently based on the object address.

OK, maybe this doesn't need to be backported then?  Actually, why
subtract s_mem in the first place?  Can we just avoid that for all
tag calculations?

Andrey Konovalov Oct. 3, 2021, 4:27 p.m. UTC | #4
On Fri, Oct 1, 2021 at 4:06 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Oct 01, 2021 at 03:29:29PM +0200, Andrey Konovalov wrote:
> > On Fri, Oct 1, 2021 at 4:42 AM Matthew Wilcox (Oracle)
> > <willy@infradead.org> wrote:
> > >
> > > If an object is allocated on a tail page of a multi-page slab, kasan
> > > will get the wrong tag because page->s_mem is NULL for tail pages.
> >
> > Interesting. Is this a known property of tail pages? Why does this
> > happen? I failed to find this exception in the code.
>
> Yes, it's a known property of tail pages.  kmem_getpages() calls
> __alloc_pages_node() which returns a pointer to the head page.
> All the tail pages are initialised to point to the head page.
> Then in alloc_slabmgmt(), we set ->s_mem of the head page, but
> we never set ->s_mem of the tail pages.  Instead, we rely on
> people always passing in the head page.  I have a patch in the works
> to change the type from struct page to struct slab so you can't
> make this mistake.  That was how I noticed this problem.

Ah, so it's not "the tail page", it's "a tail page". Meaning any page
but the head page. Got it.

> > The tag value won't really be "wrong", just unexpected. But if s_mem
> > is indeed NULL for tail pages, your fix makes sense.
> >
> > > I'm not quite sure what the user-visible effect of this might be.
> >
> > Everything should work, as long as tag values are assigned
> > consistently based on the object address.
>
> OK, maybe this doesn't need to be backported then?  Actually, why
> subtract s_mem in the first place?  Can we just avoid that for all
> tag calculations?

We could avoid it. To me, it seems cleaner to assign tags based on the
object index rather than on the absolute address. But either way
should work.
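
As a purely hypothetical illustration of the two schemes (the address-based
line is not actual kernel code, and is shown only to make the contrast
concrete):

	/* Index-based, what the fixed CONFIG_SLAB path does: */
	u8 tag_by_index = (u8)obj_to_index(cache, virt_to_head_page(object),
					   (void *)object);

	/* Address-based alternative with no s_mem subtraction; equally
	 * consistent for a given object across allocations: */
	u8 tag_by_addr = (u8)((unsigned long)object / cache->size);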

There's no security or stability impact from this issue, so there's probably
not much incentive to backport. But the patch makes sense.

Thanks!

Andrey Konovalov Oct. 3, 2021, 4:27 p.m. UTC | #5
On Fri, Oct 1, 2021 at 4:42 AM Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
>
> If an object is allocated on a tail page of a multi-page slab, kasan
> will get the wrong tag because page->s_mem is NULL for tail pages.
> I'm not quite sure what the user-visible effect of this might be.
>
> Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/kasan/common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2baf121fb8c5..41779ad109cd 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
>         /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
>  #ifdef CONFIG_SLAB
>         /* For SLAB assign tags based on the object index in the freelist. */
> -       return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
> +       return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
>  #else
>         /*
>          * For SLUB assign a random tag during slab creation, otherwise reuse
> --
> 2.32.0
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Patch

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2baf121fb8c5..41779ad109cd 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,7 +298,7 @@ static inline u8 assign_tag(struct kmem_cache *cache,
 	/* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */
 #ifdef CONFIG_SLAB
 	/* For SLAB assign tags based on the object index in the freelist. */
-	return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);
+	return (u8)obj_to_index(cache, virt_to_head_page(object), (void *)object);
 #else
 	/*
 	 * For SLUB assign a random tag during slab creation, otherwise reuse