kasan: fix per-page tags for non-page_alloc pages

Message ID 1a41abb11c51b264511d9e71c303bb16d5cb367b.1615475452.git.andreyknvl@google.com (mailing list archive)
State New, archived
Series kasan: fix per-page tags for non-page_alloc pages

Commit Message

Andrey Konovalov March 11, 2021, 3:11 p.m. UTC
To allow performing tag checks on page_alloc addresses obtained via
page_address(), tag-based KASAN modes store tags for page_alloc
allocations in page->flags.

Currently, the default tag value stored in page->flags is 0x00.
Therefore, page_address() returns a 0x00ffff... address for pages
that were not allocated via page_alloc.
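
For illustration, here is a minimal standalone sketch of how the per-page tag
ends up in the returned address on arm64, where page_to_virt() inserts
page_kasan_tag() into the pointer's top byte. The helper name and bit
positions below are simplified assumptions, not the kernel code:

    #include <stdint.h>

    /* Sketch: place a KASAN tag into the top byte of a 64-bit address. */
    static inline void *tag_set_sketch(void *addr, uint8_t tag)
    {
            uint64_t a = (uint64_t)addr & ~((uint64_t)0xff << 56); /* clear top byte */
            return (void *)(a | ((uint64_t)tag << 56));            /* insert tag */
    }

    /* With the default page->flags tag of 0x00, the returned pointer's top
     * byte is 0x00, i.e. a 0x00ffff... address. */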

This might cause problems. A particular case we encountered is a conflict
with KFENCE. If a KFENCE-allocated slab object is freed via
kfree(page_address(page) + offset), the address passed to kfree() will
get tagged with 0x00 (as slab pages keep the default per-page tags).
This leads to the is_kfence_address() check failing, and a KFENCE object
ending up in the normal slab freelist, which causes memory corruption.
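
To see why the check fails: is_kfence_address() boils down to a bounds check
on the distance between the pointer and the KFENCE pool base, and the pool
base keeps its native 0xff top byte. A simplified sketch of the check's shape
(parameter names are illustrative; the real helper lives in
include/linux/kfence.h):

    #include <stdbool.h>

    /* Sketch: pool membership as a bounds check on the distance from the
     * pool base. */
    static inline bool in_pool_sketch(const void *addr, const char *pool,
                                      unsigned long pool_size)
    {
            return (unsigned long)((const char *)addr - pool) < pool_size;
    }

    /* If addr was re-tagged to 0x00 by page_address() while pool keeps its
     * native 0xff top byte, the top bytes no longer cancel: the subtraction
     * wraps to a huge value, and the check fails even for an address that
     * lies inside the pool. */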

This patch changes the way KASAN stores tags in page->flags: they are now
stored xor'ed with 0xff. This way, KASAN doesn't need to initialize
per-page flags for every created page, which might be slow.

With this change, page_address() returns natively-tagged (with 0xff)
pointers for pages that didn't have tags set explicitly.
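
Concretely, the encoding round-trips as follows (the values follow directly
from the xor in the two helpers changed below):

    page_kasan_tag_set(page, 0xab)   stores 0xab ^ 0xff = 0x54 in page->flags
    page_kasan_tag(page)             returns 0x54 ^ 0xff = 0xab
    untouched page (stored 0x00)     returns 0x00 ^ 0xff = 0xff (native tag)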

This patch fixes the conflict with KFENCE described above and prevents
similar issues from occurring in the future.

Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: stable@vger.kernel.org
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 include/linux/mm.h | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

Comments

Marco Elver March 11, 2021, 3:17 p.m. UTC | #1
On Thu, 11 Mar 2021 at 16:11, Andrey Konovalov <andreyknvl@google.com> wrote:
>
> To allow performing tag checks on page_alloc addresses obtained via
> page_address(), tag-based KASAN modes store tags for page_alloc
> allocations in page->flags.
>
> Currently, the default tag value stored in page->flags is 0x00.
> Therefore, page_address() returns a 0x00ffff... address for pages
> that were not allocated via page_alloc.
>
> This might cause problems. A particular case we encountered is a conflict
> with KFENCE. If a KFENCE-allocated slab object is freed via
> kfree(page_address(page) + offset), the address passed to kfree() will
> get tagged with 0x00 (as slab pages keep the default per-page tags).
> This leads to the is_kfence_address() check failing, and a KFENCE object
> ending up in the normal slab freelist, which causes memory corruption.
>
> This patch changes the way KASAN stores tags in page->flags: they are now
> stored xor'ed with 0xff. This way, KASAN doesn't need to initialize
> per-page flags for every created page, which might be slow.
>
> With this change, page_address() returns natively-tagged (with 0xff)
> pointers for pages that didn't have tags set explicitly.
>
> This patch fixes the conflict with KFENCE described above and prevents
> similar issues from occurring in the future.
>
> Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
> Cc: stable@vger.kernel.org
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

Reviewed-by: Marco Elver <elver@google.com>

Thank you!

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 77e64e3eac80..c45c28f094a7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1440,16 +1440,28 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 
 #if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
 
+/*
+ * KASAN per-page tags are stored xor'ed with 0xff. This allows avoiding
+ * setting tags for all pages to the native kernel tag value 0xff, as the
+ * default value 0x00 maps to 0xff.
+ */
+
 static inline u8 page_kasan_tag(const struct page *page)
 {
-	if (kasan_enabled())
-		return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
-	return 0xff;
+	u8 tag = 0xff;
+
+	if (kasan_enabled()) {
+		tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+		tag ^= 0xff;
+	}
+
+	return tag;
 }
 
 static inline void page_kasan_tag_set(struct page *page, u8 tag)
 {
 	if (kasan_enabled()) {
+		tag ^= 0xff;
 		page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
 		page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
 	}