mm: Make ksize() a reporting-only function

Message ID 20221022180455.never.023-kees@kernel.org (mailing list archive)
State Superseded
Series mm: Make ksize() a reporting-only function

Commit Message

Kees Cook Oct. 22, 2022, 6:08 p.m. UTC
With all "silently resizing" callers of ksize() refactored, remove the
logic in ksize() that would allow it to be used to effectively change
the size of an allocation (bypassing __alloc_size hints, etc). Users
wanting this feature need to either use kmalloc_size_roundup() before an
allocation, or use krealloc() directly.
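
As a rough illustration of the first option (not part of this patch;
alloc_rounded(), "payload", and "usable" are made-up names):

	/* Illustrative only: round up before allocating. */
	static void *alloc_rounded(size_t payload, size_t *usable)
	{
		*usable = kmalloc_size_roundup(payload);
		/* __alloc_size and KASAN both see the full bucket size. */
		return kmalloc(*usable, GFP_KERNEL);
	}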

For kfree_sensitive(), move the unpoisoning logic inline. Replace some
of the partially open-coded ksize() in __do_krealloc() with ksize() now
that it doesn't perform unpoisoning.

Adjust the KUnit tests to match the new ksize() behavior.

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: linux-mm@kvack.org
Cc: kasan-dev@googlegroups.com
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
This requires at least this be landed first:
https://lore.kernel.org/lkml/20221021234713.you.031-kees@kernel.org/
I suspect given that is the most central ksize() user, this ksize()
fix might be best to land through the netdev tree...
---
 mm/kasan/kasan_test.c |  8 +++++---
 mm/slab_common.c      | 33 ++++++++++++++-------------------
 2 files changed, 19 insertions(+), 22 deletions(-)

Comments

Vlastimil Babka Oct. 25, 2022, 11:53 a.m. UTC | #1
On 10/22/22 20:08, Kees Cook wrote:
> With all "silently resizing" callers of ksize() refactored, remove the
> logic in ksize() that would allow it to be used to effectively change
> the size of an allocation (bypassing __alloc_size hints, etc). Users
> wanting this feature need to either use kmalloc_size_roundup() before an
> allocation, or use krealloc() directly.
> 
> For kfree_sensitive(), move the unpoisoning logic inline. Replace some
> of the partially open-coded ksize() in __do_krealloc() with ksize() now
> that it doesn't perform unpoisoning.
> 
> Adjust the KUnit tests to match the new ksize() behavior.
> 
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Paolo Abeni <pabeni@redhat.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Andrey Konovalov <andreyknvl@gmail.com>
> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Cc: linux-mm@kvack.org
> Cc: kasan-dev@googlegroups.com
> Cc: netdev@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
> This requires at least this be landed first:
> https://lore.kernel.org/lkml/20221021234713.you.031-kees@kernel.org/

Don't we need all parts to have landed first, even if the skbuff one is the
most prominent?

> I suspect given that is the most central ksize() user, this ksize()
> fix might be best to land through the netdev tree...
> ---
>  mm/kasan/kasan_test.c |  8 +++++---
>  mm/slab_common.c      | 33 ++++++++++++++-------------------
>  2 files changed, 19 insertions(+), 22 deletions(-)
> 
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 0d59098f0876..cb5c54adb503 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -783,7 +783,7 @@ static void kasan_global_oob_left(struct kunit *test)
>  	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
>  }
>  
> -/* Check that ksize() makes the whole object accessible. */
> +/* Check that ksize() does NOT unpoison whole object. */
>  static void ksize_unpoisons_memory(struct kunit *test)
>  {
>  	char *ptr;
> @@ -791,15 +791,17 @@ static void ksize_unpoisons_memory(struct kunit *test)
>  
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
>  	real_size = ksize(ptr);
> +	KUNIT_EXPECT_GT(test, real_size, size);
>  
>  	OPTIMIZER_HIDE_VAR(ptr);
>  
>  	/* This access shouldn't trigger a KASAN report. */
> -	ptr[size] = 'x';
> +	ptr[size - 1] = 'x';
>  
>  	/* This one must. */
> -	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
> +	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
>  
>  	kfree(ptr);
>  }
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 33b1886b06eb..eabd66fcabd0 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1333,11 +1333,11 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>  	void *ret;
>  	size_t ks;
>  
> -	/* Don't use instrumented ksize to allow precise KASAN poisoning. */
> +	/* Check for double-free before calling ksize. */
>  	if (likely(!ZERO_OR_NULL_PTR(p))) {
>  		if (!kasan_check_byte(p))
>  			return NULL;
> -		ks = kfence_ksize(p) ?: __ksize(p);
> +		ks = ksize(p);
>  	} else
>  		ks = 0;
>  
> @@ -1405,8 +1405,10 @@ void kfree_sensitive(const void *p)
>  	void *mem = (void *)p;
>  
>  	ks = ksize(mem);
> -	if (ks)
> +	if (ks) {
> +		kasan_unpoison_range(mem, ks);
>  		memzero_explicit(mem, ks);
> +	}
>  	kfree(mem);
>  }
>  EXPORT_SYMBOL(kfree_sensitive);
> @@ -1415,10 +1417,11 @@ EXPORT_SYMBOL(kfree_sensitive);
>   * ksize - get the actual amount of memory allocated for a given object
>   * @objp: Pointer to the object
>   *
> - * kmalloc may internally round up allocations and return more memory
> + * kmalloc() may internally round up allocations and return more memory
>   * than requested. ksize() can be used to determine the actual amount of
> - * memory allocated. The caller may use this additional memory, even though
> - * a smaller amount of memory was initially specified with the kmalloc call.
> + * allocated memory. The caller may NOT use this additional memory, unless
> + * it calls krealloc(). To avoid an alloc/realloc cycle, callers can use
> + * kmalloc_size_roundup() to find the size of the associated kmalloc bucket.
>   * The caller must guarantee that objp points to a valid object previously
>   * allocated with either kmalloc() or kmem_cache_alloc(). The object
>   * must not be freed during the duration of the call.
> @@ -1427,13 +1430,11 @@ EXPORT_SYMBOL(kfree_sensitive);
>   */
>  size_t ksize(const void *objp)
>  {
> -	size_t size;
> -
>  	/*
> -	 * We need to first check that the pointer to the object is valid, and
> -	 * only then unpoison the memory. The report printed from ksize() is
> -	 * more useful, then when it's printed later when the behaviour could
> -	 * be undefined due to a potential use-after-free or double-free.
> +	 * We need to first check that the pointer to the object is valid.
> +	 * The KASAN report printed from ksize() is more useful than when
> +	 * it's printed later when the behaviour could be undefined due to
> +	 * a potential use-after-free or double-free.
>  	 *
>  	 * We use kasan_check_byte(), which is supported for the hardware
>  	 * tag-based KASAN mode, unlike kasan_check_read/write().
> @@ -1447,13 +1448,7 @@ size_t ksize(const void *objp)
>  	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !kasan_check_byte(objp))
>  		return 0;
>  
> -	size = kfence_ksize(objp) ?: __ksize(objp);
> -	/*
> -	 * We assume that ksize callers could use whole allocated area,
> -	 * so we need to unpoison this area.
> -	 */
> -	kasan_unpoison_range(objp, size);
> -	return size;
> +	return kfence_ksize(objp) ?: __ksize(objp);
>  }
>  EXPORT_SYMBOL(ksize);
>
Kees Cook Oct. 25, 2022, 6:38 p.m. UTC | #2
On Tue, Oct 25, 2022 at 01:53:54PM +0200, Vlastimil Babka wrote:
> On 10/22/22 20:08, Kees Cook wrote:
> > With all "silently resizing" callers of ksize() refactored, remove the
> > logic in ksize() that would allow it to be used to effectively change
> > the size of an allocation (bypassing __alloc_size hints, etc). Users
> > wanting this feature need to either use kmalloc_size_roundup() before an
> > allocation, or use krealloc() directly.
> > 
> > For kfree_sensitive(), move the unpoisoning logic inline. Replace some
> > of the partially open-coded ksize() in __do_krealloc() with ksize() now
> > that it doesn't perform unpoisoning.
> > 
> > [...]
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!

> > ---
> > This requires at least this be landed first:
> > https://lore.kernel.org/lkml/20221021234713.you.031-kees@kernel.org/
> 
> Don't we need all parts to have landed first, even if the skbuff one is the
> most prominent?

Yes, though, I suspect there will be some cases we couldn't easily find.

Here are the prerequisites I'm aware of:

in -next:
  36875a063b5e ("net: ipa: Proactively round up to kmalloc bucket size")
  ab3f7828c979 ("openvswitch: Use kmalloc_size_roundup() to match ksize() usage")
  d6dd508080a3 ("bnx2: Use kmalloc_size_roundup() to match ksize() usage")

reviewed, waiting to land (should I take these myself?)
  btrfs: send: Proactively round up to kmalloc bucket size
    https://lore.kernel.org/lkml/20220923202822.2667581-8-keescook@chromium.org/
  dma-buf: Proactively round up to kmalloc bucket size
    https://lore.kernel.org/lkml/20221018090858.never.941-kees@kernel.org/

partially reviewed:
  igb: Proactively round up to kmalloc bucket size
    https://lore.kernel.org/lkml/20221018092340.never.556-kees@kernel.org/

unreviewed:
  coredump: Proactively round up to kmalloc bucket size
    https://lore.kernel.org/lkml/20221018090701.never.996-kees@kernel.org/
  devres: Use kmalloc_size_roundup() to match ksize() usage
    https://lore.kernel.org/lkml/20221018090406.never.856-kees@kernel.org/

needs updating:
  mempool: Use kmalloc_size_roundup() to match ksize() usage
    https://lore.kernel.org/lkml/20221018090323.never.897-kees@kernel.org/
  bpf: Use kmalloc_size_roundup() to match ksize() usage
    https://lore.kernel.org/lkml/20221018090550.never.834-kees@kernel.org/
Andrey Konovalov Oct. 27, 2022, 7:05 p.m. UTC | #3
On Sat, Oct 22, 2022 at 8:08 PM Kees Cook <keescook@chromium.org> wrote:
>
> With all "silently resizing" callers of ksize() refactored, remove the
> logic in ksize() that would allow it to be used to effectively change
> the size of an allocation (bypassing __alloc_size hints, etc). Users
> wanting this feature need to either use kmalloc_size_roundup() before an
> allocation, or use krealloc() directly.
>
> For kfree_sensitive(), move the unpoisoning logic inline. Replace some
> of the partially open-coded ksize() in __do_krealloc() with ksize() now
> that it doesn't perform unpoisoning.
>
> Adjust the KUnit tests to match the new ksize() behavior.

Hi Kees,

> -/* Check that ksize() makes the whole object accessible. */
> +/* Check that ksize() does NOT unpoison whole object. */
>  static void ksize_unpoisons_memory(struct kunit *test)
>  {
>         char *ptr;
> @@ -791,15 +791,17 @@ static void ksize_unpoisons_memory(struct kunit *test)
>
>         ptr = kmalloc(size, GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
>         real_size = ksize(ptr);
> +       KUNIT_EXPECT_GT(test, real_size, size);
>
>         OPTIMIZER_HIDE_VAR(ptr);
>
>         /* This access shouldn't trigger a KASAN report. */
> -       ptr[size] = 'x';
> +       ptr[size - 1] = 'x';
>
>         /* This one must. */
> -       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
> +       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);

How about also accessing ptr[size] here? It would allow for more precise
checking of the in-object redzone.

>
>         kfree(ptr);
>  }

Thanks!
Kees Cook Oct. 27, 2022, 7:13 p.m. UTC | #4
On Thu, Oct 27, 2022 at 09:05:45PM +0200, Andrey Konovalov wrote:
> On Sat, Oct 22, 2022 at 8:08 PM Kees Cook <keescook@chromium.org> wrote:
> [...]
> > -/* Check that ksize() makes the whole object accessible. */
> > +/* Check that ksize() does NOT unpoison whole object. */
> >  static void ksize_unpoisons_memory(struct kunit *test)
> >  {
> >         char *ptr;
> > @@ -791,15 +791,17 @@ static void ksize_unpoisons_memory(struct kunit *test)
> >
> >         ptr = kmalloc(size, GFP_KERNEL);
> >         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > +
> >         real_size = ksize(ptr);
> > +       KUNIT_EXPECT_GT(test, real_size, size);
> >
> >         OPTIMIZER_HIDE_VAR(ptr);
> >
> >         /* This access shouldn't trigger a KASAN report. */
> > -       ptr[size] = 'x';
> > +       ptr[size - 1] = 'x';
> >
> >         /* This one must. */
> > -       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
> > +       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> 
> How about also accessing ptr[size] here? It would allow for more precise
> checking of the in-object redzone.

Sure! Probably both ptr[size] and ptr[real_size - 1], yes?
Andrey Konovalov Oct. 27, 2022, 7:15 p.m. UTC | #5
On Thu, Oct 27, 2022 at 9:13 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Thu, Oct 27, 2022 at 09:05:45PM +0200, Andrey Konovalov wrote:
> > On Sat, Oct 22, 2022 at 8:08 PM Kees Cook <keescook@chromium.org> wrote:
> > [...]
> > > -/* Check that ksize() makes the whole object accessible. */
> > > +/* Check that ksize() does NOT unpoison whole object. */
> > >  static void ksize_unpoisons_memory(struct kunit *test)
> > >  {
> > >         char *ptr;
> > > @@ -791,15 +791,17 @@ static void ksize_unpoisons_memory(struct kunit *test)
> > >
> > >         ptr = kmalloc(size, GFP_KERNEL);
> > >         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > > +
> > >         real_size = ksize(ptr);
> > > +       KUNIT_EXPECT_GT(test, real_size, size);
> > >
> > >         OPTIMIZER_HIDE_VAR(ptr);
> > >
> > >         /* This access shouldn't trigger a KASAN report. */
> > > -       ptr[size] = 'x';
> > > +       ptr[size - 1] = 'x';
> > >
> > >         /* This one must. */
> > > -       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
> > > +       KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> >
> > How about also accessing ptr[size] here? It would allow for a more
> > precise checking of the in-object redzone.
>
> Sure! Probably both ptr[size] and ptr[real_size - 1], yes?

Yes, sounds good. Thank you!
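
A rough sketch of the resulting accesses (illustrative only, not the actual
follow-up hunk; the CONFIG_KASAN_GENERIC guard is an assumption here, since
the tag-based modes only detect out-of-bounds accesses at 16-byte
granularity):

	/* This access shouldn't trigger a KASAN report. */
	ptr[size - 1] = 'x';

	/* These must. */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);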

Patch

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 0d59098f0876..cb5c54adb503 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -783,7 +783,7 @@  static void kasan_global_oob_left(struct kunit *test)
 	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
 }
 
-/* Check that ksize() makes the whole object accessible. */
+/* Check that ksize() does NOT unpoison whole object. */
 static void ksize_unpoisons_memory(struct kunit *test)
 {
 	char *ptr;
@@ -791,15 +791,17 @@  static void ksize_unpoisons_memory(struct kunit *test)
 
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
 	real_size = ksize(ptr);
+	KUNIT_EXPECT_GT(test, real_size, size);
 
 	OPTIMIZER_HIDE_VAR(ptr);
 
 	/* This access shouldn't trigger a KASAN report. */
-	ptr[size] = 'x';
+	ptr[size - 1] = 'x';
 
 	/* This one must. */
-	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
+	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
 
 	kfree(ptr);
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 33b1886b06eb..eabd66fcabd0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1333,11 +1333,11 @@  __do_krealloc(const void *p, size_t new_size, gfp_t flags)
 	void *ret;
 	size_t ks;
 
-	/* Don't use instrumented ksize to allow precise KASAN poisoning. */
+	/* Check for double-free before calling ksize. */
 	if (likely(!ZERO_OR_NULL_PTR(p))) {
 		if (!kasan_check_byte(p))
 			return NULL;
-		ks = kfence_ksize(p) ?: __ksize(p);
+		ks = ksize(p);
 	} else
 		ks = 0;
 
@@ -1405,8 +1405,10 @@  void kfree_sensitive(const void *p)
 	void *mem = (void *)p;
 
 	ks = ksize(mem);
-	if (ks)
+	if (ks) {
+		kasan_unpoison_range(mem, ks);
 		memzero_explicit(mem, ks);
+	}
 	kfree(mem);
 }
 EXPORT_SYMBOL(kfree_sensitive);
@@ -1415,10 +1417,11 @@  EXPORT_SYMBOL(kfree_sensitive);
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
  *
- * kmalloc may internally round up allocations and return more memory
+ * kmalloc() may internally round up allocations and return more memory
  * than requested. ksize() can be used to determine the actual amount of
- * memory allocated. The caller may use this additional memory, even though
- * a smaller amount of memory was initially specified with the kmalloc call.
+ * allocated memory. The caller may NOT use this additional memory, unless
+ * it calls krealloc(). To avoid an alloc/realloc cycle, callers can use
+ * kmalloc_size_roundup() to find the size of the associated kmalloc bucket.
  * The caller must guarantee that objp points to a valid object previously
  * allocated with either kmalloc() or kmem_cache_alloc(). The object
  * must not be freed during the duration of the call.
@@ -1427,13 +1430,11 @@  EXPORT_SYMBOL(kfree_sensitive);
  */
 size_t ksize(const void *objp)
 {
-	size_t size;
-
 	/*
-	 * We need to first check that the pointer to the object is valid, and
-	 * only then unpoison the memory. The report printed from ksize() is
-	 * more useful, then when it's printed later when the behaviour could
-	 * be undefined due to a potential use-after-free or double-free.
+	 * We need to first check that the pointer to the object is valid.
+	 * The KASAN report printed from ksize() is more useful than when
+	 * it's printed later when the behaviour could be undefined due to
+	 * a potential use-after-free or double-free.
 	 *
 	 * We use kasan_check_byte(), which is supported for the hardware
 	 * tag-based KASAN mode, unlike kasan_check_read/write().
@@ -1447,13 +1448,7 @@  size_t ksize(const void *objp)
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !kasan_check_byte(objp))
 		return 0;
 
-	size = kfence_ksize(objp) ?: __ksize(objp);
-	/*
-	 * We assume that ksize callers could use whole allocated area,
-	 * so we need to unpoison this area.
-	 */
-	kasan_unpoison_range(objp, size);
-	return size;
+	return kfence_ksize(objp) ?: __ksize(objp);
 }
 EXPORT_SYMBOL(ksize);
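
For completeness, a rough sketch of the krealloc() route described in the
updated ksize() kernel-doc above (illustrative only; grow_into_bucket() is a
made-up helper, and callers still need to handle a NULL return):

	/* Illustrative only: explicitly claim the slack in the bucket. */
	static void *grow_into_bucket(void *p)
	{
		size_t usable = ksize(p);	/* reporting only, no unpoisoning */

		/* Stays in place when 'usable' still fits the same bucket. */
		return krealloc(p, usable, GFP_KERNEL);
	}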