
mm/slub, kasan: fix checking page_alloc allocations on free

Message ID ef00ee9e0cf2b8fbcdf639d5038c373b69c0e1e1.1628639145.git.andreyknvl@gmail.com (mailing list archive)
State New
Series mm/slub, kasan: fix checking page_alloc allocations on free

Commit Message

andrey.konovalov@linux.dev Aug. 10, 2021, 11:46 p.m. UTC
From: Andrey Konovalov <andreyknvl@gmail.com>

A fix for stat counters, f227f0faf63b ("slub: fix unreclaimable slab stat
for bulk free"), used page_address(page) as the kfree_hook() argument
instead of the object pointer. While the change is technically correct,
page_address(page) is always page-aligned, so it breaks KASAN's ability
to detect improper (unaligned) pointers passed to kfree() and causes the
kmalloc_pagealloc_invalid_free test to fail.

This patch changes free_nonslab_page() to pass the object pointer to
kfree_hook() instead of page_address(page), restoring the behavior from
before the fix.

Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Andrey Konovalov <andreyknvl@gmail.com>
---
 mm/slub.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
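
To see why the test fails, it helps to sketch the scenario that the
kmalloc_pagealloc_invalid_free test exercises. Roughly (simplified here;
the in-tree KASAN test wraps this in KUnit assertions):

	/*
	 * Allocations above KMALLOC_MAX_CACHE_SIZE bypass the slab caches
	 * and are served directly by the page allocator (!PageSlab).
	 */
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
	char *ptr = kmalloc(size, GFP_KERNEL);

	kfree(ptr + 1);	/* invalid, unaligned free: KASAN should report it */

kfree(ptr + 1) still reaches free_nonslab_page(), because
virt_to_head_page() resolves the unaligned pointer to the same compound
page. With page_address(page) passed to kfree_hook(), KASAN is handed the
aligned start of the allocation rather than ptr + 1, and the invalid free
goes unreported.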

Comments

Shakeel Butt Aug. 11, 2021, 12:17 a.m. UTC | #1
On Tue, Aug 10, 2021 at 4:47 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@gmail.com>
>
> A fix for stat counters, f227f0faf63b ("slub: fix unreclaimable slab stat
> for bulk free"), used page_address(page) as the kfree_hook() argument
> instead of the object pointer. While the change is technically correct,
> page_address(page) is always page-aligned, so it breaks KASAN's ability
> to detect improper (unaligned) pointers passed to kfree() and causes the
> kmalloc_pagealloc_invalid_free test to fail.
>
> This patch changes free_nonslab_page() to pass the object pointer to
> kfree_hook() instead of page_address(page), restoring the behavior from
> before the fix.
>
> Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
> Signed-off-by: Andrey Konovalov <andreyknvl@gmail.com>

The fix is already in the mm tree:
https://lkml.kernel.org/r/20210802180819.1110165-1-shakeelb@google.com
Andrey Konovalov Aug. 11, 2021, 12:41 a.m. UTC | #2
On Wed, Aug 11, 2021 at 2:18 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Tue, Aug 10, 2021 at 4:47 PM <andrey.konovalov@linux.dev> wrote:
> >
> > [...]
>
> The fix is already in the mm tree:
> https://lkml.kernel.org/r/20210802180819.1110165-1-shakeelb@google.com

Ah, I missed this.

Please CC kasan-dev for KASAN-related fixes.

Thanks!

Patch

diff --git a/mm/slub.c b/mm/slub.c
index af984e4990e8..56079dd33c74 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(void *object, struct page *page)
 {
 	unsigned int order = compound_order(page);
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	kfree_hook(page_address(page));
+	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page);
+			free_nonslab_page(object, page);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4258,7 +4258,7 @@ void kfree(const void *x)
 
 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page);
+		free_nonslab_page(object, page);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
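
For context, kfree_hook() forwards page_alloc-backed objects to KASAN's
large-allocation free path. Around this kernel version that check looks
approximately like the following sketch (based on mm/kasan/common.c;
exact names and surrounding logic may differ):

	void __kasan_kfree_large(void *ptr, unsigned long ip)
	{
		/*
		 * The only valid pointer to free is the start of the
		 * compound page. If the caller passes page_address(page)
		 * instead of the original object, this comparison can
		 * never fail, so unaligned frees are never reported.
		 */
		if (ptr != page_address(virt_to_head_page(ptr)))
			kasan_report_invalid_free(ptr, ip);
	}

This is why the hook must see the caller-supplied pointer: the check is a
comparison against page_address(), and feeding it page_address() makes it
a tautology.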