
slub: fix kmalloc_pagealloc_invalid_free unit test

Message ID 20210802180819.1110165-1-shakeelb@google.com (mailing list archive)
State New
Series slub: fix kmalloc_pagealloc_invalid_free unit test

Commit Message

Shakeel Butt Aug. 2, 2021, 6:08 p.m. UTC
The unit test kmalloc_pagealloc_invalid_free makes sure that, for a
higher-order SLUB allocation which goes to the page allocator, free is
called with the correct address, i.e. the virtual address of the head
page.

Commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
free") unified the free code paths for page-allocator-based SLUB
allocations, but instead of using the address passed by the caller, it
extracted the address from the page, making the unit test
kmalloc_pagealloc_invalid_free moot. Fix this by using the address
passed by the caller.

Should we fix this? I think yes, because developers expect KASAN to
catch these types of programming bugs.

Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>

---
 mm/slub.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
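
For context, here is a condensed sketch of what the unit test exercises,
modeled on kmalloc_pagealloc_invalid_free() in lib/test_kasan.c; the
size constant and offset below are illustrative rather than verbatim:

/*
 * Condensed sketch of the KASAN unit test this patch un-breaks,
 * modeled on kmalloc_pagealloc_invalid_free() in lib/test_kasan.c.
 * The size constant and offset are illustrative assumptions.
 */
static void kmalloc_pagealloc_invalid_free(struct kunit *test)
{
	char *ptr;
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;	/* too big for a slab cache */

	ptr = kmalloc(size, GFP_KERNEL);	/* served by the page allocator */
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* Freeing at an offset into the allocation is invalid and should
	 * be reported by KASAN, provided the offset pointer actually
	 * reaches kfree_hook() instead of being replaced by
	 * page_address(page).
	 */
	KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}

The kfree(ptr + 1) call can only trigger a KASAN report if the offset
pointer reaches kfree_hook(), which is exactly what the unified code
path stopped doing.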

Comments

Nathan Chancellor Aug. 2, 2021, 11:52 p.m. UTC | #1
On Mon, Aug 02, 2021 at 11:08:18AM -0700, Shakeel Butt wrote:
> The unit test kmalloc_pagealloc_invalid_free makes sure that, for a
> higher-order SLUB allocation which goes to the page allocator, free is
> called with the correct address, i.e. the virtual address of the head
> page.
> 
> Commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
> free") unified the free code paths for page-allocator-based SLUB
> allocations, but instead of using the address passed by the caller, it
> extracted the address from the page, making the unit test
> kmalloc_pagealloc_invalid_free moot. Fix this by using the address
> passed by the caller.
> 
> Should we fix this? I think yes, because developers expect KASAN to
> catch these types of programming bugs.
> 
> Fixes: f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Reported-by: Nathan Chancellor <nathan@kernel.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Andrew Morton <akpm@linux-foundation.org>

Thank you for the quick fix! It passes my tests on arm64 and x86_64 in
QEMU with a few different clang versions.

Tested-by: Nathan Chancellor <nathan@kernel.org>

> ---
>  mm/slub.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index af984e4990e8..60aeedc436d5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3236,12 +3236,12 @@ struct detached_freelist {
>  	struct kmem_cache *s;
>  };
>  
> -static inline void free_nonslab_page(struct page *page)
> +static inline void free_nonslab_page(struct page *page, void *object)
>  {
>  	unsigned int order = compound_order(page);
>  
>  	VM_BUG_ON_PAGE(!PageCompound(page), page);
> -	kfree_hook(page_address(page));
> +	kfree_hook(object);
>  	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
>  	__free_pages(page, order);
>  }
> @@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
>  	if (!s) {
>  		/* Handle kalloc'ed objects */
>  		if (unlikely(!PageSlab(page))) {
> -			free_nonslab_page(page);
> +			free_nonslab_page(page, object);
>  			p[size] = NULL; /* mark object processed */
>  			return size;
>  		}
> @@ -4258,7 +4258,7 @@ void kfree(const void *x)
>  
>  	page = virt_to_head_page(x);
>  	if (unlikely(!PageSlab(page))) {
> -		free_nonslab_page(page);
> +		free_nonslab_page(page, object);
>  		return;
>  	}
>  	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
> -- 
> 2.32.0.554.ge1b32706d8-goog
> 
>
Roman Gushchin Aug. 3, 2021, 3:30 a.m. UTC | #2
On Mon, Aug 02, 2021 at 11:08:18AM -0700, Shakeel Butt wrote:
> The unit test kmalloc_pagealloc_invalid_free makes sure that, for a
> higher-order SLUB allocation which goes to the page allocator, free is
> called with the correct address, i.e. the virtual address of the head
> page.
> 
> Commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
> free") unified the free code paths for page-allocator-based SLUB
> allocations, but instead of using the address passed by the caller, it
> extracted the address from the page, making the unit test
> kmalloc_pagealloc_invalid_free moot. Fix this by using the address
> passed by the caller.
> 
> Should we fix this? I think yes, because developers expect KASAN to
> catch these types of programming bugs.

I think so too.

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!

Patch

diff --git a/mm/slub.c b/mm/slub.c
index af984e4990e8..60aeedc436d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	kfree_hook(page_address(page));
+	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page);
+			free_nonslab_page(page, object);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4258,7 +4258,7 @@ void kfree(const void *x)
 
 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page);
+		free_nonslab_page(page, object);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
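
To see why passing object through matters: kfree_hook() feeds into
KASAN's large-free check, which compares the freed pointer against the
virtual address of the head page and reports an invalid free on
mismatch. A rough paraphrase of that check (based on the logic in
mm/kasan/common.c; the function name and structure here are simplified,
not verbatim):

/*
 * Paraphrase of KASAN's invalid-free check for page-allocator-backed
 * allocations (see mm/kasan/common.c); simplified for illustration.
 */
static void kasan_check_large_free(void *ptr, unsigned long ip)
{
	/* A valid free of a page-allocator-backed allocation must pass
	 * the virtual address of the head page; anything else is an
	 * invalid free worth reporting.
	 */
	if (ptr != page_address(virt_to_head_page(ptr)))
		kasan_report_invalid_free(ptr, ip);
}

With the pre-fix code, the pointer handed to this check was always
page_address(page), so the comparison could never fail and the invalid
free went unreported.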