diff mbox series

[v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order

Message ID 20200630145155.GA52108@lilong (mailing list archive)
State New, archived
Headers show
Series [v2] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order | expand

Commit Message

Long Li June 30, 2020, 2:51 p.m. UTC
On ARM32 with highmem enabled, calling kmalloc() with __GFP_HIGHMEM
set for a large allocation goes through the kmalloc_order() path and
returns NULL. The __GFP_HIGHMEM flag causes alloc_pages() to allocate
highmem pages, which cannot be directly converted to a kernel virtual
address, so kmalloc_order() returns NULL even though the page has
already been allocated, leaking it.

With this change, GFP_SLAB_BUG_MASK is checked before the pages are
allocated, following the same approach as new_slab().

Signed-off-by: Long Li <lonuxli.64@gmail.com>
---

Changes in v2:
- patch is rebased against "[PATCH] mm: Free unused pages in
kmalloc_order()" [1]
- check GFP_SLAB_BUG_MASK and generate warnings before alloc_pages
in kmalloc_order()

[1] https://lkml.org/lkml/2020/6/27/16

 mm/slab_common.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Matthew Wilcox June 30, 2020, 5:13 p.m. UTC | #1
On Tue, Jun 30, 2020 at 02:51:55PM +0000, Long Li wrote:
> On ARM32 with highmem enabled, calling kmalloc() with __GFP_HIGHMEM
> set for a large allocation goes through the kmalloc_order() path and
> returns NULL. The __GFP_HIGHMEM flag causes alloc_pages() to allocate
> highmem pages, which cannot be directly converted to a kernel virtual
> address, so kmalloc_order() returns NULL even though the page has
> already been allocated, leaking it.
> 
> With this change, GFP_SLAB_BUG_MASK is checked before the pages are
> allocated, following the same approach as new_slab().
> 
> Signed-off-by: Long Li <lonuxli.64@gmail.com>
> ---
> 
> Changes in v2:
> - patch is rebased against "[PATCH] mm: Free unused pages in
> kmalloc_order()" [1]
> - check GFP_SLAB_BUG_MASK and generate warnings before alloc_pages
> in kmalloc_order()
> 
> [1] https://lkml.org/lkml/2020/6/27/16
> 
>  mm/slab_common.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a143a8c8f874..3548f4f8374b 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -27,6 +27,7 @@
>  #include <trace/events/kmem.h>
>  
>  #include "slab.h"
> +#include "internal.h"
>  
>  enum slab_state slab_state;
>  LIST_HEAD(slab_caches);
> @@ -815,6 +816,15 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
>  	void *ret = NULL;
>  	struct page *page;
>  
> +	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
> +		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
> +
> +		flags &= ~GFP_SLAB_BUG_MASK;
> +		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
> +				invalid_mask, &invalid_mask, flags, &flags);
> +		dump_stack();
> +	}
> +
>  	flags |= __GFP_COMP;

Oh, this is really good!  I hadn't actually looked at how slab/slub handle
GFP_SLAB_BUG_MASK.  If you don't mind though, I would suggest that this
code should all be in one place.  Perhaps:

gfp_t kmalloc_invalid_flags(gfp_t flags)
{
	gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;

	flags &= ~GFP_SLAB_BUG_MASK;
	pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
			invalid_mask, &invalid_mask, flags, &flags);
	dump_stack();

	return flags;
}

and then call it from the three places?

Also, the changelog could do with a bit of work.  Perhaps:

kmalloc cannot allocate memory from HIGHMEM.  Allocating large amounts of
memory currently bypasses the check and will simply leak the memory when
page_address() returns NULL.  To fix this, factor the GFP_SLAB_BUG_MASK
check out of slab & slub, and call it from kmalloc_order() as well.

Patch

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a143a8c8f874..3548f4f8374b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -27,6 +27,7 @@ 
 #include <trace/events/kmem.h>
 
 #include "slab.h"
+#include "internal.h"
 
 enum slab_state slab_state;
 LIST_HEAD(slab_caches);
@@ -815,6 +816,15 @@  void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	void *ret = NULL;
 	struct page *page;
 
+	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
+		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
+
+		flags &= ~GFP_SLAB_BUG_MASK;
+		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
+				invalid_mask, &invalid_mask, flags, &flags);
+		dump_stack();
+	}
+
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	if (likely(page)) {