
mm, vmalloc: fix high order __GFP_NOFAIL allocations

Message ID ZAXynvdNqcI0f6Us@dhcp22.suse.cz (mailing list archive)
State New
Series mm, vmalloc: fix high order __GFP_NOFAIL allocations

Commit Message

Michal Hocko March 6, 2023, 2:03 p.m. UTC
On Mon 06-03-23 13:14:43, Uladzislau Rezki wrote:
[...]
> Some questions:
> 
> 1. Could you please add a comment why you want the bulk_gfp without
> the __GFP_NOFAIL(bulk path)?

The bulk allocator is not documented to fully support __GFP_NOFAIL
semantic IIRC. While it uses alloc_pages as fallback I didn't want
to make any assumptions based on the current implementation. At least
that is my recollection. If we do want to support NOFAIL by the batch
allocator then we can drop the special casing here.

> 2. Could you please add a comment why a high order pages do not want
> __GFP_NOFAIL? You have already explained.

See below
> 3. Looking at the patch:
> 
> <snip>
> +       } else {
> +               alloc_gfp &= ~__GFP_NOFAIL;
> +               nofail = true;
> <snip>
> 
> if user does not want to go with __GFP_NOFAIL flag why you force it in
> case a high order allocation fails and you switch to 0 order allocations? 

Not intended. The above should have been else if (gfp & __GFP_NOFAIL).
Thanks for catching that!

This would be the full patch with the description:
--- 
From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Mon, 6 Mar 2023 09:15:17 +0100
Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations

Gao Xiang has reported that the page allocator complains about high
order __GFP_NOFAIL request coming from the vmalloc core:

 __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
 alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
 vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
 __vmalloc_area_node mm/vmalloc.c:3057 [inline]
 __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
 kvmalloc_node+0x156/0x1a0 mm/util.c:606
 kvmalloc include/linux/slab.h:737 [inline]
 kvmalloc_array include/linux/slab.h:755 [inline]
 kvcalloc include/linux/slab.h:760 [inline]

It seems that I have completely missed the case of high order allocations
backing vmalloc areas when implementing __GFP_NOFAIL support. This means
that [k]vmalloc et al. can perform higher order allocations with
__GFP_NOFAIL which can trigger OOM killer for non-costly orders easily
or cause a lot of reclaim/compaction activity if those requests cannot
be satisfied.

Fix the issue by falling back to zero order allocations for __GFP_NOFAIL
requests if the high order request fails.

Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
Reported-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/vmalloc.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

Comments

Uladzislau Rezki March 6, 2023, 4:37 p.m. UTC | #1
On Mon, Mar 06, 2023 at 03:03:10PM +0100, Michal Hocko wrote:
> On Mon 06-03-23 13:14:43, Uladzislau Rezki wrote:
> [...]
> > Some questions:
> > 
> > 1. Could you please add a comment why you want the bulk_gfp without
> > the __GFP_NOFAIL(bulk path)?
> 
> The bulk allocator is not documented to fully support __GFP_NOFAIL
> semantic IIRC. While it uses alloc_pages as fallback I didn't want
> to make any assumptions based on the current implementation. At least
> that is my recollection. If we do want to support NOFAIL by the batch
> allocator then we can drop the special casing here.
> 
> > 2. Could you please add a comment why a high order pages do not want
> > __GFP_NOFAIL? You have already explained.
> 
> See below
> > 3. Looking at the patch:
> > 
> > <snip>
> > +       } else {
> > +               alloc_gfp &= ~__GFP_NOFAIL;
> > +               nofail = true;
> > <snip>
> > 
> > if user does not want to go with __GFP_NOFAIL flag why you force it in
> > case a high order allocation fails and you switch to 0 order allocations? 
> 
> Not intended. The above should have been else if (gfp & __GFP_NOFAIL).
> Thanks for catching that!
> 
> This would be the full patch with the description:
> --- 
> From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Mon, 6 Mar 2023 09:15:17 +0100
> Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
> 
> Gao Xiang has reported that the page allocator complains about high
> order __GFP_NOFAIL request coming from the vmalloc core:
> 
>  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
>  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
>  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
>  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
>  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
>  kvmalloc_node+0x156/0x1a0 mm/util.c:606
>  kvmalloc include/linux/slab.h:737 [inline]
>  kvmalloc_array include/linux/slab.h:755 [inline]
>  kvcalloc include/linux/slab.h:760 [inline]
> 
> It seems that I have completely missed the case of high order allocations
> backing vmalloc areas when implementing __GFP_NOFAIL support. This means
> that [k]vmalloc et al. can perform higher order allocations with
> __GFP_NOFAIL which can trigger OOM killer for non-costly orders easily
> or cause a lot of reclaim/compaction activity if those requests cannot
> be satisfied.
> 
> Fix the issue by falling back to zero order allocations for __GFP_NOFAIL
> requests if the high order request fails.
> 
> Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
> Reported-by: Gao Xiang <hsiangkao@linux.alibaba.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/vmalloc.c | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..bef6cf2b4d46 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> +	gfp_t alloc_gfp = gfp;
> +	bool nofail = false;
>  	struct page *page;
>  	int i;
>  
> @@ -2893,6 +2895,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * more permissive.
>  	 */
>  	if (!order) {
> +		/* bulk allocator doesn't support nofail req. officially */
>  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
>  
>  		while (nr_allocated < nr_pages) {
> @@ -2931,20 +2934,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> +	} else if (gfp & __GFP_NOFAIL) {
> +		/*
> +		 * Higher order nofail allocations are really expensive and
> +		 * potentially dangerous (pre-mature OOM, disruptive reclaim
> +		 * and compaction etc.
> +		 */
> +		alloc_gfp &= ~__GFP_NOFAIL;
> +		nofail = true;
>  	}
>  
>  	/* High-order pages or fallback path if "bulk" fails. */
> -
>  	while (nr_allocated < nr_pages) {
>  		if (fatal_signal_pending(current))
>  			break;
>  
>  		if (nid == NUMA_NO_NODE)
> -			page = alloc_pages(gfp, order);
> +			page = alloc_pages(alloc_gfp, order);
>  		else
> -			page = alloc_pages_node(nid, gfp, order);
> -		if (unlikely(!page))
> -			break;
> +			page = alloc_pages_node(nid, alloc_gfp, order);
> +		if (unlikely(!page)) {
> +			if (!nofail)
> +				break;
> +
> +			/* fall back to the zero order allocations */
> +			alloc_gfp |= __GFP_NOFAIL;
> +			order = 0;
> +			continue;
> +		}
> +
>  		/*
>  		 * Higher order allocations must be able to be treated as
>  		 * indepdenent small pages by callers (as they can with
> -- 
> 2.30.2
> 
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

--
Uladzislau Rezki
Vlastimil Babka March 6, 2023, 5:29 p.m. UTC | #2
On 3/6/23 15:03, Michal Hocko wrote:

> --- 
> From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Mon, 6 Mar 2023 09:15:17 +0100
> Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
> 
> Gao Xiang has reported that the page allocator complains about high
> order __GFP_NOFAIL request coming from the vmalloc core:
> 
>  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
>  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
>  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
>  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
>  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
>  kvmalloc_node+0x156/0x1a0 mm/util.c:606
>  kvmalloc include/linux/slab.h:737 [inline]
>  kvmalloc_array include/linux/slab.h:755 [inline]
>  kvcalloc include/linux/slab.h:760 [inline]
> 
> It seems that I have completely missed the case of high order allocations
> backing vmalloc areas when implementing __GFP_NOFAIL support. This means
> that [k]vmalloc et al. can perform higher order allocations with
> __GFP_NOFAIL which can trigger OOM killer for non-costly orders easily
> or cause a lot of reclaim/compaction activity if those requests cannot
> be satisfied.
> 
> Fix the issue by falling back to zero order allocations for __GFP_NOFAIL
> requests if the high order request fails.
> 
> Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
> Reported-by: Gao Xiang <hsiangkao@linux.alibaba.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/vmalloc.c | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..bef6cf2b4d46 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> +	gfp_t alloc_gfp = gfp;
> +	bool nofail = false;
>  	struct page *page;
>  	int i;
>  
> @@ -2893,6 +2895,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * more permissive.
>  	 */
>  	if (!order) {
> +		/* bulk allocator doesn't support nofail req. officially */
>  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
>  
>  		while (nr_allocated < nr_pages) {
> @@ -2931,20 +2934,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> +	} else if (gfp & __GFP_NOFAIL) {
> +		/*
> +		 * Higher order nofail allocations are really expensive and
> +		 * potentially dangerous (pre-mature OOM, disruptive reclaim
> +		 * and compaction etc.

				      ^ unclosed parenthesis

> +		 */
> +		alloc_gfp &= ~__GFP_NOFAIL;
> +		nofail = true;
>  	}
>  
>  	/* High-order pages or fallback path if "bulk" fails. */
> -
>  	while (nr_allocated < nr_pages) {
>  		if (fatal_signal_pending(current))
>  			break;
>  
>  		if (nid == NUMA_NO_NODE)
> -			page = alloc_pages(gfp, order);
> +			page = alloc_pages(alloc_gfp, order);
>  		else
> -			page = alloc_pages_node(nid, gfp, order);
> -		if (unlikely(!page))
> -			break;
> +			page = alloc_pages_node(nid, alloc_gfp, order);
> +		if (unlikely(!page)) {
> +			if (!nofail)
> +				break;
> +
> +			/* fall back to the zero order allocations */
> +			alloc_gfp |= __GFP_NOFAIL;
> +			order = 0;
> +			continue;
> +		}
> +
>  		/*
>  		 * Higher order allocations must be able to be treated as
>  		 * indepdenent small pages by callers (as they can with

		   ^ while at it the typo could also be fixed
Michal Hocko March 6, 2023, 5:38 p.m. UTC | #3
Thanks. Here is an incremental diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bef6cf2b4d46..b01295672a31 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2938,7 +2938,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		/*
 		 * Higher order nofail allocations are really expensive and
 		 * potentially dangerous (pre-mature OOM, disruptive reclaim
-		 * and compaction etc.
+		 * and compaction etc).
 		 */
 		alloc_gfp &= ~__GFP_NOFAIL;
 		nofail = true;
@@ -2965,7 +2965,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 
 		/*
 		 * Higher order allocations must be able to be treated as
-		 * indepdenent small pages by callers (as they can with
+		 * independent small pages by callers (as they can with
 		 * small-page vmallocs). Some drivers do their own refcounting
 		 * on vmalloc_to_page() pages, some use page->mapping,
 		 * page->lru, etc.
Baoquan He March 7, 2023, 12:58 a.m. UTC | #4
On 03/06/23 at 03:03pm, Michal Hocko wrote:
 On Mon 06-03-23 13:14:43, Uladzislau Rezki wrote:
 [...]
 > Some questions:
 > 
 > 1. Could you please add a comment why you want the bulk_gfp without
 > the __GFP_NOFAIL(bulk path)?
 
 The bulk allocator is not documented to fully support __GFP_NOFAIL
 semantic IIRC. While it uses alloc_pages as fallback I didn't want
 to make any assumptions based on the current implementation. At least
 that is my recollection. If we do want to support NOFAIL by the batch
 allocator then we can drop the special casing here.
 
 > 2. Could you please add a comment why a high order pages do not want
 > __GFP_NOFAIL? You have already explained.
 
 See below
 > 3. Looking at the patch:
 > 
 > <snip>
 > +       } else {
 > +               alloc_gfp &= ~__GFP_NOFAIL;
 > +               nofail = true;
 > <snip>
 > 
 > if user does not want to go with __GFP_NOFAIL flag why you force it in
 > case a high order allocation fails and you switch to 0 order allocations? 
 
 Not intended. The above should have been else if (gfp & __GFP_NOFAIL).
 Thanks for catching that!
 
 This would be the full patch with the description:
 --- 
 From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
 From: Michal Hocko <mhocko@suse.com>
 Date: Mon, 6 Mar 2023 09:15:17 +0100
 Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
 
 Gao Xiang has reported that the page allocator complains about high
 order __GFP_NOFAIL request coming from the vmalloc core:
 
  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
  kvmalloc_node+0x156/0x1a0 mm/util.c:606
  kvmalloc include/linux/slab.h:737 [inline]
  kvmalloc_array include/linux/slab.h:755 [inline]
  kvcalloc include/linux/slab.h:760 [inline]
 
 It seems that I have completely missed the case of high order allocations
 backing vmalloc areas when implementing __GFP_NOFAIL support. This means
 that [k]vmalloc et al. can perform higher order allocations with
 __GFP_NOFAIL which can trigger OOM killer for non-costly orders easily
 or cause a lot of reclaim/compaction activity if those requests cannot
 be satisfied.
 
 Fix the issue by falling back to zero order allocations for __GFP_NOFAIL
 requests if the high order request fails.
 
 Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
 Reported-by: Gao Xiang <hsiangkao@linux.alibaba.com>
 Signed-off-by: Michal Hocko <mhocko@suse.com>
 ---
  mm/vmalloc.c | 28 +++++++++++++++++++++++-----
  1 file changed, 23 insertions(+), 5 deletions(-)
 
 diff --git a/mm/vmalloc.c b/mm/vmalloc.c
 index ef910bf349e1..bef6cf2b4d46 100644
 --- a/mm/vmalloc.c
 +++ b/mm/vmalloc.c
 @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
  		unsigned int order, unsigned int nr_pages, struct page **pages)
  {
  	unsigned int nr_allocated = 0;
 +	gfp_t alloc_gfp = gfp;
 +	bool nofail = false;
  	struct page *page;
  	int i;
  
 @@ -2893,6 +2895,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
  	 * more permissive.
  	 */
  	if (!order) {
 +		/* bulk allocator doesn't support nofail req. officially */
  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
  
  		while (nr_allocated < nr_pages) {
 @@ -2931,20 +2934,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
  			if (nr != nr_pages_request)
  				break;
  		}
 +	} else if (gfp & __GFP_NOFAIL) {
 +		/*
 +		 * Higher order nofail allocations are really expensive and
 +		 * potentially dangerous (pre-mature OOM, disruptive reclaim
 +		 * and compaction etc.
 +		 */
 +		alloc_gfp &= ~__GFP_NOFAIL;
 +		nofail = true;
  	}
  
  	/* High-order pages or fallback path if "bulk" fails. */
 -
  	while (nr_allocated < nr_pages) {
  		if (fatal_signal_pending(current))
  			break;
  
  		if (nid == NUMA_NO_NODE)
 -			page = alloc_pages(gfp, order);
 +			page = alloc_pages(alloc_gfp, order);
  		else
 -			page = alloc_pages_node(nid, gfp, order);
 -		if (unlikely(!page))
 -			break;
 +			page = alloc_pages_node(nid, alloc_gfp, order);
 +		if (unlikely(!page)) {
 +			if (!nofail)
 +				break;
 +
 +			/* fall back to the zero order allocations */
 +			alloc_gfp |= __GFP_NOFAIL;
 +			order = 0;
 +			continue;
 +		}
 +
  		/*
  		 * Higher order allocations must be able to be treated as
  		 * indepdenent small pages by callers (as they can with

Reviewed-by: Baoquan He <bhe@redhat.com>

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ef910bf349e1..bef6cf2b4d46 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2883,6 +2883,8 @@  vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
+	gfp_t alloc_gfp = gfp;
+	bool nofail = false;
 	struct page *page;
 	int i;
 
@@ -2893,6 +2895,7 @@  vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * more permissive.
 	 */
 	if (!order) {
+		/* bulk allocator doesn't support nofail req. officially */
 		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
 
 		while (nr_allocated < nr_pages) {
@@ -2931,20 +2934,35 @@  vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
+	} else if (gfp & __GFP_NOFAIL) {
+		/*
+		 * Higher order nofail allocations are really expensive and
+		 * potentially dangerous (pre-mature OOM, disruptive reclaim
+		 * and compaction etc.
+		 */
+		alloc_gfp &= ~__GFP_NOFAIL;
+		nofail = true;
 	}
 
 	/* High-order pages or fallback path if "bulk" fails. */
-
 	while (nr_allocated < nr_pages) {
 		if (fatal_signal_pending(current))
 			break;
 
 		if (nid == NUMA_NO_NODE)
-			page = alloc_pages(gfp, order);
+			page = alloc_pages(alloc_gfp, order);
 		else
-			page = alloc_pages_node(nid, gfp, order);
-		if (unlikely(!page))
-			break;
+			page = alloc_pages_node(nid, alloc_gfp, order);
+		if (unlikely(!page)) {
+			if (!nofail)
+				break;
+
+			/* fall back to the zero order allocations */
+			alloc_gfp |= __GFP_NOFAIL;
+			order = 0;
+			continue;
+		}
+
 		/*
 		 * Higher order allocations must be able to be treated as
 		 * indepdenent small pages by callers (as they can with