
mm: fix swap cache node allocation mask

Message ID: alpine.LSU.2.11.2006151330070.11064@eggly.anvils

Commit Message

Hugh Dickins June 15, 2020, 8:35 p.m. UTC
https://bugzilla.kernel.org/show_bug.cgi?id=208085 reports that a
slightly overcommitted load, testing swap and zram along with i915,
splats and keeps on splatting, when it had better fail less noisily:

gnome-shell: page allocation failure: order:0,
mode:0x400d0(__GFP_IO|__GFP_FS|__GFP_COMP|__GFP_RECLAIMABLE),
nodemask=(null),cpuset=/,mems_allowed=0
CPU: 2 PID: 1155 Comm: gnome-shell Not tainted 5.7.0-1.fc33.x86_64 #1
Call Trace:
dump_stack+0x64/0x88
warn_alloc.cold+0x75/0xd9
__alloc_pages_slowpath.constprop.0+0xcfa/0xd30
__alloc_pages_nodemask+0x2df/0x320
alloc_slab_page+0x195/0x310
allocate_slab+0x3c5/0x440
___slab_alloc+0x40c/0x5f0
__slab_alloc+0x1c/0x30
kmem_cache_alloc+0x20e/0x220
xas_nomem+0x28/0x70
add_to_swap_cache+0x321/0x400
__read_swap_cache_async+0x105/0x240
swap_cluster_readahead+0x22c/0x2e0
shmem_swapin+0x8e/0xc0
shmem_swapin_page+0x196/0x740
shmem_getpage_gfp+0x3a2/0xa60
shmem_read_mapping_page_gfp+0x32/0x60
shmem_get_pages+0x155/0x5e0 [i915]
__i915_gem_object_get_pages+0x68/0xa0 [i915]
i915_vma_pin+0x3fe/0x6c0 [i915]
eb_add_vma+0x10b/0x2c0 [i915]
i915_gem_do_execbuffer+0x704/0x3430 [i915]
i915_gem_execbuffer2_ioctl+0x1ea/0x3e0 [i915]
drm_ioctl_kernel+0x86/0xd0 [drm]
drm_ioctl+0x206/0x390 [drm]
ksys_ioctl+0x82/0xc0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x5b/0xf0
entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported on 5.7, but it really goes back to 3.1, when
shmem_read_mapping_page_gfp() was implemented for use by i915: it
allowed for the __GFP_NORETRY and __GFP_NOWARN flags in most places,
but missed swapin's "& GFP_KERNEL" mask for page tree node allocation
in __read_swap_cache_async(). That mask was there to strip the
HIGHUSER_MOVABLE bits from what page cache uses, but GFP_RECLAIM_MASK
is what is actually needed.

Fixes: 68da9f055755 ("tmpfs: pass gfp to shmem_getpage_gfp")
Reported-by: Chris Murphy <lists@colorremedies.com>
Analyzed-by: Vlastimil Babka <vbabka@suse.cz>
Analyzed-by: Matthew Wilcox <willy@infradead.org>
Tested-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: stable@vger.kernel.org # 3.1+
---

 mm/swap_state.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Vlastimil Babka June 16, 2020, 11:24 a.m. UTC | #1
On 6/15/20 10:35 PM, Hugh Dickins wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=208085 reports that a
> slightly overcommitted load, testing swap and zram along with i915,
> splats and keeps on splatting, when it had better fail less noisily:
[...]

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Thanks!


Patch

--- 5.8-rc1/mm/swap_state.c	2020-06-14 15:13:01.518042420 -0700
+++ linux/mm/swap_state.c	2020-06-15 11:48:02.346691901 -0700
@@ -21,7 +21,7 @@ 
 #include <linux/vmalloc.h>
 #include <linux/swap_slots.h>
 #include <linux/huge_mm.h>
-
+#include "internal.h"
 
 /*
  * swapper_space is a fiction, retained to simplify the path through
@@ -429,7 +429,7 @@  struct page *__read_swap_cache_async(swp
 	__SetPageSwapBacked(page);
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) {
+	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK)) {
 		put_swap_page(page, entry);
 		goto fail_unlock;
 	}