diff mbox series

[stable,v5.8] mm: memcg: fix memcg reclaim soft lockup

Message ID 20200918011913.57159-1-jpitti@cisco.com (mailing list archive)
State New, archived

Commit Message

Julius Hemanth Pitti Sept. 18, 2020, 1:19 a.m. UTC
From: Xunlei Pang <xlpang@linux.alibaba.com>

commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.

We've hit a soft lockup with CONFIG_PREEMPT_NONE=y when the target memcg
doesn't have any reclaimable memory.

It can be easily reproduced as below:

  watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
  CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
  Call Trace:
    shrink_lruvec+0x49f/0x640
    shrink_node+0x2a6/0x6f0
    do_try_to_free_pages+0xe9/0x3e0
    try_to_free_mem_cgroup_pages+0xef/0x1f0
    try_charge+0x2c1/0x750
    mem_cgroup_charge+0xd7/0x240
    __add_to_page_cache_locked+0x2fd/0x370
    add_to_page_cache_lru+0x4a/0xc0
    pagecache_get_page+0x10b/0x2f0
    filemap_fault+0x661/0xad0
    ext4_filemap_fault+0x2c/0x40
    __do_fault+0x4d/0xf9
    handle_mm_fault+0x1080/0x1790

It only happens on our 1-vcpu instances, because there's no chance for
the oom reaper to run and reclaim memory from the to-be-killed process.

Add a cond_resched() at the upper shrink_node_memcgs() level to solve
this issue. This gives us a scheduling point for each memcg in the
reclaimed hierarchy, with no dependency on the amount of reclaimable
memory in that memcg, which makes reclaim more predictable.

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Chris Down <chris@chrisdown.name>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
Cc: stable@vger.kernel.org
Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
---
 mm/vmscan.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Greg KH Sept. 21, 2020, 4:12 p.m. UTC | #1
On Thu, Sep 17, 2020 at 06:19:13PM -0700, Julius Hemanth Pitti wrote:
> From: Xunlei Pang <xlpang@linux.alibaba.com>
> 
> commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.
> 
> We've hit a soft lockup with CONFIG_PREEMPT_NONE=y when the target memcg
> doesn't have any reclaimable memory.
> 
> It can be easily reproduced as below:
> 
>   watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
>   CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
>   Call Trace:
>     shrink_lruvec+0x49f/0x640
>     shrink_node+0x2a6/0x6f0
>     do_try_to_free_pages+0xe9/0x3e0
>     try_to_free_mem_cgroup_pages+0xef/0x1f0
>     try_charge+0x2c1/0x750
>     mem_cgroup_charge+0xd7/0x240
>     __add_to_page_cache_locked+0x2fd/0x370
>     add_to_page_cache_lru+0x4a/0xc0
>     pagecache_get_page+0x10b/0x2f0
>     filemap_fault+0x661/0xad0
>     ext4_filemap_fault+0x2c/0x40
>     __do_fault+0x4d/0xf9
>     handle_mm_fault+0x1080/0x1790
> 
> It only happens on our 1-vcpu instances, because there's no chance for
> the oom reaper to run and reclaim memory from the to-be-killed process.
> 
> Add a cond_resched() at the upper shrink_node_memcgs() level to solve
> this issue. This gives us a scheduling point for each memcg in the
> reclaimed hierarchy, with no dependency on the amount of reclaimable
> memory in that memcg, which makes reclaim more predictable.
> 
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Acked-by: Chris Down <chris@chrisdown.name>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Julius Hemanth Pitti <jpitti@cisco.com>
> ---
>  mm/vmscan.c | 8 ++++++++
>  1 file changed, 8 insertions(+)

The Fixes: tag you show here goes back to 4.19, can you provide a 4.19.y
and 5.4.y version of this as well?

thanks,

greg k-h
Julius Hemanth Pitti Sept. 21, 2020, 4:15 p.m. UTC | #2
On Mon, 2020-09-21 at 18:12 +0200, Greg KH wrote:
> On Thu, Sep 17, 2020 at 06:19:13PM -0700, Julius Hemanth Pitti wrote:
> > [...]
> 
> The Fixes: tag you show here goes back to 4.19, can you provide a
> 4.19.y and 5.4.y version of this as well?
Sure. Will send for both 5.4.y and 4.19.y.

> 
> thanks,
> 
> greg k-h

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749d239c62b2..8b97bc615d8c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2619,6 +2619,14 @@  static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		unsigned long reclaimed;
 		unsigned long scanned;
 
+		/*
+		 * This loop can become CPU-bound when target memcgs
+		 * aren't eligible for reclaim - either because they
+		 * don't have any reclaimable pages, or because their
+		 * memory is explicitly protected. Avoid soft lockups.
+		 */
+		cond_resched();
+
 		switch (mem_cgroup_protected(target_memcg, memcg)) {
 		case MEMCG_PROT_MIN:
 			/*