mm, oom: dump stack of victim when reaping failed

Message ID alpine.DEB.2.21.2001141519280.200484@chino.kir.corp.google.com (mailing list archive)
State New, archived
Series mm, oom: dump stack of victim when reaping failed

Commit Message

David Rientjes Jan. 14, 2020, 11:20 p.m. UTC
When a process cannot be oom reaped, for whatever reason, the list of
locks that are held is currently dumped to the kernel log.

Much more interesting is the stack trace of the victim that cannot be
reaped.  If the stack trace is dumped, we have the ability to find
related occurrences in the same kernel code and hopefully solve the
issue that is making it wedged.

Dump the stack trace when a process fails to be oom reaped.

Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/oom_kill.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Michal Hocko Jan. 15, 2020, 8:43 a.m. UTC | #1
On Tue 14-01-20 15:20:04, David Rientjes wrote:
> When a process cannot be oom reaped, for whatever reason, the list of
> locks that are held is currently dumped to the kernel log.
> 
> Much more interesting is the stack trace of the victim that cannot be
> reaped.  If the stack trace is dumped, we have the ability to find
> related occurrences in the same kernel code and hopefully solve the
> issue that is making it wedged.
> 
> Dump the stack trace when a process fails to be oom reaped.

Yes, this is really helpful.

> Signed-off-by: David Rientjes <rientjes@google.com>

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/oom_kill.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -26,6 +26,7 @@
>  #include <linux/sched/mm.h>
>  #include <linux/sched/coredump.h>
>  #include <linux/sched/task.h>
> +#include <linux/sched/debug.h>
>  #include <linux/swap.h>
>  #include <linux/timex.h>
>  #include <linux/jiffies.h>
> @@ -620,6 +621,7 @@ static void oom_reap_task(struct task_struct *tsk)
>  
>  	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
>  		task_pid_nr(tsk), tsk->comm);
> +	sched_show_task(tsk);
>  	debug_show_all_locks();
>  
>  done:
David Rientjes Jan. 15, 2020, 8:27 p.m. UTC | #2
On Wed, 15 Jan 2020, Tetsuo Handa wrote:

> >> When a process cannot be oom reaped, for whatever reason, the list of
> >> locks that are held is currently dumped to the kernel log.
> >>
> >> Much more interesting is the stack trace of the victim that cannot be
> >> reaped.  If the stack trace is dumped, we have the ability to find
> >> related occurrences in the same kernel code and hopefully solve the
> >> issue that is making it wedged.
> >>
> >> Dump the stack trace when a process fails to be oom reaped.
> > 
> > Yes, this is really helpful.
> 
> tsk would be the thread group leader, but the thread that got stuck is not
> always the thread group leader. Maybe dump all threads in that thread group
> without PF_EXITING (or something)?
> 

That's possible, yes.  I think it comes down to the classic problem of how 
much info in the kernel log on oom kill is too much.  Stacks for all 
threads that share the mm being reaped may be *very* verbose.  I'm 
currently tracking a stall in oom reaping where the victim doesn't always 
hold a lock, so we don't know where it is in the kernel; I'm hoping that a 
stack trace for the thread group leader will at least shed some light on 
it.
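
A minimal sketch of what Tetsuo suggests, for illustration only: walk the
victim's thread group and dump every thread that is not already exiting.
The helper name is hypothetical and not part of the posted patch;
for_each_thread() (linux/sched/signal.h) must be called under RCU, and
sched_show_task() comes from linux/sched/debug.h.

	/* Hypothetical: dump all non-exiting threads, not just the leader. */
	static void oom_show_victim_threads(struct task_struct *tsk)
	{
		struct task_struct *t;

		rcu_read_lock();
		for_each_thread(tsk, t) {
			if (t->flags & PF_EXITING)
				continue;	/* already on its way out */
			sched_show_task(t);
		}
		rcu_read_unlock();
	}
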
David Rientjes Jan. 16, 2020, 9:05 p.m. UTC | #3
On Thu, 16 Jan 2020, Tetsuo Handa wrote:

> > I'm 
> > currently tracking a stall in oom reaping where the victim doesn't always 
> > have a lock held so we don't know where it's at in the kernel; I'm hoping 
> > that a stack for the thread group leader will at least shed some light on 
> > it.
> > 
> 
> This change was already proposed at
> https://lore.kernel.org/linux-mm/20180320122818.GL23100@dhcp22.suse.cz/ .
> 

Hmm, it seems the patch didn't get followed up on, but I obviously agree 
with it :)

> And according to that proposal, the culprit is likely i_mmap_lock_write()
> in dup_mmap() in copy_process(). We tried to make that lock killable but
> gave up because nobody knows whether it is safe to do so.
> 

I haven't encountered that particular problem yet; one problem that I've 
found is a victim holding cgroup_threadgroup_rwsem in the exit path, and 
another is the victim not holding any locks at all, which is more 
concerning (why isn't it making forward progress?).  This patch intends to 
provide a clue for the latter.
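
For context on the quoted dup_mmap() point: the killable variant attempted
in that thread would presumably have looked something like the sketch
below.  It was never merged; in current kernels i_mmap_lock_write() is a
plain down_write() on mapping->i_mmap_rwsem, and every caller would have to
handle the -EINTR return.

	/* Sketch of the never-merged killable variant discussed above. */
	static inline int i_mmap_lock_write_killable(struct address_space *mapping)
	{
		return down_write_killable(&mapping->i_mmap_rwsem);
	}
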

Aside: we may also want to consider immediate additional oom killing if 
the initial victim is too small.  We rely on the oom reaper to solve 
livelocks like this by freeing memory so that allocators can drop the 
locks that the victim depends on.  If the victim is too small (we have 
victims <1MB because of oom_score_adj +1000!), an additional immediate 
kill may be warranted because reaping it simply won't free enough memory.
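
As a rough illustration of that aside (nothing like this exists in the
kernel; the threshold and helper are invented for the example), a "too
small to bother reaping" check could be as simple as comparing the victim's
RSS against a floor.  get_mm_rss() returns the resident page count.

	/* Invented example: is the victim's RSS below a 1MB floor? */
	#define OOM_MIN_USEFUL_RSS_KB	1024

	static bool oom_victim_too_small(struct mm_struct *mm)
	{
		unsigned long rss_kb = get_mm_rss(mm) << (PAGE_SHIFT - 10);

		return rss_kb < OOM_MIN_USEFUL_RSS_KB;
	}
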

Patch

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -26,6 +26,7 @@ 
 #include <linux/sched/mm.h>
 #include <linux/sched/coredump.h>
 #include <linux/sched/task.h>
+#include <linux/sched/debug.h>
 #include <linux/swap.h>
 #include <linux/timex.h>
 #include <linux/jiffies.h>
@@ -620,6 +621,7 @@ static void oom_reap_task(struct task_struct *tsk)
 
 	pr_info("oom_reaper: unable to reap pid:%d (%s)\n",
 		task_pid_nr(tsk), tsk->comm);
+	sched_show_task(tsk);
 	debug_show_all_locks();
 
 done: