
memcg, oom: be careful about races when warning about no reclaimable task

Message ID 20180807072553.14941-1-mhocko@kernel.org (mailing list archive)
State New, archived
Series memcg, oom: be careful about races when warning about no reclaimable task

Commit Message

Michal Hocko Aug. 7, 2018, 7:25 a.m. UTC
From: Michal Hocko <mhocko@suse.com>

"memcg, oom: move out_of_memory back to the charge path" has added a
warning triggered when the oom killer cannot find any eligible task
and so there is no way to reclaim the oom memcg under its hard limit.
Further charges for such a memcg are forced and therefore the hard limit
isolation is weakened.

The current warning is however too eager to trigger even when we are not
really hitting the above condition. Syzbot[1] and Greg Thelen have noticed
that we can hit this condition even when there is still an oom victim
pending. E.g. the following race is possible:

memcg has two tasks taskA, taskB.

CPU1 (taskA)			CPU2			CPU3 (taskB)
try_charge
  mem_cgroup_out_of_memory				try_charge
      select_bad_process(taskB)
      oom_kill_process		oom_reap_task
				# No real memory reaped
    				  			  mem_cgroup_out_of_memory
				# set taskB -> MMF_OOM_SKIP
  # retry charge
  mem_cgroup_out_of_memory
    oom_lock						    oom_lock
    select_bad_process(self)
    oom_kill_process(self)
    oom_unlock
							    # no eligible task

In fact the syzbot test triggered this situation by placing multiple tasks
into a memcg with the hard limit set to 0, so no task really had any memory
charged to the memcg:

: Memory cgroup stats for /ile0: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
: Tasks state (memory values in pages):
: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
: [   6569]     0  6562     9427        1    53248        0             0 syz-executor0
: [   6576]     0  6576     9426        0    61440        0             0 syz-executor6
: [   6578]     0  6578     9426      534    61440        0             0 syz-executor4
: [   6579]     0  6579     9426        0    57344        0             0 syz-executor5
: [   6582]     0  6582     9426        0    61440        0             0 syz-executor7
: [   6584]     0  6584     9426        0    57344        0             0 syz-executor1

So in principle there is indeed nothing reclaimable in this memcg and
this looks like a misconfiguration. On the other hand, we can clearly
kill all those tasks, so it is a bit early to warn and scare users. Avoid
that by checking whether current is the oom victim and bypassing the
warning in that case. The victim is allowed to force the charge and
terminate to release its temporary charge along the way.

[1] http://lkml.kernel.org/r/0000000000005e979605729c1564@google.com
Fixes: "memcg, oom: move out_of_memory back to the charge path"
Noticed-by: Greg Thelen <gthelen@google.com>
Reported-and-tested-by: syzbot+bab151e82a4e973fa325@syzkaller.appspotmail.com
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/memcontrol.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
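
For context, the "forced" charges mentioned above come from the fallback
path in try_charge(). A simplified sketch of that path, based on the
v4.18-era mm/memcontrol.c (surrounding logic and error handling elided):

	/*
	 * A task that is already an oom victim (or is dying anyway) is
	 * allowed to bypass the hard limit so it can exit quickly and
	 * release its temporary charge along the way.
	 */
	if (unlikely(tsk_is_oom_victim(current) ||
		     fatal_signal_pending(current) ||
		     current->flags & PF_EXITING))
		goto force;
	...
force:
	/*
	 * The allocation either can't fail or will lead to more memory
	 * being freed very soon: charge past the limit.
	 */
	page_counter_charge(&memcg->memory, nr_pages);
	if (do_memsw_account())
		page_counter_charge(&memcg->memsw, nr_pages);
	css_get_many(&memcg->css, nr_pages);
	return 0;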

Comments

Tetsuo Handa Aug. 7, 2018, 10:15 a.m. UTC | #1
On 2018/08/07 16:25, Michal Hocko wrote:
> @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
>  		return OOM_ASYNC;
>  	}
>  
> -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> +			tsk_is_oom_victim(current))
>  		return OOM_SUCCESS;
>  
>  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
> 

I don't think this patch is appropriate. This patch only avoids hitting WARN(1).
This patch does not address the root cause:

The task_will_free_mem(current) test in out_of_memory() is returning false
because the test_bit(MMF_OOM_SKIP, &mm->flags) test in task_will_free_mem()
sees MMF_OOM_SKIP already set by the OOM reaper. The OOM killer does not need
to start selecting the next OOM victim until "the current thread completes
__mmput()" or "it fails to complete __mmput() within a reasonable period".
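
For reference, the task_will_free_mem(current) shortcut in question sits
near the top of out_of_memory() and looks roughly like this in v4.18
(comment shortened):

	/*
	 * If current is already dying, select it as the victim so it
	 * can allocate what it needs and exit quickly, instead of
	 * picking a fresh victim.
	 */
	if (task_will_free_mem(current)) {
		mark_oom_victim(current);
		wake_oom_reaper(current);
		return true;
	}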

According to https://syzkaller.appspot.com/text?tag=CrashLog&x=15a1c770400000 ,
PID=23767 selected PID=23766 as an OOM victim, and the OOM reaper set MMF_OOM_SKIP
before PID=23766 unnecessarily selected PID=23767 as the next OOM victim.
At uptime = 366.550949, out_of_memory() should have returned true without selecting
the next OOM victim because tsk_is_oom_victim(current) == true.

[  365.869417] syz-executor2 invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), order=0, oom_score_adj=0
[  365.878899] CPU: 0 PID: 23767 Comm: syz-executor2 Not tainted 4.18.0-rc6-next-20180725+ #18
(...snipped...)
[  366.487490] Tasks state (memory values in pages):
[  366.492349] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  366.501237] [  23766]     0 23766    17620     8221   126976        0             0 syz-executor3
[  366.510367] [  23767]     0 23767    17618     8218   126976        0             0 syz-executor2
[  366.519409] Memory cgroup out of memory: Kill process 23766 (syz-executor3) score 8252000 or sacrifice child
[  366.529422] Killed process 23766 (syz-executor3) total-vm:70480kB, anon-rss:116kB, file-rss:32768kB, shmem-rss:0kB
[  366.540456] oom_reaper: reaped process 23766 (syz-executor3), now anon-rss:0kB, file-rss:32000kB, shmem-rss:0kB
[  366.550949] syz-executor3 invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), order=0, oom_score_adj=0
[  366.560374] CPU: 1 PID: 23766 Comm: syz-executor3 Not tainted 4.18.0-rc6-next-20180725+ #18
(...snipped...)
[  367.138136] Tasks state (memory values in pages):
[  367.142986] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  367.151889] [  23766]     0 23766    17620     8002   126976        0             0 syz-executor3
[  367.160946] [  23767]     0 23767    17618     8218   126976        0             0 syz-executor2
[  367.169994] Memory cgroup out of memory: Kill process 23767 (syz-executor2) score 8249000 or sacrifice child
[  367.180119] Killed process 23767 (syz-executor2) total-vm:70472kB, anon-rss:104kB, file-rss:32768kB, shmem-rss:0kB
[  367.192101] oom_reaper: reaped process 23767 (syz-executor2), now anon-rss:0kB, file-rss:32000kB, shmem-rss:0kB
[  367.202986] ------------[ cut here ]------------
[  367.207845] Memory cgroup charge failed because of no reclaimable memory! This looks like a misconfiguration or a kernel bug.
[  367.207965] WARNING: CPU: 1 PID: 23767 at mm/memcontrol.c:1710 try_charge+0x734/0x1680
[  367.227540] Kernel panic - not syncing: panic_on_warn set ...

Of course, if the hard limit is 0, all processes will be killed after all. But
Michal is ignoring the fact that if the hard limit were not 0, there is a chance
of saving the next process from being needlessly killed if we waited until "mm
of PID=23766 completed __mmput()" or "mm of PID=23766 failed to complete
__mmput() within a reasonable period".

We can make efforts not to return false at

	/*
	 * This task has already been drained by the oom reaper so there are
	 * only small chances it will free some more
	 */
	if (test_bit(MMF_OOM_SKIP, &mm->flags))
		return false;

(I admit that ignoring MMF_OOM_SKIP for once might not be sufficient for the
memcg case), and we can use feedback-based backoff like
"[PATCH 4/4] mm, oom: Fix unnecessary killing of additional processes." *UNTIL*
we come to the point where the OOM reaper can always reclaim all memory.
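
A rough illustration of the direction being proposed here (this is not an
actual patch; the mm->oom_skip_stamp field and the OOM_SKIP_GRACE constant
are invented for the sketch): give the victim a grace period after
MMF_OOM_SKIP is set before treating it as unreclaimable.

	/*
	 * Hypothetical sketch only: keep treating a reaped victim as
	 * pending until it finishes __mmput() or a timeout expires,
	 * instead of giving up as soon as MMF_OOM_SKIP is set.
	 */
	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
		if (time_before(jiffies,
				mm->oom_skip_stamp + OOM_SKIP_GRACE))
			return true;	/* still waiting for exit_mmap() */
		return false;		/* victim seems stuck, move on */
	}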
Michal Hocko Aug. 7, 2018, 11:04 a.m. UTC | #2
On Tue 07-08-18 19:15:11, Tetsuo Handa wrote:
[...]
> Of course, if the hard limit is 0, all processes will be killed after all. But
> Michal is ignoring the fact that if the hard limit were not 0, there is a chance
> of saving the next process from being needlessly killed if we waited until "mm
> of PID=23766 completed __mmput()" or "mm of PID=23766 failed to complete
> __mmput() within a reasonable period".

This is a completely different issue IMHO. I haven't seen reports about
overly eager memcg oom killing so far.
 
> We can make efforts not to return false at
> 
> 	/*
> 	 * This task has already been drained by the oom reaper so there are
> 	 * only small chances it will free some more
> 	 */
> 	if (test_bit(MMF_OOM_SKIP, &mm->flags))
> 		return false;
> 
> (I admit that ignoring MMF_OOM_SKIP for once might not be sufficient for the
> memcg case), and we can use feedback-based backoff like
> "[PATCH 4/4] mm, oom: Fix unnecessary killing of additional processes." *UNTIL*
> we come to the point where the OOM reaper can always reclaim all memory.

The code is quite tricky and I am really reluctant to make it even more
so without seeing that this is really hurting real users with real workloads.
Johannes Weiner Aug. 7, 2018, 8:02 p.m. UTC | #3
On Tue, Aug 07, 2018 at 09:25:53AM +0200, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> "memcg, oom: move out_of_memory back to the charge path" has added a
> warning triggered when the oom killer cannot find any eligible task
> and so there is no way to reclaim the oom memcg under its hard limit.
> Further charges for such a memcg are forced and therefore the hard limit
> isolation is weakened.
> 
> The current warning is however too eager to trigger even when we are not
> really hitting the above condition. Syzbot[1] and Greg Thelen have noticed
> that we can hit this condition even when there is still an oom victim
> pending. E.g. the following race is possible:
> 
> memcg has two tasks taskA, taskB.
> 
> CPU1 (taskA)			CPU2			CPU3 (taskB)
> try_charge
>   mem_cgroup_out_of_memory				try_charge
>       select_bad_process(taskB)
>       oom_kill_process		oom_reap_task
> 				# No real memory reaped
>     				  			  mem_cgroup_out_of_memory
> 				# set taskB -> MMF_OOM_SKIP
>   # retry charge
>   mem_cgroup_out_of_memory
>     oom_lock						    oom_lock
>     select_bad_process(self)
>     oom_kill_process(self)
>     oom_unlock
> 							    # no eligible task
> 
> In fact the syzbot test triggered this situation by placing multiple tasks
> into a memcg with the hard limit set to 0, so no task really had any memory
> charged to the memcg:
> 
> : Memory cgroup stats for /ile0: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
> : Tasks state (memory values in pages):
> : [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
> : [   6569]     0  6562     9427        1    53248        0             0 syz-executor0
> : [   6576]     0  6576     9426        0    61440        0             0 syz-executor6
> : [   6578]     0  6578     9426      534    61440        0             0 syz-executor4
> : [   6579]     0  6579     9426        0    57344        0             0 syz-executor5
> : [   6582]     0  6582     9426        0    61440        0             0 syz-executor7
> : [   6584]     0  6584     9426        0    57344        0             0 syz-executor1
> 
> So in principle there is indeed nothing reclaimable in this memcg and
> this looks like a misconfiguration. On the other hand, we can clearly
> kill all those tasks, so it is a bit early to warn and scare users. Avoid
> that by checking whether current is the oom victim and bypassing the
> warning in that case. The victim is allowed to force the charge and
> terminate to release its temporary charge along the way.
> 
> [1] http://lkml.kernel.org/r/0000000000005e979605729c1564@google.com
> Fixes: "memcg, oom: move out_of_memory back to the charge path"
> Noticed-by: Greg Thelen <gthelen@google.com>
> Reported-and-tested-by: syzbot+bab151e82a4e973fa325@syzkaller.appspotmail.com
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/memcontrol.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4603ad75c9a9..1b6eed1bc404 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
>  		return OOM_ASYNC;
>  	}
>  
> -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> +			tsk_is_oom_victim(current))
>  		return OOM_SUCCESS;
>  
>  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "

This is really ugly. :(

If that check is only there to suppress the warning when the limit is
0, this should really be a separate branch around the warning, with a
fat comment that this is a ridiculous cornercase, and not look like it
is an essential part of the memcg reclaim/oom process.

Personally, I really don't get the point of this message. What is the
user to do with this information? What are we to do with it if people
report it? It conveys zero information on what the problem could be,
because it asserts a really vague high-level thing. Shouldn't such
debugging happen inside the OOM killer? What are the conceivable
scenarios in which this triggers other than obvious misconfigs?

What would we lose by just deleting it?
Johannes Weiner Aug. 7, 2018, 8:19 p.m. UTC | #4
On Tue, Aug 07, 2018 at 07:15:11PM +0900, Tetsuo Handa wrote:
> On 2018/08/07 16:25, Michal Hocko wrote:
> > @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
> >  		return OOM_ASYNC;
> >  	}
> >  
> > -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> > +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> > +			tsk_is_oom_victim(current))
> >  		return OOM_SUCCESS;
> >  
> >  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
> > 
> 
> I don't think this patch is appropriate. This patch only avoids hitting WARN(1).
> This patch does not address the root cause:
> 
> The task_will_free_mem(current) test in out_of_memory() is returning false
> because the test_bit(MMF_OOM_SKIP, &mm->flags) test in task_will_free_mem()
> sees MMF_OOM_SKIP already set by the OOM reaper. The OOM killer does not need
> to start selecting the next OOM victim until "the current thread completes
> __mmput()" or "it fails to complete __mmput() within a reasonable period".

I don't see why it matters whether the OOM victim exits or not, unless
you count the memory consumed by struct task_struct.

> According to https://syzkaller.appspot.com/text?tag=CrashLog&x=15a1c770400000 ,
> PID=23767 selected PID=23766 as an OOM victim, and the OOM reaper set MMF_OOM_SKIP
> before PID=23766 unnecessarily selected PID=23767 as the next OOM victim.
> At uptime = 366.550949, out_of_memory() should have returned true without selecting
> the next OOM victim because tsk_is_oom_victim(current) == true.

The code works just fine. We have to kill tasks until we a) free
enough memory or b) run out of tasks or c) kill current. When one of
these outcomes is reached, we allow the charge and return.

The only problem here is a warning in the wrong place.
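
The loop being described is the retry logic in try_charge(); roughly, in
the v4.18-era code (details elided):

	/*
	 * Sketch of the oom outcome handling in try_charge().
	 * OOM_SUCCESS means a victim was killed (or current became the
	 * victim), so the charge is retried; OOM_FAILED means no
	 * eligible task was left and the charge is forced.
	 */
	oom_status = mem_cgroup_oom(mem_over_limit, gfp_mask,
				    get_order(nr_pages * PAGE_SIZE));
	switch (oom_status) {
	case OOM_SUCCESS:
		nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
		goto retry;
	case OOM_FAILED:
		goto force;
	default:
		goto nomem;
	}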
Michal Hocko Aug. 7, 2018, 8:23 p.m. UTC | #5
On Tue 07-08-18 16:02:47, Johannes Weiner wrote:
> On Tue, Aug 07, 2018 at 09:25:53AM +0200, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
> > 
> > "memcg, oom: move out_of_memory back to the charge path" has added a
> > warning triggered when the oom killer cannot find any eligible task
> > and so there is no way to reclaim the oom memcg under its hard limit.
> > Further charges for such a memcg are forced and therefore the hard limit
> > isolation is weakened.
> > 
> > The current warning is however too eager to trigger even when we are not
> > really hitting the above condition. Syzbot[1] and Greg Thelen have noticed
> > that we can hit this condition even when there is still an oom victim
> > pending. E.g. the following race is possible:
> > 
> > memcg has two tasks taskA, taskB.
> > 
> > CPU1 (taskA)			CPU2			CPU3 (taskB)
> > try_charge
> >   mem_cgroup_out_of_memory				try_charge
> >       select_bad_process(taskB)
> >       oom_kill_process		oom_reap_task
> > 				# No real memory reaped
> >     				  			  mem_cgroup_out_of_memory
> > 				# set taskB -> MMF_OOM_SKIP
> >   # retry charge
> >   mem_cgroup_out_of_memory
> >     oom_lock						    oom_lock
> >     select_bad_process(self)
> >     oom_kill_process(self)
> >     oom_unlock
> > 							    # no eligible task
> > 
> > In fact the syzbot test triggered this situation by placing multiple tasks
> > into a memcg with the hard limit set to 0, so no task really had any memory
> > charged to the memcg:
> > 
> > : Memory cgroup stats for /ile0: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
> > : Tasks state (memory values in pages):
> > : [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
> > : [   6569]     0  6562     9427        1    53248        0             0 syz-executor0
> > : [   6576]     0  6576     9426        0    61440        0             0 syz-executor6
> > : [   6578]     0  6578     9426      534    61440        0             0 syz-executor4
> > : [   6579]     0  6579     9426        0    57344        0             0 syz-executor5
> > : [   6582]     0  6582     9426        0    61440        0             0 syz-executor7
> > : [   6584]     0  6584     9426        0    57344        0             0 syz-executor1
> > 
> > So in principle there is indeed nothing reclaimable in this memcg and
> > this looks like a misconfiguration. On the other hand, we can clearly
> > kill all those tasks, so it is a bit early to warn and scare users. Avoid
> > that by checking whether current is the oom victim and bypassing the
> > warning in that case. The victim is allowed to force the charge and
> > terminate to release its temporary charge along the way.
> > 
> > [1] http://lkml.kernel.org/r/0000000000005e979605729c1564@google.com
> > Fixes: "memcg, oom: move out_of_memory back to the charge path"
> > Noticed-by: Greg Thelen <gthelen@google.com>
> > Reported-and-tested-by: syzbot+bab151e82a4e973fa325@syzkaller.appspotmail.com
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  mm/memcontrol.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 4603ad75c9a9..1b6eed1bc404 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
> >  		return OOM_ASYNC;
> >  	}
> >  
> > -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> > +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> > +			tsk_is_oom_victim(current))
> >  		return OOM_SUCCESS;
> >  
> >  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
> 
> This is really ugly. :(
> 
> If that check is only there to suppress the warning when the limit is
> 0, this should really be a separate branch around the warning, with a
> fat comment that this is a ridiculous cornercase, and not look like it
> is an essential part of the memcg reclaim/oom process.

I do not mind having it in a separate branch. Btw. this is not just about
the hard limit set to 0. Something similar can happen anytime we run out
of oom victims. The likelihood goes up with the remote memcg charging
merged recently.

> Personally, I really don't get the point of this message. What is the
> user to do with this information? What are we to do with it if people
> report it? It conveys zero information on what the problem could be,
> because it asserts a really vague high-level thing. Shouldn't such
> debugging happen inside the OOM killer? What are the conceivable
> scenarios in which this triggers other than obvious misconfigs?
> 
> What would we lose by just deleting it?

We know that _something_ bad is going on because we have no way to
reclaim down to the hard limit. And that is the primary tool for
workload isolation. I am all for better information to tell us more. I
do not know what that would be right now, though.

My primary motivation for this warning was to catch potential issues
after we moved oom handling back to the charge path. If we just
remove it then we have no information at all.

I wouldn't mind removing it if it generated more false positives, but
that hasn't happened so far.
Tetsuo Handa Aug. 7, 2018, 8:38 p.m. UTC | #6
On 2018/08/08 5:19, Johannes Weiner wrote:
> On Tue, Aug 07, 2018 at 07:15:11PM +0900, Tetsuo Handa wrote:
>> On 2018/08/07 16:25, Michal Hocko wrote:
>>> @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
>>>  		return OOM_ASYNC;
>>>  	}
>>>  
>>> -	if (mem_cgroup_out_of_memory(memcg, mask, order))
>>> +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
>>> +			tsk_is_oom_victim(current))
>>>  		return OOM_SUCCESS;
>>>  
>>>  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
>>>
>>
>> I don't think this patch is appropriate. This patch only avoids hitting WARN(1).
>> This patch does not address the root cause:
>>
>> The task_will_free_mem(current) test in out_of_memory() is returning false
>> because the test_bit(MMF_OOM_SKIP, &mm->flags) test in task_will_free_mem()
>> sees MMF_OOM_SKIP already set by the OOM reaper. The OOM killer does not need
>> to start selecting the next OOM victim until "the current thread completes
>> __mmput()" or "it fails to complete __mmput() within a reasonable period".
> 
> I don't see why it matters whether the OOM victim exits or not, unless
> you count the memory consumed by struct task_struct.

We are not counting memory consumed by struct task_struct. But David is
counting memory released between set_bit(MMF_OOM_SKIP, &mm->flags) and
completion of exit_mmap().

> 
>> According to https://syzkaller.appspot.com/text?tag=CrashLog&x=15a1c770400000 ,
>> PID=23767 selected PID=23766 as an OOM victim, and the OOM reaper set MMF_OOM_SKIP
>> before PID=23766 unnecessarily selected PID=23767 as the next OOM victim.
>> At uptime = 366.550949, out_of_memory() should have returned true without selecting
>> the next OOM victim because tsk_is_oom_victim(current) == true.
> 
> The code works just fine. We have to kill tasks until we a) free
> enough memory or b) run out of tasks or c) kill current. When one of
> these outcomes is reached, we allow the charge and return.
> 
> The only problem here is a warning in the wrong place.
> 

If the forced charge contained a bug, removing this WARN(1) would deprive
users of the chance to know that something is going wrong.
Johannes Weiner Aug. 7, 2018, 8:54 p.m. UTC | #7
On Tue, Aug 07, 2018 at 10:23:32PM +0200, Michal Hocko wrote:
> On Tue 07-08-18 16:02:47, Johannes Weiner wrote:
> > On Tue, Aug 07, 2018 at 09:25:53AM +0200, Michal Hocko wrote:
> > > From: Michal Hocko <mhocko@suse.com>
> > > 
> > > "memcg, oom: move out_of_memory back to the charge path" has added a
> > > warning triggered when the oom killer cannot find any eligible task
> > > and so there is no way to reclaim the oom memcg under its hard limit.
> > > Further charges for such a memcg are forced and therefore the hard limit
> > > isolation is weakened.
> > > 
> > > The current warning is however too eager to trigger even when we are not
> > > really hitting the above condition. Syzbot[1] and Greg Thelen have noticed
> > > that we can hit this condition even when there is still an oom victim
> > > pending. E.g. the following race is possible:
> > > 
> > > memcg has two tasks taskA, taskB.
> > > 
> > > CPU1 (taskA)			CPU2			CPU3 (taskB)
> > > try_charge
> > >   mem_cgroup_out_of_memory				try_charge
> > >       select_bad_process(taskB)
> > >       oom_kill_process		oom_reap_task
> > > 				# No real memory reaped
> > >     				  			  mem_cgroup_out_of_memory
> > > 				# set taskB -> MMF_OOM_SKIP
> > >   # retry charge
> > >   mem_cgroup_out_of_memory
> > >     oom_lock						    oom_lock
> > >     select_bad_process(self)
> > >     oom_kill_process(self)
> > >     oom_unlock
> > > 							    # no eligible task
> > > 
> > > In fact the syzbot test triggered this situation by placing multiple tasks
> > > into a memcg with the hard limit set to 0, so no task really had any memory
> > > charged to the memcg:
> > > 
> > > : Memory cgroup stats for /ile0: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
> > > : Tasks state (memory values in pages):
> > > : [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
> > > : [   6569]     0  6562     9427        1    53248        0             0 syz-executor0
> > > : [   6576]     0  6576     9426        0    61440        0             0 syz-executor6
> > > : [   6578]     0  6578     9426      534    61440        0             0 syz-executor4
> > > : [   6579]     0  6579     9426        0    57344        0             0 syz-executor5
> > > : [   6582]     0  6582     9426        0    61440        0             0 syz-executor7
> > > : [   6584]     0  6584     9426        0    57344        0             0 syz-executor1
> > > 
> > > So in principle there is indeed nothing reclaimable in this memcg and
> > > this looks like a misconfiguration. On the other hand, we can clearly
> > > kill all those tasks, so it is a bit early to warn and scare users. Avoid
> > > that by checking whether current is the oom victim and bypassing the
> > > warning in that case. The victim is allowed to force the charge and
> > > terminate to release its temporary charge along the way.
> > > 
> > > [1] http://lkml.kernel.org/r/0000000000005e979605729c1564@google.com
> > > Fixes: "memcg, oom: move out_of_memory back to the charge path"
> > > Noticed-by: Greg Thelen <gthelen@google.com>
> > > Reported-and-tested-by: syzbot+bab151e82a4e973fa325@syzkaller.appspotmail.com
> > > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > > ---
> > >  mm/memcontrol.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 4603ad75c9a9..1b6eed1bc404 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
> > >  		return OOM_ASYNC;
> > >  	}
> > >  
> > > -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> > > +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> > > +			tsk_is_oom_victim(current))
> > >  		return OOM_SUCCESS;
> > >  
> > >  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
> > 
> > This is really ugly. :(
> > 
> > If that check is only there to suppress the warning when the limit is
> > 0, this should really be a separate branch around the warning, with a
> > fat comment that this is a ridiculous cornercase, and not look like it
> > is an essential part of the memcg reclaim/oom process.
> 
> I do not mind having it in a separate branch. Btw. this is not just about
> hard limit set to 0. Similar can happen anytime we are getting out of
> oom victims. The likelihood goes up with the remote memcg charging
> merged recently.

What the global OOM killer does in that situation is dump the header
anyway:

	/* Found nothing?!?! Either we hang forever, or we panic. */
	if (!oc->chosen && !is_sysrq_oom(oc) && !is_memcg_oom(oc)) {
		dump_header(oc, NULL);
		panic("Out of memory and no killable processes...\n");
	}

I think that would make sense here as well - without the panic,
obviously, but we can add our own pr_err() line following the header.

That gives us the exact memory situation of the cgroup and who is
trying to allocate and from what context, but in a format that is
known to users without claiming right away that it's a kernel issue.
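
A sketch of how that could look for the no-victim case, modeled on the
global path quoted above (illustrative only; the report would live where
struct oom_control is available, e.g. in out_of_memory()):

	select_bad_process(oc);
	/* Found nothing?!?! Report it; only the global case panics. */
	if (!oc->chosen) {
		dump_header(oc, NULL);
		pr_err("Out of memory and no killable processes...\n");
		if (!is_sysrq_oom(oc) && !is_memcg_oom(oc))
			panic("Out of memory and no killable processes...\n");
	}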
Michal Hocko Aug. 8, 2018, 6:44 a.m. UTC | #8
On Tue 07-08-18 16:54:25, Johannes Weiner wrote:
> On Tue, Aug 07, 2018 at 10:23:32PM +0200, Michal Hocko wrote:
> > On Tue 07-08-18 16:02:47, Johannes Weiner wrote:
> > > On Tue, Aug 07, 2018 at 09:25:53AM +0200, Michal Hocko wrote:
> > > > From: Michal Hocko <mhocko@suse.com>
> > > > 
> > > > "memcg, oom: move out_of_memory back to the charge path" has added a
> > > > warning triggered when the oom killer cannot find any eligible task
> > > > and so there is no way to reclaim the oom memcg under its hard limit.
> > > > Further charges for such a memcg are forced and therefore the hard limit
> > > > isolation is weakened.
> > > > 
> > > > The current warning is however too eager to trigger even when we are not
> > > > really hitting the above condition. Syzbot[1] and Greg Thelen have noticed
> > > > that we can hit this condition even when there is still an oom victim
> > > > pending. E.g. the following race is possible:
> > > > 
> > > > memcg has two tasks taskA, taskB.
> > > > 
> > > > CPU1 (taskA)			CPU2			CPU3 (taskB)
> > > > try_charge
> > > >   mem_cgroup_out_of_memory				try_charge
> > > >       select_bad_process(taskB)
> > > >       oom_kill_process		oom_reap_task
> > > > 				# No real memory reaped
> > > >     				  			  mem_cgroup_out_of_memory
> > > > 				# set taskB -> MMF_OOM_SKIP
> > > >   # retry charge
> > > >   mem_cgroup_out_of_memory
> > > >     oom_lock						    oom_lock
> > > >     select_bad_process(self)
> > > >     oom_kill_process(self)
> > > >     oom_unlock
> > > > 							    # no eligible task
> > > > 
> > > > In fact the syzbot test triggered this situation by placing multiple tasks
> > > > into a memcg with the hard limit set to 0, so no task really had any memory
> > > > charged to the memcg:
> > > > 
> > > > : Memory cgroup stats for /ile0: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
> > > > : Tasks state (memory values in pages):
> > > > : [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
> > > > : [   6569]     0  6562     9427        1    53248        0             0 syz-executor0
> > > > : [   6576]     0  6576     9426        0    61440        0             0 syz-executor6
> > > > : [   6578]     0  6578     9426      534    61440        0             0 syz-executor4
> > > > : [   6579]     0  6579     9426        0    57344        0             0 syz-executor5
> > > > : [   6582]     0  6582     9426        0    61440        0             0 syz-executor7
> > > > : [   6584]     0  6584     9426        0    57344        0             0 syz-executor1
> > > > 
> > > > So in principle there is indeed nothing reclaimable in this memcg and
> > > > this looks like a misconfiguration. On the other hand, we can clearly
> > > > kill all those tasks, so it is a bit early to warn and scare users. Avoid
> > > > that by checking whether current is the oom victim and bypassing the
> > > > warning in that case. The victim is allowed to force the charge and
> > > > terminate to release its temporary charge along the way.
> > > > 
> > > > [1] http://lkml.kernel.org/r/0000000000005e979605729c1564@google.com
> > > > Fixes: "memcg, oom: move out_of_memory back to the charge path"
> > > > Noticed-by: Greg Thelen <gthelen@google.com>
> > > > Reported-and-tested-by: syzbot+bab151e82a4e973fa325@syzkaller.appspotmail.com
> > > > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > > > ---
> > > >  mm/memcontrol.c | 3 ++-
> > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index 4603ad75c9a9..1b6eed1bc404 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
> > > >  		return OOM_ASYNC;
> > > >  	}
> > > >  
> > > > -	if (mem_cgroup_out_of_memory(memcg, mask, order))
> > > > +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
> > > > +			tsk_is_oom_victim(current))
> > > >  		return OOM_SUCCESS;
> > > >  
> > > >  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
> > > 
> > > This is really ugly. :(
> > > 
> > > If that check is only there to suppress the warning when the limit is
> > > 0, this should really be a separate branch around the warning, with a
> > > fat comment that this is a ridiculous cornercase, and not look like it
> > > is an essential part of the memcg reclaim/oom process.
> > 
> > I do not mind having it in a separate branch. Btw. this is not just about
> > hard limit set to 0. Similar can happen anytime we are getting out of
> > oom victims. The likelihood goes up with the remote memcg charging
> > merged recently.
> 
> What the global OOM killer does in that situation is dump the header
> anyway:
> 
> 	/* Found nothing?!?! Either we hang forever, or we panic. */
> 	if (!oc->chosen && !is_sysrq_oom(oc) && !is_memcg_oom(oc)) {
> 		dump_header(oc, NULL);
> 		panic("Out of memory and no killable processes...\n");
> 	}
> 
> I think that would make sense here as well - without the panic,
> obviously, but we can add our own pr_err() line following the header.
> 
> That gives us the exact memory situation of the cgroup and who is
> trying to allocate and from what context, but in a format that is
> known to users without claiming right away that it's a kernel issue.

I was considering doing that initially but then decided that the warning is
less noisy and still a good "let us know" trigger. It doesn't give us
the whole picture, which is obviously a downside, but we would at least
know that something is going south and have a trace of who that might
be should this be a bug rather than a misconfiguration.

But I do not mind doing dump_header as well. Care to send a patch?
Michal Hocko Aug. 8, 2018, 7:12 a.m. UTC | #9
On Wed 08-08-18 08:44:14, Michal Hocko wrote:
> On Tue 07-08-18 16:54:25, Johannes Weiner wrote:
[...]
> > What the global OOM killer does in that situation is dump the header
> > anyway:
> > 
> > 	/* Found nothing?!?! Either we hang forever, or we panic. */
> > 	if (!oc->chosen && !is_sysrq_oom(oc) && !is_memcg_oom(oc)) {
> > 		dump_header(oc, NULL);
> > 		panic("Out of memory and no killable processes...\n");
> > 	}
> > 
> > I think that would make sense here as well - without the panic,
> > obviously, but we can add our own pr_err() line following the header.
> > 
> > That gives us the exact memory situation of the cgroup and who is
> > trying to allocate and from what context, but in a format that is
> > known to users without claiming right away that it's a kernel issue.
> 
> I was considering doing that initially but then decided that warning is
> less noisy and still a good "let us know" trigger. It doesn't give us
> the whole picture which is obviously a downside but we would at least
> know that something is going south one have the trace to who that might
> be should this be a bug rather than a misconfiguration.
> 
> But I do not mind doing dump_header as well. Care to send a patch?

OK, so I found a few spare cycles and here is what I came up with. The
first patch fixes the spurious warning, and I have separated the check
and added a comment as you asked. The second patch replaces the warning
with an oom report.

Does that look better?
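
For reference, the separated check could look something like this (a sketch
of the reworked hunk; the comment wording and the exact return value are
approximate):

	if (mem_cgroup_out_of_memory(memcg, mask, order))
		return OOM_SUCCESS;

	/*
	 * Under a rare race the oom killer may find no eligible task
	 * even though current itself has already been selected as a
	 * victim (e.g. the oom reaper set MMF_OOM_SKIP on it). Let it
	 * force the charge and terminate instead of warning.
	 */
	if (tsk_is_oom_victim(current))
		return OOM_SUCCESS;

	WARN(1, "Memory cgroup charge failed because of no reclaimable memory! "
		"This looks like a misconfiguration or a kernel bug.");
	return OOM_FAILED;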
Tetsuo Handa Aug. 8, 2018, 12:57 p.m. UTC | #10
On 2018/08/08 5:38, Tetsuo Handa wrote:
> On 2018/08/08 5:19, Johannes Weiner wrote:
>> On Tue, Aug 07, 2018 at 07:15:11PM +0900, Tetsuo Handa wrote:
>>> On 2018/08/07 16:25, Michal Hocko wrote:
>>>> @@ -1703,7 +1703,8 @@ static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
>>>>  		return OOM_ASYNC;
>>>>  	}
>>>>  
>>>> -	if (mem_cgroup_out_of_memory(memcg, mask, order))
>>>> +	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
>>>> +			tsk_is_oom_victim(current))
>>>>  		return OOM_SUCCESS;
>>>>  
>>>>  	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "
>>>>
>>>
>>> I don't think this patch is appropriate. This patch only avoids hitting WARN(1).
>>> This patch does not address the root cause:
>>>
>>> The task_will_free_mem(current) test in out_of_memory() is returning false
>>> because the test_bit(MMF_OOM_SKIP, &mm->flags) test in task_will_free_mem()
>>> sees MMF_OOM_SKIP already set by the OOM reaper. The OOM killer does not need
>>> to start selecting the next OOM victim until "the current thread completes
>>> __mmput()" or "it fails to complete __mmput() within a reasonable period".
>>
>> I don't see why it matters whether the OOM victim exits or not, unless
>> you count the memory consumed by struct task_struct.
> 
> We are not counting memory consumed by struct task_struct. But David is
> counting memory released between set_bit(MMF_OOM_SKIP, &mm->flags) and
> completion of exit_mmap().

Also, before the OOM reaper was introduced, we waited until TIF_MEMDIE was
cleared from the OOM victim thread. Compared to the pre-OOM-reaper era,
giving up so early is certainly a regression.
Michal Hocko Aug. 8, 2018, 1:16 p.m. UTC | #11
On Wed 08-08-18 21:57:13, Tetsuo Handa wrote:
[...]
> Also, before the OOM reaper was introduced, we waited until TIF_MEMDIE is
> cleared from the OOM victim thread. Compared to pre OOM reaper era, giving up
> so early is certainly a regression.

We did clear the TIF_MEMDIE flag after mmput() in do_exit, so this was not a
silver bullet either. Any reference on the mm_struct would lead to a similar
problem. So could you please stop making strong statements and start
being reasonable?

Yeah, this is racy. Nobody is claiming otherwise. All we are trying to
say is that this area is full of dragons and before we start making it
more complicated by covering weird corner cases we really need to see
that those corner cases happen in real workloads. Otherwise we end up
with an unmaintainable and fragile mess.

And more importantly, this is _not_ what this patch is trying to address,
so please do not go off on a tangent again.

I really do not know how to send this simple message to you. I have
tried so many times before.

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4603ad75c9a9..1b6eed1bc404 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1703,7 +1703,8 @@  static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int
 		return OOM_ASYNC;
 	}
 
-	if (mem_cgroup_out_of_memory(memcg, mask, order))
+	if (mem_cgroup_out_of_memory(memcg, mask, order) ||
+			tsk_is_oom_victim(current))
 		return OOM_SUCCESS;
 
 	WARN(1,"Memory cgroup charge failed because of no reclaimable memory! "