mm, oom: Introduce time limit for dump_tasks duration.

Message ID 58aa0543-86d0-b2ad-7fb9-9bed7c6a1f6c@i-love.sakura.ne.jp (mailing list archive)
State New, archived
Series mm, oom: Introduce time limit for dump_tasks duration.

Commit Message

Tetsuo Handa Sept. 6, 2018, 10:58 a.m. UTC
On 2018/09/06 18:54, Dmitry Vyukov wrote:
> On Thu, Sep 6, 2018 at 7:53 AM, Tetsuo Handa
> <penguin-kernel@i-love.sakura.ne.jp> wrote:
>> Dmitry Vyukov wrote:
>>>> Also, another notable thing is that the backtrace for some reason includes
>>>>
>>>> [ 1048.211540]  ? oom_killer_disable+0x3a0/0x3a0
>>>>
>>>> line. Was syzbot testing process freezing functionality?
>>>
>>> What's the API for this?
>>>
>>
>> I'm not a user of suspend/hibernation, but it seems that the API is used
>> by writing one of the words listed in /sys/power/state back into
>> /sys/power/state.
>>
>> # echo suspend > /sys/power/state
> 
> syzkaller should not write to /sys/power/state. The only mention of
> "power" is in some selinux contexts.
> 

OK. Then, I have no idea.
Anyway, I think we can apply this patch.

From 18876f287dd69a7c33f65c91cfcda3564233f55e Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Thu, 6 Sep 2018 19:53:18 +0900
Subject: [PATCH] mm, oom: Introduce time limit for dump_tasks duration.

Since printk() is slow, printing one line takes nearly 0.01 second.
As a result, syzbot is stalling for 52 seconds trying to dump 5600
tasks in the for_each_process() loop under RCU. Since such a situation
is almost certainly an in-flight fork bomb attack (the OOM killer
would print a similar task list many times over), it makes little
sense to print all candidate tasks. Thus, this patch introduces a
3-second time limit for the printing.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Dmitry Vyukov <dvyukov@google.com>
---
 mm/oom_kill.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Tetsuo Handa Sept. 6, 2018, 11:25 a.m. UTC | #1
On 2018/09/06 20:07, Dmitry Vyukov wrote:
>> Since printk() is slow, printing one line takes nearly 0.01 second.
>> As a result, syzbot is stalling for 52 seconds trying to dump 5600
> 
> I wonder why there are so many of them?
> We have at most 8 test processes (each having no more than 16 threads
> if that matters).
> No more than 1 instance of syz-executor1 at a time. But we see output
> like the one below. It has lots of instances of syz-executor1 with
> different pids. So does it print all tasks that ever existed (the
> kernel does not store that info, right)? Or does it livelock, picking
> up new and new tasks because they are created faster than we can
> print them? Or do we have tons of zombies?
> 
> ...

I don't think they are zombies. Since tasks which have already released
their ->mm are not printed (see the excerpt below), these tasks are
still alive.

  [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
  [   8037]     0  8037    17618     8738         131072        0             0 syz-executor1

Maybe a regression in something signal / fork() / exit() / wait() related?
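
For reference, the filtering referred to above is the find_lock_task_mm()
check in dump_tasks(); roughly, from mm/oom_kill.c of this era:

	task = find_lock_task_mm(p);
	if (!task) {
		/*
		 * This is a kthread, or all of p's threads have already
		 * detached their mm's.  There is no need to report them;
		 * they can't be oom killed anyway.
		 */
		continue;
	}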
Tetsuo Handa Sept. 6, 2018, 11:40 a.m. UTC | #2
On 2018/09/06 20:23, Michal Hocko wrote:
> On Thu 06-09-18 19:58:25, Tetsuo Handa wrote:
> [...]
>> >From 18876f287dd69a7c33f65c91cfcda3564233f55e Mon Sep 17 00:00:00 2001
>> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>> Date: Thu, 6 Sep 2018 19:53:18 +0900
>> Subject: [PATCH] mm, oom: Introduce time limit for dump_tasks duration.
>>
>> Since printk() is slow, printing one line takes nearly 0.01 second.
>> As a result, syzbot is stalling for 52 seconds trying to dump 5600
>> tasks in the for_each_process() loop under RCU. Since such a situation
>> is almost certainly an in-flight fork bomb attack (the OOM killer
>> would print a similar task list many times over), it makes little
>> sense to print all candidate tasks. Thus, this patch introduces a
>> 3-second time limit for the printing.
>>
>> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>> Cc: Dmitry Vyukov <dvyukov@google.com>
> 
> You really love timeout based solutions with randomly chosen timeouts,
> don't you. This is just ugly as hell. We already have means to disable
> tasks dumping (see /proc/sys/vm/oom_dump_tasks).

I know about /proc/sys/vm/oom_dump_tasks. Showing some entries, even
without always printing all of them, might be helpful. For example, we
could allow /proc/sys/vm/oom_dump_tasks values greater than 1 and use
the value as a timeout in seconds, roughly as sketched below.
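
A minimal sketch of that proposal (hypothetical, not merged code; it
reuses the existing sysctl_oom_dump_tasks variable from mm/oom_kill.c
and treats values above 1 as a budget in seconds):

	/*
	 * Hypothetical semantics: oom_dump_tasks == 0 disables the dump,
	 * == 1 keeps today's unlimited behavior, and > 1 caps the
	 * printing at that many seconds.
	 */
	unsigned long deadline = jiffies + sysctl_oom_dump_tasks * HZ;

	rcu_read_lock();
	for_each_process(p) {
		if (oom_unkillable_task(p, memcg, nodemask))
			continue;
		if (sysctl_oom_dump_tasks > 1 && time_after(jiffies, deadline))
			break;	/* seconds budget exhausted; stop printing */
		/* ... print one line for p as dump_tasks() does today ... */
	}
	rcu_read_unlock();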
Dmitry Vyukov Sept. 6, 2018, 12:08 p.m. UTC | #3
On Thu, Sep 6, 2018 at 1:53 PM, Michal Hocko <mhocko@kernel.org> wrote:
> On Thu 06-09-18 20:40:34, Tetsuo Handa wrote:
>> On 2018/09/06 20:23, Michal Hocko wrote:
>> > On Thu 06-09-18 19:58:25, Tetsuo Handa wrote:
>> > [...]
>> >> >From 18876f287dd69a7c33f65c91cfcda3564233f55e Mon Sep 17 00:00:00 2001
>> >> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>> >> Date: Thu, 6 Sep 2018 19:53:18 +0900
>> >> Subject: [PATCH] mm, oom: Introduce time limit for dump_tasks duration.
>> >>
>> >> Since printk() is slow, printing one line takes nearly 0.01 second.
>> >> As a result, syzbot is stalling for 52 seconds trying to dump 5600
>> >> tasks in the for_each_process() loop under RCU. Since such a situation
>> >> is almost certainly an in-flight fork bomb attack (the OOM killer
>> >> would print a similar task list many times over), it makes little
>> >> sense to print all candidate tasks. Thus, this patch introduces a
>> >> 3-second time limit for the printing.
>> >>
>> >> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>> >> Cc: Dmitry Vyukov <dvyukov@google.com>
>> >
>> > You really love timeout based solutions with randomly chosen timeouts,
>> > don't you. This is just ugly as hell. We already have means to disable
>> > tasks dumping (see /proc/sys/vm/oom_dump_tasks).
>>
>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>> printing all entries might be helpful.
>
> Not really. It could be more confusing than helpful. The main purpose of
> the listing is to double check the list to understand the oom victim
> selection. If you have a partial list you simply cannot do that.
>
> If the iteration takes too long and I can imagine it does with zillions
> of tasks then the proper way around it is either release the lock
> periodically after N tasks is processed or outright skip the whole thing
> if there are too many tasks. The first option is obviously tricky to
> prevent from duplicate entries or other artifacts.


So does anybody know if it can livelock, picking up new tasks all the
time? That's what it looks like at first glance. I also don't remember
seeing anything similar in the past.
If it is a livelock and we resolve it, then we don't need to solve the
problem of too many tasks here.
Michal Hocko Sept. 6, 2018, 12:16 p.m. UTC | #4
Ccing Oleg.

On Thu 06-09-18 14:08:43, Dmitry Vyukov wrote:
[...]
> So does anybody know if it can live lock picking up new tasks all the
> time? That's what it looks like at first glance. I also don't remember
> seeing anything similar in the past.

That is an interesting question. I find it unlikely here because it is
quite hard to get new tasks spawned while you are genuinely OOM. But we
do have these for_each_process loops in other places as well, some of
them even controlled from userspace. Some, like the exit path
(zap_threads), sound even more interesting, even though that is a rare
path.

So a question for Oleg I guess. Is it possible that for_each_process
livelocks (or stalls for way too long, or an unbounded amount of time)
under heavy fork/exit loads? Is there any protection from that?

> If it's a live lock and we resolve it, then we don't need to solve the
> problem of too many tasks here.
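
For context, for_each_process() is a plain RCU list walk anchored at
init_task; roughly, from include/linux/sched/signal.h:

	#define next_task(p) \
		list_entry_rcu((p)->tasks.next, struct task_struct, tasks)

	#define for_each_process(p) \
		for (p = &init_task ; (p = next_task(p)) != &init_task ; )

New tasks are linked in at the tail of this list, so a walker that
advances more slowly than forks occur can, in principle, keep finding
fresh entries.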
Tetsuo Handa Sept. 6, 2018, 1:45 p.m. UTC | #5
On 2018/09/06 20:53, Michal Hocko wrote:
> On Thu 06-09-18 20:40:34, Tetsuo Handa wrote:
>> On 2018/09/06 20:23, Michal Hocko wrote:
>>> On Thu 06-09-18 19:58:25, Tetsuo Handa wrote:
>>> [...]
>>>> >From 18876f287dd69a7c33f65c91cfcda3564233f55e Mon Sep 17 00:00:00 2001
>>>> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>>>> Date: Thu, 6 Sep 2018 19:53:18 +0900
>>>> Subject: [PATCH] mm, oom: Introduce time limit for dump_tasks duration.
>>>>
>>>> Since printk() is slow, printing one line takes nearly 0.01 second.
>>>> As a result, syzbot is stalling for 52 seconds trying to dump 5600
>>>> tasks in the for_each_process() loop under RCU. Since such a situation
>>>> is almost certainly an in-flight fork bomb attack (the OOM killer
>>>> would print a similar task list many times over), it makes little
>>>> sense to print all candidate tasks. Thus, this patch introduces a
>>>> 3-second time limit for the printing.
>>>>
>>>> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
>>>> Cc: Dmitry Vyukov <dvyukov@google.com>
>>>
>>> You really love timeout based solutions with randomly chosen timeouts,
>>> don't you. This is just ugly as hell. We already have means to disable
>>> tasks dumping (see /proc/sys/vm/oom_dump_tasks).
>>
>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>> printing all entries might be helpful.
> 
> Not really. It could be more confusing than helpful. The main purpose of
> the listing is to double check the list to understand the oom victim
> selection. If you have a partial list you simply cannot do that.

It serves as a safeguard for avoiding RCU stall warnings.

> 
> If the iteration takes too long and I can imagine it does with zillions
> of tasks then the proper way around it is either release the lock
> periodically after N tasks is processed or outright skip the whole thing
> if there are too many tasks. The first option is obviously tricky to
> prevent from duplicate entries or other artifacts.
> 

Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
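
For reference, rcu_lock_break() in kernel/hung_task.c looks roughly like
this; it pins the current task pointers, drops the RCU read lock so a
reschedule (and a grace period) can happen, and reports whether the walk
may safely continue:

	static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
	{
		bool can_cont;

		get_task_struct(g);
		get_task_struct(t);
		rcu_read_unlock();
		cond_resched();
		rcu_read_lock();
		can_cont = pid_alive(g) && pid_alive(t);
		put_task_struct(t);
		put_task_struct(g);

		return can_cont;
	}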
Michal Hocko Sept. 6, 2018, 2:39 p.m. UTC | #6
On Thu 06-09-18 22:45:26, Tetsuo Handa wrote:
> On 2018/09/06 20:53, Michal Hocko wrote:
> > On Thu 06-09-18 20:40:34, Tetsuo Handa wrote:
> >> On 2018/09/06 20:23, Michal Hocko wrote:
> >>> On Thu 06-09-18 19:58:25, Tetsuo Handa wrote:
> >>> [...]
> >>>> >From 18876f287dd69a7c33f65c91cfcda3564233f55e Mon Sep 17 00:00:00 2001
> >>>> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> >>>> Date: Thu, 6 Sep 2018 19:53:18 +0900
> >>>> Subject: [PATCH] mm, oom: Introduce time limit for dump_tasks duration.
> >>>>
> >>>> Since printk() is slow, printing one line takes nearly 0.01 second.
> >>>> As a result, syzbot is stalling for 52 seconds trying to dump 5600
> >>>> tasks in the for_each_process() loop under RCU. Since such a situation
> >>>> is almost certainly an in-flight fork bomb attack (the OOM killer
> >>>> would print a similar task list many times over), it makes little
> >>>> sense to print all candidate tasks. Thus, this patch introduces a
> >>>> 3-second time limit for the printing.
> >>>>
> >>>> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> >>>> Cc: Dmitry Vyukov <dvyukov@google.com>
> >>>
> >>> You really love timeout based solutions with randomly chosen timeouts,
> >>> don't you. This is just ugly as hell. We already have means to disable
> >>> tasks dumping (see /proc/sys/vm/oom_dump_tasks).
> >>
> >> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
> >> printing all entries might be helpful.
> > 
> > Not really. It could be more confusing than helpful. The main purpose of
> > the listing is to double check the list to understand the oom victim
> > selection. If you have a partial list you simply cannot do that.
> 
> It serves as a safeguard for avoiding RCU stall warnings.
> 
> > 
> > If the iteration takes too long and I can imagine it does with zillions
> > of tasks then the proper way around it is either release the lock
> > periodically after N tasks is processed or outright skip the whole thing
> > if there are too many tasks. The first option is obviously tricky to
> > prevent from duplicate entries or other artifacts.
> > 
> 
> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?

This would be a better variant of your timeout-based approach. But it
can still produce an incomplete task list, and it still potentially
consumes a lot of resources to print a long list of tasks while that
list is not useful for any evaluation. Maybe that is good enough; I
don't know. I would generally recommend disabling the whole thing for
workloads with many tasks, though.
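
A sketch of the first option above applied to dump_tasks(), in the style
of check_hung_uninterruptible_tasks() and its HUNG_TASK_BATCHING of 1024
(hypothetical, not merged code; as noted, the list can end up incomplete,
and a task dying across the break ends the walk):

	unsigned int batch_count = 1024;	/* arbitrarily chosen batch size */

	rcu_read_lock();
	for_each_process(p) {
		if (!--batch_count) {
			bool can_cont;

			batch_count = 1024;
			get_task_struct(p);
			rcu_read_unlock();
			cond_resched();	/* give other tasks a chance to run */
			rcu_read_lock();
			can_cont = pid_alive(p);
			put_task_struct(p);
			if (!can_cont)
				break;	/* p exited; we cannot resume the walk */
		}
		/* ... print one task line as dump_tasks() does today ... */
	}
	rcu_read_unlock();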
Tetsuo Handa Sept. 6, 2018, 8:58 p.m. UTC | #7
On 2018/09/06 23:39, Michal Hocko wrote:
>>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>>>> printing all entries might be helpful.
>>>
>>> Not really. It could be more confusing than helpful. The main purpose of
>>> the listing is to double check the list to understand the oom victim
>>> selection. If you have a partial list you simply cannot do that.
>>
>> It serves as a safeguard for avoiding RCU stall warnings.
>>
>>>
>>> If the iteration takes too long and I can imagine it does with zillions
>>> of tasks then the proper way around it is either release the lock
>>> periodically after N tasks is processed or outright skip the whole thing
>>> if there are too many tasks. The first option is obviously tricky to
>>> prevent from duplicate entries or other artifacts.
>>>
>>
>> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
> 
> This would be a better variant of your timeout based approach. But it
> can still produce an incomplete task list so it still consumes a lot of
> resources to print a long list of tasks potentially while that list is not
> useful for any evaluation. Maybe that is good enough. I don't know. I
> would generally recommend to disable the whole thing with workloads with
> many tasks though.
> 

The "safeguard" is useful when there are _unexpectedly_ many tasks (as
with syzbot in this case). Why not allow those who want to avoid the
lockup to do so, rather than forcing them to disable the whole thing?
Michal Hocko Sept. 7, 2018, 8:27 a.m. UTC | #8
On Fri 07-09-18 05:58:06, Tetsuo Handa wrote:
> On 2018/09/06 23:39, Michal Hocko wrote:
> >>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
> >>>> printing all entries might be helpful.
> >>>
> >>> Not really. It could be more confusing than helpful. The main purpose of
> >>> the listing is to double check the list to understand the oom victim
> >>> selection. If you have a partial list you simply cannot do that.
> >>
> >> It serves as a safeguard for avoiding RCU stall warnings.
> >>
> >>>
> >>> If the iteration takes too long and I can imagine it does with zillions
> >>> of tasks then the proper way around it is either release the lock
> >>> periodically after N tasks is processed or outright skip the whole thing
> >>> if there are too many tasks. The first option is obviously tricky to
> >>> prevent from duplicate entries or other artifacts.
> >>>
> >>
> >> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
> > 
> > This would be a better variant of your timeout based approach. But it
> > can still produce an incomplete task list so it still consumes a lot of
> > resources to print a long list of tasks potentially while that list is not
> > useful for any evaluation. Maybe that is good enough. I don't know. I
> > would generally recommend to disable the whole thing with workloads with
> > many tasks though.
> > 
> 
> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
> syzbot in this case). Why not to allow those who want to avoid lockup to
> avoid lockup rather than forcing them to disable the whole thing?

So you get an RCU lockup splat, and then what? Unless you have
panic_on_rcu_stall set, this should be a recoverable situation
(assuming we cannot really livelock as described by Dmitry).
Dmitry Vyukov Sept. 7, 2018, 9:36 a.m. UTC | #9
On Fri, Sep 7, 2018 at 10:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> On Fri 07-09-18 05:58:06, Tetsuo Handa wrote:
>> On 2018/09/06 23:39, Michal Hocko wrote:
>> >>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>> >>>> printing all entries might be helpful.
>> >>>
>> >>> Not really. It could be more confusing than helpful. The main purpose of
>> >>> the listing is to double check the list to understand the oom victim
>> >>> selection. If you have a partial list you simply cannot do that.
>> >>
>> >> It serves as a safeguard for avoiding RCU stall warnings.
>> >>
>> >>>
>> >>> If the iteration takes too long and I can imagine it does with zillions
>> >>> of tasks then the proper way around it is either release the lock
>> >>> periodically after N tasks is processed or outright skip the whole thing
>> >>> if there are too many tasks. The first option is obviously tricky to
>> >>> prevent from duplicate entries or other artifacts.
>> >>>
>> >>
>> >> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
>> >
>> > This would be a better variant of your timeout based approach. But it
>> > can still produce an incomplete task list so it still consumes a lot of
>> > resources to print a long list of tasks potentially while that list is not
>> > useful for any evaluation. Maybe that is good enough. I don't know. I
>> > would generally recommend to disable the whole thing with workloads with
>> > many tasks though.
>> >
>>
>> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
>> syzbot in this case). Why not to allow those who want to avoid lockup to
>> avoid lockup rather than forcing them to disable the whole thing?
>
> So you get an rcu lockup splat and what? Unless you have panic_on_rcu_stall
> then this should be recoverable thing (assuming we cannot really
> livelock as described by Dmitry).


Should I add "vm.oom_dump_tasks = 0" to /etc/sysctl.conf on syzbot?
It looks like that would make things faster, avoid polluting the
console output, and prevent these stalls, and the output does not seem
to be too useful for debugging anyway.

But I am still concerned about what has changed recently. Potentially
this happens only on linux-next; at least, that's where I saw all the
existing reports.
New tasks seem to be added to the tail of the tasks list, but that
part does not seem to have been changed recently in linux-next...
Tetsuo Handa Sept. 7, 2018, 10:20 a.m. UTC | #10
On 2018/09/07 17:27, Michal Hocko wrote:
> On Fri 07-09-18 05:58:06, Tetsuo Handa wrote:
>> On 2018/09/06 23:39, Michal Hocko wrote:
>>>>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>>>>>> printing all entries might be helpful.
>>>>>
>>>>> Not really. It could be more confusing than helpful. The main purpose of
>>>>> the listing is to double check the list to understand the oom victim
>>>>> selection. If you have a partial list you simply cannot do that.
>>>>
>>>> It serves as a safeguard for avoiding RCU stall warnings.
>>>>
>>>>>
>>>>> If the iteration takes too long and I can imagine it does with zillions
>>>>> of tasks then the proper way around it is either release the lock
>>>>> periodically after N tasks is processed or outright skip the whole thing
>>>>> if there are too many tasks. The first option is obviously tricky to
>>>>> prevent from duplicate entries or other artifacts.
>>>>>
>>>>
>>>> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
>>>
>>> This would be a better variant of your timeout based approach. But it
>>> can still produce an incomplete task list so it still consumes a lot of
>>> resources to print a long list of tasks potentially while that list is not
>>> useful for any evaluation. Maybe that is good enough. I don't know. I
>>> would generally recommend to disable the whole thing with workloads with
>>> many tasks though.
>>>
>>
>> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
>> syzbot in this case). Why not to allow those who want to avoid lockup to
>> avoid lockup rather than forcing them to disable the whole thing?
> 
> So you get an rcu lockup splat and what? Unless you have panic_on_rcu_stall
> then this should be recoverable thing (assuming we cannot really
> livelock as described by Dmitry).
> 

syzbot is getting a hung-task panic (140 seconds) because a single
dump_tasks() call from out_of_memory() consumes 52 seconds on a 2-CPU
machine, with only cond_resched() available to yield the CPU to tasks
which need it. This is similar to the bugs shown below.

  [upstream] INFO: task hung in fsnotify_mark_destroy_workfn
  https://syzkaller.appspot.com/bug?id=0e75779a6f0faac461510c6330514e8f0e893038

  [upstream] INFO: task hung in fsnotify_connector_destroy_workfn
  https://syzkaller.appspot.com/bug?id=aa11d2d767f3750ef9a40d156a149e9cfa735b73

Continuing to printk() until khungtaskd fires is stupid behavior.
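
(For scale: at roughly 0.01 second per printk() line, 5600 tasks come to
about 5600 * 0.01 = 56 seconds of printing, consistent with the
52-second stall reported above.)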
Tetsuo Handa Sept. 7, 2018, 10:49 a.m. UTC | #11
On 2018/09/07 18:36, Dmitry Vyukov wrote:
> But I am still concerned as to what has changed recently. Potentially
> this happens only on linux-next, at least that's where I saw all
> existing reports.
> New tasks seem to be added to the tail of the tasks list, but this
> part does not seem to be changed recently in linux-next..
> 

As far as dump_tasks() can tell, these tasks are alive. Thus, I want to
know what these tasks are doing (i.e. the SysRq-t output). Since this
is occurring in linux-next, we can try the CONFIG_DEBUG_AID_FOR_SYZBOT=y
approach like https://lkml.org/lkml/2018/9/3/353 does.
Michal Hocko Sept. 7, 2018, 11:08 a.m. UTC | #12
On Fri 07-09-18 11:36:55, Dmitry Vyukov wrote:
> On Fri, Sep 7, 2018 at 10:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Fri 07-09-18 05:58:06, Tetsuo Handa wrote:
> >> On 2018/09/06 23:39, Michal Hocko wrote:
> >> >>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
> >> >>>> printing all entries might be helpful.
> >> >>>
> >> >>> Not really. It could be more confusing than helpful. The main purpose of
> >> >>> the listing is to double check the list to understand the oom victim
> >> >>> selection. If you have a partial list you simply cannot do that.
> >> >>
> >> >> It serves as a safeguard for avoiding RCU stall warnings.
> >> >>
> >> >>>
> >> >>> If the iteration takes too long and I can imagine it does with zillions
> >> >>> of tasks then the proper way around it is either release the lock
> >> >>> periodically after N tasks is processed or outright skip the whole thing
> >> >>> if there are too many tasks. The first option is obviously tricky to
> >> >>> prevent from duplicate entries or other artifacts.
> >> >>>
> >> >>
> >> >> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
> >> >
> >> > This would be a better variant of your timeout based approach. But it
> >> > can still produce an incomplete task list so it still consumes a lot of
> >> > resources to print a long list of tasks potentially while that list is not
> >> > useful for any evaluation. Maybe that is good enough. I don't know. I
> >> > would generally recommend to disable the whole thing with workloads with
> >> > many tasks though.
> >> >
> >>
> >> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
> >> syzbot in this case). Why not to allow those who want to avoid lockup to
> >> avoid lockup rather than forcing them to disable the whole thing?
> >
> > So you get an rcu lockup splat and what? Unless you have panic_on_rcu_stall
> > then this should be recoverable thing (assuming we cannot really
> > livelock as described by Dmitry).
> 
> 
> Should I add "vm.oom_dump_tasks = 0" to /etc/sysctl.conf on syzbot?
> It looks like it will make things faster, not pollute console output,
> prevent these stalls and that output does not seem to be too useful
> for debugging.

I think that oom_dump_tasks has only very limited usefulness for your
testing.

> But I am still concerned as to what has changed recently. Potentially
> this happens only on linux-next, at least that's where I saw all
> existing reports.
> New tasks seem to be added to the tail of the tasks list, but this
> part does not seem to be changed recently in linux-next..

Yes, that would be interesting to find out.
Dmitry Vyukov Sept. 8, 2018, 2 p.m. UTC | #13
On Fri, Sep 7, 2018 at 1:08 PM, Michal Hocko <mhocko@kernel.org> wrote:
>> >> >>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>> >> >>>> printing all entries might be helpful.
>> >> >>>
>> >> >>> Not really. It could be more confusing than helpful. The main purpose of
>> >> >>> the listing is to double check the list to understand the oom victim
>> >> >>> selection. If you have a partial list you simply cannot do that.
>> >> >>
>> >> >> It serves as a safeguard for avoiding RCU stall warnings.
>> >> >>
>> >> >>>
>> >> >>> If the iteration takes too long and I can imagine it does with zillions
>> >> >>> of tasks then the proper way around it is either release the lock
>> >> >>> periodically after N tasks is processed or outright skip the whole thing
>> >> >>> if there are too many tasks. The first option is obviously tricky to
>> >> >>> prevent from duplicate entries or other artifacts.
>> >> >>>
>> >> >>
>> >> >> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
>> >> >
>> >> > This would be a better variant of your timeout based approach. But it
>> >> > can still produce an incomplete task list so it still consumes a lot of
>> >> > resources to print a long list of tasks potentially while that list is not
>> >> > useful for any evaluation. Maybe that is good enough. I don't know. I
>> >> > would generally recommend to disable the whole thing with workloads with
>> >> > many tasks though.
>> >> >
>> >>
>> >> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
>> >> syzbot in this case). Why not to allow those who want to avoid lockup to
>> >> avoid lockup rather than forcing them to disable the whole thing?
>> >
>> > So you get an rcu lockup splat and what? Unless you have panic_on_rcu_stall
>> > then this should be recoverable thing (assuming we cannot really
>> > livelock as described by Dmitry).
>>
>>
>> Should I add "vm.oom_dump_tasks = 0" to /etc/sysctl.conf on syzbot?
>> It looks like it will make things faster, not pollute console output,
>> prevent these stalls and that output does not seem to be too useful
>> for debugging.
>
> I think that oom_dump_tasks has only very limited usefulness for your
> testing.
>
>> But I am still concerned as to what has changed recently. Potentially
>> this happens only on linux-next, at least that's where I saw all
>> existing reports.
>> New tasks seem to be added to the tail of the tasks list, but this
>> part does not seem to be changed recently in linux-next..
>
> Yes, that would be interesting to find out.


Looking at another similar report:
https://syzkaller.appspot.com/bug?extid=0d867757fdc016c0157e
It looks like it may just be syzkaller learning how to do fork bombs
after all (the same binary multiplied an unbounded number of times).
That probably required some creativity, because test programs do not
contain loops per se and the clone syscall does not accept a
start-function PC.
I will set vm.oom_dump_tasks = 0 and try to additionally restrict it
with cgroups.
Dmitry Vyukov Sept. 10, 2018, 2:36 p.m. UTC | #14
On Sat, Sep 8, 2018 at 4:00 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Fri, Sep 7, 2018 at 1:08 PM, Michal Hocko <mhocko@kernel.org> wrote:
>>> >> >>>> I know /proc/sys/vm/oom_dump_tasks . Showing some entries while not always
>>> >> >>>> printing all entries might be helpful.
>>> >> >>>
>>> >> >>> Not really. It could be more confusing than helpful. The main purpose of
>>> >> >>> the listing is to double check the list to understand the oom victim
>>> >> >>> selection. If you have a partial list you simply cannot do that.
>>> >> >>
>>> >> >> It serves as a safeguard for avoiding RCU stall warnings.
>>> >> >>
>>> >> >>>
>>> >> >>> If the iteration takes too long and I can imagine it does with zillions
>>> >> >>> of tasks then the proper way around it is either release the lock
>>> >> >>> periodically after N tasks is processed or outright skip the whole thing
>>> >> >>> if there are too many tasks. The first option is obviously tricky to
>>> >> >>> prevent from duplicate entries or other artifacts.
>>> >> >>>
>>> >> >>
>>> >> >> Can we add rcu_lock_break() like check_hung_uninterruptible_tasks() does?
>>> >> >
>>> >> > This would be a better variant of your timeout based approach. But it
>>> >> > can still produce an incomplete task list so it still consumes a lot of
>>> >> > resources to print a long list of tasks potentially while that list is not
>>> >> > useful for any evaluation. Maybe that is good enough. I don't know. I
>>> >> > would generally recommend to disable the whole thing with workloads with
>>> >> > many tasks though.
>>> >> >
>>> >>
>>> >> The "safeguard" is useful when there are _unexpectedly_ many tasks (like
>>> >> syzbot in this case). Why not to allow those who want to avoid lockup to
>>> >> avoid lockup rather than forcing them to disable the whole thing?
>>> >
>>> > So you get an rcu lockup splat and what? Unless you have panic_on_rcu_stall
>>> > then this should be recoverable thing (assuming we cannot really
>>> > livelock as described by Dmitry).
>>>
>>>
>>> Should I add "vm.oom_dump_tasks = 0" to /etc/sysctl.conf on syzbot?
>>> It looks like it will make things faster, not pollute console output,
>>> prevent these stalls and that output does not seem to be too useful
>>> for debugging.
>>
>> I think that oom_dump_tasks has only very limited usefulness for your
>> testing.
>>
>>> But I am still concerned as to what has changed recently. Potentially
>>> this happens only on linux-next, at least that's where I saw all
>>> existing reports.
>>> New tasks seem to be added to the tail of the tasks list, but this
>>> part does not seem to be changed recently in linux-next..
>>
>> Yes, that would be interesting to find out.
>
>
> Looking at another similar report:
> https://syzkaller.appspot.com/bug?extid=0d867757fdc016c0157e
> It looks like it can be just syzkaller learning how to do fork bombs
> after all (same binary multiplied infinite amount of times). Probably
> required some creativity because test programs do not contain loops
> per se and clone syscall does not accept start function pc.
> I will set vm.oom_dump_tasks = 0 and try to additionally restrict it
> with cgroups.


FTR, syzkaller now restricts test processes with pids.max=32. This
should prevent any fork bombs.
https://github.com/google/syzkaller/commit/f167cb6b0957d34f95b1067525aa87083f264035
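
For reference, the pids controller limit that commit applies boils down
to something like the following sketch (assuming a cgroup v2 hierarchy
mounted at /sys/fs/cgroup; the "syz-sandbox" group name here is
hypothetical, and syzkaller's actual setup is in the commit above):

	#include <errno.h>
	#include <fcntl.h>
	#include <string.h>
	#include <sys/stat.h>
	#include <unistd.h>

	static int write_file(const char *path, const char *data)
	{
		int fd = open(path, O_WRONLY);

		if (fd < 0)
			return -1;
		if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
			close(fd);
			return -1;
		}
		return close(fd);
	}

	int main(void)
	{
		/* Create a cgroup for the test processes; cap it at 32 tasks. */
		if (mkdir("/sys/fs/cgroup/syz-sandbox", 0755) && errno != EEXIST)
			return 1;
		if (write_file("/sys/fs/cgroup/syz-sandbox/pids.max", "32"))
			return 1;
		/* Writing "0" moves the calling process into the cgroup; every
		 * task it subsequently forks is then subject to pids.max.
		 */
		return write_file("/sys/fs/cgroup/syz-sandbox/cgroup.procs", "0");
	}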
Oleg Nesterov Sept. 11, 2018, 4:37 p.m. UTC | #15
On 09/06, Michal Hocko wrote:
>
> Ccing Oleg.

Thanks, but somehow I can't find this patch on marc.info ...

> So a question for Oleg I guess. Is it possible that for_each_process
> live locks (or stalls for way too long/unbounded amount of time) under
> heavy fork/exit loads?

Oh yes, it can... plus other problems.

I even sent the initial patches which introduce for_each_process_break/continue
long ago... I'll try to find them tomorrow and resend.

Oleg.
Oleg Nesterov Sept. 12, 2018, 4:45 p.m. UTC | #16
On 09/11, Oleg Nesterov wrote:
>
> On 09/06, Michal Hocko wrote:
> >
> > So a question for Oleg I guess. Is it possible that for_each_process
> > live locks (or stalls for way too long/unbounded amount of time) under
> > heavy fork/exit loads?
>
> Oh yes, it can... plus other problems.
>
> I even sent the initial patches which introduce for_each_process_break/continue
> a long ago... I'll try to find them tommorrow and resend.

Two years ago ;) I don't understand why they were ignored; please see
"[PATCH 0/2] introduce for_each_process_thread_break() and for_each_process_thread_continue()",
which I sent a minute ago.

However, I didn't notice that the subject mentions oom/dump_tasks... As
for dump_tasks(), it probably doesn't need the new helpers. I'll write
another email tomorrow, but perhaps the time limit is all we need.

Oleg.

Patch

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f10aa53..48e5bf6 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -399,14 +399,22 @@  static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 {
 	struct task_struct *p;
 	struct task_struct *task;
+	unsigned long start;
+	unsigned int skipped = 0;
 
 	pr_info("Tasks state (memory values in pages):\n");
 	pr_info("[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name\n");
 	rcu_read_lock();
+	start = jiffies;
 	for_each_process(p) {
 		if (oom_unkillable_task(p, memcg, nodemask))
 			continue;
 
+		if (time_after(jiffies, start + 3 * HZ)) {
+			skipped++;
+			continue;
+		}
+
 		task = find_lock_task_mm(p);
 		if (!task) {
 			/*
@@ -426,6 +434,8 @@  static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 		task_unlock(task);
 	}
 	rcu_read_unlock();
+	if (skipped)
+		pr_info("Printing of %u tasks omitted.\n", skipped);
 }
 
 static void dump_header(struct oom_control *oc, struct task_struct *p)