| Message ID | 1547636121-9229-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp |
| --- | --- |
| State | New, archived |
| Series | mm, oom: Tolerate processes sharing mm with different view of oom_score_adj. |
On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
>
> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
>
> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
>
> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().

I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.

I can be convinced otherwise but that really requires some _real_ usecase with an explanation why there is no other way. Until then

Nacked-by: Michal Hocko <mhocko@suse.com>

> [1] https://lkml.kernel.org/r/20181008011931epcms1p82dd01b7e5c067ea99946418bc97de46a@epcms1p8
>
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Reported-by: Yong-Taek Lee <ytk.lee@samsung.com>
> ---
>  fs/proc/base.c     | 46 ----------------------------------------------
>  include/linux/mm.h |  2 --
>  mm/oom_kill.c      | 10 ++++++----
>  3 files changed, 6 insertions(+), 52 deletions(-)
>
> diff --git a/fs/proc/base.c b/fs/proc/base.c
> index 633a634..41ece8f 100644
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -1020,7 +1020,6 @@ static ssize_t oom_adj_read(struct file *file, char __user *buf, size_t count,
>  static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>  {
>          static DEFINE_MUTEX(oom_adj_mutex);
> -        struct mm_struct *mm = NULL;
>          struct task_struct *task;
>          int err = 0;
>
> @@ -1050,55 +1049,10 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>                  }
>          }
>
> -        /*
> -         * Make sure we will check other processes sharing the mm if this is
> -         * not vfrok which wants its own oom_score_adj.
> -         * pin the mm so it doesn't go away and get reused after task_unlock
> -         */
> -        if (!task->vfork_done) {
> -                struct task_struct *p = find_lock_task_mm(task);
> -
> -                if (p) {
> -                        if (atomic_read(&p->mm->mm_users) > 1) {
> -                                mm = p->mm;
> -                                mmgrab(mm);
> -                        }
> -                        task_unlock(p);
> -                }
> -        }
> -
>          task->signal->oom_score_adj = oom_adj;
>          if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
>                  task->signal->oom_score_adj_min = (short)oom_adj;
>          trace_oom_score_adj_update(task);
> -
> -        if (mm) {
> -                struct task_struct *p;
> -
> -                rcu_read_lock();
> -                for_each_process(p) {
> -                        if (same_thread_group(task, p))
> -                                continue;
> -
> -                        /* do not touch kernel threads or the global init */
> -                        if (p->flags & PF_KTHREAD || is_global_init(p))
> -                                continue;
> -
> -                        task_lock(p);
> -                        if (!p->vfork_done && process_shares_mm(p, mm)) {
> -                                pr_info("updating oom_score_adj for %d (%s) from %d to %d because it shares mm with %d (%s). Report if this is unexpected.\n",
> -                                        task_pid_nr(p), p->comm,
> -                                        p->signal->oom_score_adj, oom_adj,
> -                                        task_pid_nr(task), task->comm);
> -                                p->signal->oom_score_adj = oom_adj;
> -                                if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
> -                                        p->signal->oom_score_adj_min = (short)oom_adj;
> -                        }
> -                        task_unlock(p);
> -                }
> -                rcu_read_unlock();
> -                mmdrop(mm);
> -        }
>  err_unlock:
>          mutex_unlock(&oom_adj_mutex);
>          put_task_struct(task);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 80bb640..28879c1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2690,8 +2690,6 @@ static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
>  }
>  #endif /* __HAVE_ARCH_GATE_AREA */
>
> -extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);
> -
>  #ifdef CONFIG_SYSCTL
>  extern int sysctl_drop_caches;
>  int drop_caches_sysctl_handler(struct ctl_table *, int,
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index f0e8cd9..c7005b1 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -478,7 +478,7 @@ static void dump_header(struct oom_control *oc, struct task_struct *p)
>   * task's threads: if one of those is using this mm then this task was also
>   * using it.
>   */
> -bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
> +static bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
>  {
>          struct task_struct *t;
>
> @@ -896,12 +896,14 @@ static void __oom_kill_process(struct task_struct *victim)
>                          continue;
>                  if (same_thread_group(p, victim))
>                          continue;
> -                if (is_global_init(p)) {
> +                if (is_global_init(p) ||
> +                    p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) {
>                          can_oom_reap = false;
> -                        set_bit(MMF_OOM_SKIP, &mm->flags);
> -                        pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
> +                        if (!test_bit(MMF_OOM_SKIP, &mm->flags))
> +                                pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
>                                          task_pid_nr(victim), victim->comm,
>                                          task_pid_nr(p), p->comm);
> +                        set_bit(MMF_OOM_SKIP, &mm->flags);
>                          continue;
>                  }
>                  /*
> --
> 1.8.3.1
On 2019/01/16 20:09, Michal Hocko wrote:
> On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
>> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
>>
>> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
>>
>> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
>>
>> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().
>
> I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.

I do care about the latency. Holding RCU for more than 2 minutes is insane.

----------
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sched.h>
#include <sys/mman.h>
#include <signal.h>

#define STACKSIZE 8192

static int child(void *unused)
{
        pause();
        return 0;
}

int main(int argc, char *argv[])
{
        int fd = open("/proc/self/oom_score_adj", O_WRONLY);
        int i;
        char *stack = mmap(NULL, STACKSIZE, PROT_WRITE | PROT_READ,
                           MAP_ANONYMOUS | MAP_PRIVATE, EOF, 0);

        for (i = 0; i < 8192 * 4; i++)
                if (clone(child, stack + STACKSIZE, CLONE_VM, NULL) == -1)
                        break;
        write(fd, "0\n", 2);
        kill(0, SIGSEGV);
        return 0;
}
----------

> I can be convinced otherwise but that really requires some _real_ usecase with an explanation why there is no other way. Until then
>
> Nacked-by: Michal Hocko <mhocko@suse.com>
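As a side note for readers (not part of the original exchange): the reproducer above does not report how long the final write() actually takes. A minimal timing harness along the following lines, a sketch rather than anything posted in the thread, could replace the bare write() to observe the __set_oom_adj() latency being argued about.

```c
/* Sketch only (not from the thread): time the oom_score_adj write after the
 * CLONE_VM children from the reproducer above have been created. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void timed_oom_score_adj_write(void)
{
        struct timespec t0, t1;
        int fd = open("/proc/self/oom_score_adj", O_WRONLY);

        if (fd < 0)
                return;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        write(fd, "0\n", 2);    /* walks every task sharing the mm on 4.8+ kernels */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("oom_score_adj write took %.3f seconds\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        close(fd);
}
```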
On Wed 16-01-19 20:30:25, Tetsuo Handa wrote:
> On 2019/01/16 20:09, Michal Hocko wrote:
> > On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
> >> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
> >>
> >> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
> >>
> >> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
> >>
> >> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().
> >
> > I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.
>
> I do care about the latency. Holding RCU for more than 2 minutes is insane.

Creating 8k threads could be considered insane as well. But more seriously. I absolutely do not insist on holding a single RCU section for the whole operation. But that doesn't really mean that we want to revert these changes. for_each_process is by far not only called from this path.
On 2019/01/16 21:19, Michal Hocko wrote:
> On Wed 16-01-19 20:30:25, Tetsuo Handa wrote:
>> On 2019/01/16 20:09, Michal Hocko wrote:
>>> On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
>>>> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
>>>>
>>>> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
>>>>
>>>> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
>>>>
>>>> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().
>>>
>>> I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.
>>
>> I do care about the latency. Holding RCU for more than 2 minutes is insane.
>
> Creating 8k threads could be considered insane as well. But more seriously. I absolutely do not insist on holding a single RCU section for the whole operation. But that doesn't really mean that we want to revert these changes. for_each_process is by far not only called from this path.

Unlike check_hung_uninterruptible_tasks() where failing to resume after breaking RCU section is tolerable, failing to resume after breaking RCU section for __set_oom_adj() is not tolerable; it leaves the possibility of different oom_score_adj. Unless it is inevitable (e.g. SysRq-t), I think that calling printk() on each thread from RCU section is a poor choice.

What if thousands of threads concurrently called __set_oom_adj() when each __set_oom_adj() call involves printk() on thousands of threads which can take more than 2 minutes? How long will it take to complete?
On Wed 16-01-19 22:32:50, Tetsuo Handa wrote:
> On 2019/01/16 21:19, Michal Hocko wrote:
> > On Wed 16-01-19 20:30:25, Tetsuo Handa wrote:
> >> On 2019/01/16 20:09, Michal Hocko wrote:
> >>> On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
> >>>> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
> >>>>
> >>>> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
> >>>>
> >>>> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
> >>>>
> >>>> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().
> >>>
> >>> I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.
> >>
> >> I do care about the latency. Holding RCU for more than 2 minutes is insane.
> >
> > Creating 8k threads could be considered insane as well. But more seriously. I absolutely do not insist on holding a single RCU section for the whole operation. But that doesn't really mean that we want to revert these changes. for_each_process is by far not only called from this path.
>
> Unlike check_hung_uninterruptible_tasks() where failing to resume after breaking RCU section is tolerable, failing to resume after breaking RCU section for __set_oom_adj() is not tolerable; it leaves the possibility of different oom_score_adj.

Then make sure that no threads are really missed. Really I fail to see what you are actually arguing about. for_each_process is expensive. No question about that. If you can replace it for this specific and odd usecase then go ahead. But there is absolutely zero reason to have a broken oom_score_adj semantic just because somebody might have thousands of threads and want to update the score faster.

> Unless it is inevitable (e.g. SysRq-t), I think that calling printk() on each thread from RCU section is a poor choice.
>
> What if thousands of threads concurrently called __set_oom_adj() when each __set_oom_adj() call involves printk() on thousands of threads which can take more than 2 minutes? How long will it take to complete?

I really do not mind removing printk if that is what really bothers users. The primary purpose of this printk was to catch users who wouldn't expect this change. There were exactly zero.
On 2019/01/16 22:41, Michal Hocko wrote:
>>>> I do care about the latency. Holding RCU for more than 2 minutes is insane.
>>>
>>> Creating 8k threads could be considered insane as well. But more seriously. I absolutely do not insist on holding a single RCU section for the whole operation. But that doesn't really mean that we want to revert these changes. for_each_process is by far not only called from this path.
>>
>> Unlike check_hung_uninterruptible_tasks() where failing to resume after breaking RCU section is tolerable, failing to resume after breaking RCU section for __set_oom_adj() is not tolerable; it leaves the possibility of different oom_score_adj.
>
> Then make sure that no threads are really missed. Really I fail to see what you are actually arguing about.

Impossible, unless we hold the global rw_semaphore for read during copy_process()/do_exit() while holding the global rw_semaphore for write during __set_oom_adj(). We won't accept such a giant lock in order to close the __set_oom_adj() race.

> for_each_process is expensive. No question about that.

I'm saying that printk() is far more expensive. Current __set_oom_adj() code allows wasting CPU by printing pointless messages

[ 1270.265958][ T8549] updating oom_score_adj for 30876 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.265959][ T8549] updating oom_score_adj for 30877 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.265961][ T8549] updating oom_score_adj for 30878 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.265964][ T8549] updating oom_score_adj for 30879 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.389516][ T8549] updating oom_score_adj for 30880 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.395223][ T8549] updating oom_score_adj for 30881 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.400871][ T8549] updating oom_score_adj for 30882 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.406757][ T8549] updating oom_score_adj for 30883 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.
[ 1270.412509][ T8549] updating oom_score_adj for 30884 (a.out) from 0 to 0 because it shares mm with 8549 (a.out). Report if this is unexpected.

for _longer than one month_ ('2 minutes for one __set_oom_adj() call' x '32000 thread groups concurrently do "echo 0 > /proc/self/oom_score_adj"' = 44 days to complete). This is nothing but a DoS attack vector.

> If you can replace it for this specific and odd usecase then go ahead. But there is absolutely zero reason to have a broken oom_score_adj semantic just because somebody might have thousands of threads and want to update the score faster.
>
>> Unless it is inevitable (e.g. SysRq-t), I think that calling printk() on each thread from RCU section is a poor choice.
>>
>> What if thousands of threads concurrently called __set_oom_adj() when each __set_oom_adj() call involves printk() on thousands of threads which can take more than 2 minutes? How long will it take to complete?
>
> I really do not mind removing printk if that is what really bothers users. The primary purpose of this printk was to catch users who wouldn't expect this change. There were exactly zero.

This printk() is pointless. There is no need to flood like above; once is enough. What is bad is that it says "Report if this is unexpected." rather than "Report if you saw this message.". If the user thinks "Oh, what a nifty caretaker. I need to do "echo 0 > /proc/self/oom_score_adj" only once.", that user won't report it. And I estimate that we will need to wait several more years to make sure that all users upgrade their kernels to Linux 4.8+, which has this __set_oom_adj() code.

So far "exactly zero" does not mean "changing oom_score_adj semantics is allowable". (But so far "exactly zero" might suggest that there is absolutely no "CLONE_VM without CLONE_SIGHAND" user at all, and thus preserving the current __set_oom_adj() code makes no sense.)

That said, I think that asking "CLONE_VM without CLONE_SIGHAND" users at copy_process() why they want that combination can improve the code.

--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1732,6 +1732,15 @@ static __latent_entropy struct task_struct *copy_process(
         }
 
         /*
+         * Shared VM without signal handlers leads to complicated OOM-killer
+         * handling. Let's ask such users why they want such combination.
+         */
+        if ((clone_flags & CLONE_VM) && !(clone_flags & CLONE_SIGHAND))
+                pr_warn_once("***** %s(%d) is trying to create a thread sharing memory without signal handlers. Please be sure to report to linux-mm@kvack.org the reason why you want to use this combination. Otherwise, this combination will be forbidden in future kernels in order to simplify OOM-killer handling. *****\n",
+                             current->comm, task_pid_nr(current));
+
+
+        /*
          * Force any signals received before this point to be delivered
          * before the fork happens. Collect up signals sent to multiple
          * processes that happen during the fork and delay them so that

If we wait long enough and no such user shows up, we can forbid that combination and eliminate the OOM handling code for "CLONE_VM without CLONE_SIGHAND" which is forcing the current __set_oom_adj() code.
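For orientation (an illustrative aside, not part of the thread): the flag combination that the proposed pr_warn_once() above would fire on is the same one used by the reproducer earlier in the thread, i.e. a clone() call shaped roughly like the sketch below. Function and variable names here are made up for illustration.

```c
/* Illustrative sketch: CLONE_VM without CLONE_SIGHAND creates a child that
 * shares the address space but gets its own signal_struct, and therefore its
 * own oom_score_adj -- the combination the proposed warning would flag. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static int worker(void *unused)
{
        pause();        /* just park the child */
        return 0;
}

int spawn_mm_sharing_child(void)
{
        const size_t stack_size = 64 * 1024;
        char *stack = malloc(stack_size);

        if (!stack)
                return -1;
        /* the stack grows down on common architectures, so pass the top */
        return clone(worker, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
}
```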
[I am mostly offline for the rest of the week]

On Wed 16-01-19 14:41:31, Michal Hocko wrote:
> On Wed 16-01-19 22:32:50, Tetsuo Handa wrote:
> > On 2019/01/16 21:19, Michal Hocko wrote:
> > > On Wed 16-01-19 20:30:25, Tetsuo Handa wrote:
> > >> On 2019/01/16 20:09, Michal Hocko wrote:
> > >>> On Wed 16-01-19 19:55:21, Tetsuo Handa wrote:
> > >>>> This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.
> > >>>>
> > >>>> Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.
> > >>>>
> > >>>> Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.
> > >>>>
> > >>>> If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().
> > >>>
> > >>> I really do not think we care about latency. I consider the overall API sanity much more important. Besides that, the original report you are referring to was never explained/shown to represent a real world usecase. oom_score_adj is not really an interface to be tweaked in hot paths.
> > >>
> > >> I do care about the latency. Holding RCU for more than 2 minutes is insane.
> > >
> > > Creating 8k threads could be considered insane as well. But more seriously. I absolutely do not insist on holding a single RCU section for the whole operation. But that doesn't really mean that we want to revert these changes. for_each_process is by far not only called from this path.
> >
> > Unlike check_hung_uninterruptible_tasks() where failing to resume after breaking RCU section is tolerable, failing to resume after breaking RCU section for __set_oom_adj() is not tolerable; it leaves the possibility of different oom_score_adj.
>
> Then make sure that no threads are really missed. Really I fail to see what you are actually arguing about. for_each_process is expensive. No question about that. If you can replace it for this specific and odd usecase then go ahead. But there is absolutely zero reason to have a broken oom_score_adj semantic just because somebody might have thousands of threads and want to update the score faster.

Btw. the current implementation annoyance is caused by the fact that the oom_score_adj is per signal_struct rather than mm_struct. The reason is that we really need

        if (!vfork()) {
                set_oom_score_adj()
                exec()
        }

to work properly. One way around that is to special case oom_score_adj for tasks in vfork and store their shadow value into the task_struct. The shadow value would get transferred over to the mm struct once a new one is allocated.

So something very coarsely like

        short tsk_get_oom_score_adj(struct task_struct *tsk)
        {
                if (tsk->oom_score_adj != OOM_SCORE_ADJ_INVALID)
                        return tsk->oom_score_adj;
                return tsk->signal->oom_score_adj;
        }

Use this helper instead of direct oom_score_adj usage. Then we need to special case the setting in __set_oom_adj and dup_mm to copy the value over instead of copy_signal. I think this is doable.
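A slightly more filled-in model of that sketch (purely illustrative, written as freestanding C rather than real kernel code; the sentinel value and the commit helper are assumptions about how the handoff could look, not anything posted in the thread):

```c
/* Illustrative model of the proposal above, not actual kernel code.
 * OOM_SCORE_ADJ_INVALID is a hypothetical sentinel meaning "no shadow value";
 * the real kernel structures obviously carry far more state. */
#define OOM_SCORE_ADJ_INVALID   (-32768)        /* hypothetical sentinel */

struct signal_struct {
        short oom_score_adj;            /* value shared by the thread group */
};

struct task_struct {
        short oom_score_adj;            /* shadow value, only valid during vfork */
        struct signal_struct *signal;
};

/* Readers use the shadow value while the task is between vfork() and exec(),
 * and fall back to the shared per-signal_struct value otherwise. */
static short tsk_get_oom_score_adj(struct task_struct *tsk)
{
        if (tsk->oom_score_adj != OOM_SCORE_ADJ_INVALID)
                return tsk->oom_score_adj;
        return tsk->signal->oom_score_adj;
}

/* On exec (i.e. when a new mm is set up) the shadow value would be folded
 * back into the shared location and invalidated again -- sketch only. */
static void tsk_commit_oom_score_adj(struct task_struct *tsk)
{
        if (tsk->oom_score_adj != OOM_SCORE_ADJ_INVALID) {
                tsk->signal->oom_score_adj = tsk->oom_score_adj;
                tsk->oom_score_adj = OOM_SCORE_ADJ_INVALID;
        }
}
```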
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 633a634..41ece8f 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -1020,7 +1020,6 @@ static ssize_t oom_adj_read(struct file *file, char __user *buf, size_t count,
 static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
 {
         static DEFINE_MUTEX(oom_adj_mutex);
-        struct mm_struct *mm = NULL;
         struct task_struct *task;
         int err = 0;
 
@@ -1050,55 +1049,10 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
                 }
         }
 
-        /*
-         * Make sure we will check other processes sharing the mm if this is
-         * not vfrok which wants its own oom_score_adj.
-         * pin the mm so it doesn't go away and get reused after task_unlock
-         */
-        if (!task->vfork_done) {
-                struct task_struct *p = find_lock_task_mm(task);
-
-                if (p) {
-                        if (atomic_read(&p->mm->mm_users) > 1) {
-                                mm = p->mm;
-                                mmgrab(mm);
-                        }
-                        task_unlock(p);
-                }
-        }
-
         task->signal->oom_score_adj = oom_adj;
         if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
                 task->signal->oom_score_adj_min = (short)oom_adj;
         trace_oom_score_adj_update(task);
-
-        if (mm) {
-                struct task_struct *p;
-
-                rcu_read_lock();
-                for_each_process(p) {
-                        if (same_thread_group(task, p))
-                                continue;
-
-                        /* do not touch kernel threads or the global init */
-                        if (p->flags & PF_KTHREAD || is_global_init(p))
-                                continue;
-
-                        task_lock(p);
-                        if (!p->vfork_done && process_shares_mm(p, mm)) {
-                                pr_info("updating oom_score_adj for %d (%s) from %d to %d because it shares mm with %d (%s). Report if this is unexpected.\n",
-                                        task_pid_nr(p), p->comm,
-                                        p->signal->oom_score_adj, oom_adj,
-                                        task_pid_nr(task), task->comm);
-                                p->signal->oom_score_adj = oom_adj;
-                                if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
-                                        p->signal->oom_score_adj_min = (short)oom_adj;
-                        }
-                        task_unlock(p);
-                }
-                rcu_read_unlock();
-                mmdrop(mm);
-        }
 err_unlock:
         mutex_unlock(&oom_adj_mutex);
         put_task_struct(task);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb640..28879c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2690,8 +2690,6 @@ static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
 }
 #endif /* __HAVE_ARCH_GATE_AREA */
 
-extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);
-
 #ifdef CONFIG_SYSCTL
 extern int sysctl_drop_caches;
 int drop_caches_sysctl_handler(struct ctl_table *, int,
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f0e8cd9..c7005b1 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -478,7 +478,7 @@ static void dump_header(struct oom_control *oc, struct task_struct *p)
  * task's threads: if one of those is using this mm then this task was also
  * using it.
  */
-bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
+static bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
 {
         struct task_struct *t;
 
@@ -896,12 +896,14 @@ static void __oom_kill_process(struct task_struct *victim)
                         continue;
                 if (same_thread_group(p, victim))
                         continue;
-                if (is_global_init(p)) {
+                if (is_global_init(p) ||
+                    p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN) {
                         can_oom_reap = false;
-                        set_bit(MMF_OOM_SKIP, &mm->flags);
-                        pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
+                        if (!test_bit(MMF_OOM_SKIP, &mm->flags))
+                                pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
                                         task_pid_nr(victim), victim->comm,
                                         task_pid_nr(p), p->comm);
+                        set_bit(MMF_OOM_SKIP, &mm->flags);
                         continue;
                 }
                 /*
This patch reverts both commit 44a70adec910d692 ("mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj") and commit 97fd49c2355ffded ("mm, oom: kill all tasks sharing the mm") in order to close a race and reduce the latency at __set_oom_adj(), and reduces the warning at __oom_kill_process() in order to minimize the latency.

Commit 36324a990cf578b5 ("oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space") introduced the worst case mentioned in 44a70adec910d692. But since the OOM killer skips mm with MMF_OOM_SKIP set, only administrators can trigger the worst case.

Since 44a70adec910d692 did not take latency into account, we can hold RCU for minutes and trigger RCU stall warnings by calling printk() on many thousands of thread groups. Even without calling printk(), the latency is mentioned by Yong-Taek Lee [1]. And I noticed that 44a70adec910d692 is racy, and trying to fix the race will require a global lock which is too costly for rare events.

If the worst case in 44a70adec910d692 happens, it is an administrator's request. Therefore, tolerate the worst case and speed up __set_oom_adj().

[1] https://lkml.kernel.org/r/20181008011931epcms1p82dd01b7e5c067ea99946418bc97de46a@epcms1p8

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: Yong-Taek Lee <ytk.lee@samsung.com>
---
 fs/proc/base.c     | 46 ----------------------------------------------
 include/linux/mm.h |  2 --
 mm/oom_kill.c      | 10 ++++++----
 3 files changed, 6 insertions(+), 52 deletions(-)