[v2] oom, oom_reaper: do not enqueue same task twice

Message ID 1d161137-55a5-126f-b47e-b2625bd798ca@i-love.sakura.ne.jp (mailing list archive)
State New, archived

Commit Message

Tetsuo Handa Jan. 26, 2019, 1:10 p.m. UTC
On 2019/01/26 20:29, Arkadiusz Miśkiewicz wrote:
> On 26/01/2019 12:09, Tetsuo Handa wrote:
>> Arkadiusz, will you try this patch?
> 
> 
> Works. Several tries and always getting 0 pids.current after ~1s.
> 

Thank you for testing.

I updated this patch to use tsk->signal->oom_mm (a snapshot of
tsk->mm saved by mark_oom_victim(tsk)) rather than raw tsk->mm,
so that we don't need to worry about the possibility of tsk->mm
changing across multiple wake_oom_reaper(tsk) calls.
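For reference, the snapshot mentioned above is taken in mark_oom_victim().
The excerpt below is a sketch of the relevant lines of mm/oom_kill.c as of
this kernel version (reconstructed for illustration; see the actual source
for the authoritative code):

	struct mm_struct *mm = tsk->mm;

	/* oom_mm is bound to the signal struct life time. */
	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) {
		mmgrab(tsk->signal->oom_mm);
		set_bit(MMF_OOM_VICTIM, &mm->flags);
	}

Because cmpxchg() installs the pointer at most once and mmgrab() pins the
mm_struct, tsk->signal->oom_mm stays valid for the rest of the task's
lifetime even after tsk->mm becomes NULL in exit_mm().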



From 9c9e935fc038342c48461aabca666f1b544e32b1 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Sat, 26 Jan 2019 21:57:25 +0900
Subject: [PATCH v2] oom, oom_reaper: do not enqueue same task twice

Arkadiusz reported that enabling memcg's group oom killing causes
strange memcg statistics where there is no task in a memcg despite
the number of tasks in that memcg not being 0. It turned out that
there is a bug in wake_oom_reaper() which allows enqueuing the same
task twice, which in turn makes it impossible to decrease the number
of tasks in that memcg due to a reference count leak.

This bug has existed since the OOM reaper became invokable from the
task_will_free_mem(current) path in out_of_memory() in Linux 4.7,
but memcg's group oom killing made it easier to trigger by calling
wake_oom_reaper() on the same task multiple times from a single
out_of_memory() request.

Fix this bug using the approach taken by commit 855b018325737f76
("oom, oom_reaper: disable oom_reaper for oom_kill_allocating_task").
As a side effect, this patch also avoids enqueuing multiple threads
sharing memory via the task_will_free_mem(current) path.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
Tested-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
Fixes: af8e15cc85a25315 ("oom, oom_reaper: do not enqueue task if it is on the oom_reaper_list head")
---
 mm/oom_kill.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)
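
To see how the double enqueue happens, below is a simplified sketch of the
pre-patch wake_oom_reaper() (compare the hunk at the bottom of this page,
which replaces the check; the enqueue tail is reconstructed from the kernel
source of this era and is illustrative rather than verbatim).
oom_reaper_list is a singly linked list threaded through
task_struct->oom_reaper_list, and the oom_reaper_list field of the last
entry is NULL. A task sitting at the tail of a list with more than one
entry therefore passes both checks and is enqueued a second time, taking a
second task_struct reference that is never dropped:

	static void wake_oom_reaper(struct task_struct *tsk)
	{
		/*
		 * tsk is already queued? This misses a queued tail entry:
		 * it is not the list head and its oom_reaper_list is NULL.
		 */
		if (tsk == oom_reaper_list || tsk->oom_reaper_list)
			return;

		get_task_struct(tsk);	/* leaked on a double enqueue */

		spin_lock(&oom_reaper_lock);
		/* push at the head; a double enqueue corrupts the list */
		tsk->oom_reaper_list = oom_reaper_list;
		oom_reaper_list = tsk;
		spin_unlock(&oom_reaper_lock);
		wake_up(&oom_reaper_wait);
	}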

Comments

Michal Hocko Jan. 27, 2019, 8:37 a.m. UTC | #1
On Sat 26-01-19 22:10:52, Tetsuo Handa wrote:
[...]
> From 9c9e935fc038342c48461aabca666f1b544e32b1 Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Date: Sat, 26 Jan 2019 21:57:25 +0900
> Subject: [PATCH v2] oom, oom_reaper: do not enqueue same task twice
> 
> Arkadiusz reported that enabling memcg's group oom killing causes
> strange memcg statistics where there is no task in a memcg despite
> the number of tasks in that memcg not being 0. It turned out that
> there is a bug in wake_oom_reaper() which allows enqueuing the same
> task twice, which in turn makes it impossible to decrease the number
> of tasks in that memcg due to a reference count leak.
> 
> This bug has existed since the OOM reaper became invokable from the
> task_will_free_mem(current) path in out_of_memory() in Linux 4.7,
> but memcg's group oom killing made it easier to trigger by calling
> wake_oom_reaper() on the same task multiple times from a single
> out_of_memory() request.
> 
> Fix this bug using the approach taken by commit 855b018325737f76
> ("oom, oom_reaper: disable oom_reaper for oom_kill_allocating_task").
> As a side effect, this patch also avoids enqueuing multiple threads
> sharing memory via the task_will_free_mem(current) path.

Thanks for the analysis and the patch. This should work, I believe,
but I am not really thrilled to overload the meaning of MMF_UNSTABLE.
The flag is meant to signal that accessing the address space is not
stable; it is not aimed at synchronizing the oom reaper with the oom
path.

Can we make use of mark_oom_victim directly? I haven't thought that
through fully, so I might be missing something, but this should
prevent repeated queueing as well.

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f0e8cd9edb1a..dac4f2197e53 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -690,7 +690,7 @@ static void mark_oom_victim(struct task_struct *tsk)
 	WARN_ON(oom_killer_disabled);
 	/* OOM killer might race with memcg OOM */
 	if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
-		return;
+		return false;
 
 	/* oom_mm is bound to the signal struct life time. */
 	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) {
@@ -707,6 +707,8 @@ static void mark_oom_victim(struct task_struct *tsk)
 	__thaw_task(tsk);
 	atomic_inc(&oom_victims);
 	trace_mark_victim(tsk->pid);
+
+	return true;
 }
 
 /**
@@ -873,7 +875,7 @@ static void __oom_kill_process(struct task_struct *victim)
 	 * reserves from the user space under its control.
 	 */
 	do_send_sig_info(SIGKILL, SEND_SIG_PRIV, victim, PIDTYPE_TGID);
-	mark_oom_victim(victim);
+	can_oom_reap = mark_oom_victim(victim);
 	pr_err("Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",
 		task_pid_nr(victim), victim->comm, K(victim->mm->total_vm),
 		K(get_mm_counter(victim->mm, MM_ANONPAGES)),
@@ -954,8 +956,8 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
 	 */
 	task_lock(p);
 	if (task_will_free_mem(p)) {
-		mark_oom_victim(p);
-		wake_oom_reaper(p);
+		if (mark_oom_victim(p))
+			wake_oom_reaper(p);
 		task_unlock(p);
 		put_task_struct(p);
 		return;
@@ -1084,8 +1086,8 @@ bool out_of_memory(struct oom_control *oc)
 	 * quickly exit and free its memory.
 	 */
 	if (task_will_free_mem(current)) {
-		mark_oom_victim(current);
-		wake_oom_reaper(current);
+		if (mark_oom_victim(current))
+			wake_oom_reaper(current);
 		return true;
 	}
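
Note that, for this sketch to compile, mark_oom_victim()'s return type
would also have to change from void to bool, which the hunks above do not
show:

	-static void mark_oom_victim(struct task_struct *tsk)
	+static bool mark_oom_victim(struct task_struct *tsk)
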
Tetsuo Handa Jan. 27, 2019, 10:56 a.m. UTC | #2
On 2019/01/27 17:37, Michal Hocko wrote:
> Thanks for the analysis and the patch. This should work, I believe,
> but I am not really thrilled to overload the meaning of MMF_UNSTABLE.
> The flag is meant to signal that accessing the address space is not
> stable; it is not aimed at synchronizing the oom reaper with the oom
> path.
>
> Can we make use of mark_oom_victim directly? I haven't thought that
> through fully, so I might be missing something, but this should
> prevent repeated queueing as well.

Yes, TIF_MEMDIE would work. But you are planning to remove TIF_MEMDIE. Also,
TIF_MEMDIE can't avoid enqueuing many threads sharing an mm_struct to the OOM
reaper. There is no need to enqueue many threads sharing an mm_struct because
the OOM reaper acts on the mm_struct rather than the task_struct. Thus,
enqueuing based on a per-mm_struct flag sounds better, but MMF_OOM_VICTIM
cannot be set from wake_oom_reaper(victim) because the victim's mm might
already be inside exit_mmap() by the time wake_oom_reaper(victim) is called
after task_unlock(victim).

We could reintroduce the MMF_OOM_KILLED flag from commit 855b018325737f76
("oom, oom_reaper: disable oom_reaper for oom_kill_allocating_task")
if you don't like overloading the meaning of MMF_UNSTABLE. But since
MMF_UNSTABLE is available in Linux 4.9+ kernels (which covers all LTS stable
versions with OOM reaper support), we can temporarily use MMF_UNSTABLE
for ease of backporting.
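
To make the per-task versus per-mm distinction concrete, here is a
hypothetical fragment (not from any patch in this thread): tasks created
with CLONE_VM share one mm_struct while each owns its own task_struct, so a
per-task gate such as TIF_MEMDIE lets every such task through once, whereas
a per-mm gate lets only the first one through:

	/* per-task gate: passes once for each task sharing the mm */
	if (!test_and_set_tsk_thread_flag(victim, TIF_MEMDIE))
		wake_oom_reaper(victim);

	/* per-mm gate: passes once for the whole address space */
	if (!test_and_set_bit(MMF_UNSTABLE, &victim->signal->oom_mm->flags))
		wake_oom_reaper(victim);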
Michal Hocko Jan. 27, 2019, 11:40 a.m. UTC | #3
On Sun 27-01-19 19:56:06, Tetsuo Handa wrote:
> On 2019/01/27 17:37, Michal Hocko wrote:
> > Thanks for the analysis and the patch. This should work, I believe,
> > but I am not really thrilled to overload the meaning of MMF_UNSTABLE.
> > The flag is meant to signal that accessing the address space is not
> > stable; it is not aimed at synchronizing the oom reaper with the oom
> > path.
> >
> > Can we make use of mark_oom_victim directly? I haven't thought that
> > through fully, so I might be missing something, but this should
> > prevent repeated queueing as well.
> 
> Yes, TIF_MEMDIE would work. But you are planning to remove TIF_MEMDIE. Also,
> TIF_MEMDIE can't avoid enqueuing many threads sharing an mm_struct to the OOM
> reaper. There is no need to enqueue many threads sharing an mm_struct because
> the OOM reaper acts on the mm_struct rather than the task_struct. Thus,
> enqueuing based on a per-mm_struct flag sounds better, but MMF_OOM_VICTIM
> cannot be set from wake_oom_reaper(victim) because the victim's mm might
> already be inside exit_mmap() by the time wake_oom_reaper(victim) is called
> after task_unlock(victim).
>
> We could reintroduce the MMF_OOM_KILLED flag from commit 855b018325737f76
> ("oom, oom_reaper: disable oom_reaper for oom_kill_allocating_task")
> if you don't like overloading the meaning of MMF_UNSTABLE. But since
> MMF_UNSTABLE is available in Linux 4.9+ kernels (which covers all LTS stable
> versions with OOM reaper support), we can temporarily use MMF_UNSTABLE
> for ease of backporting.

I agree that a per-mm state is more optimal, but I would rather fix the
issue in a clear way first and only then think about an optimization on
top. Queueing based on mark_oom_victim (whatever it uses to guarantee
the victim is marked atomically and only once) makes sense from a
conceptual point of view, and it is a natural place to start.
MMF_UNSTABLE has a completely different purpose. So unless you see a
correctness issue with that, I would rather go that way.

Patch

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f0e8cd9..057bfee 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -505,14 +505,6 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 	struct vm_area_struct *vma;
 	bool ret = true;
 
-	/*
-	 * Tell all users of get_user/copy_from_user etc... that the content
-	 * is no longer stable. No barriers really needed because unmapping
-	 * should imply barriers already and the reader would hit a page fault
-	 * if it stumbled over a reaped memory.
-	 */
-	set_bit(MMF_UNSTABLE, &mm->flags);
-
 	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
@@ -647,8 +639,13 @@ static int oom_reaper(void *unused)
 
 static void wake_oom_reaper(struct task_struct *tsk)
 {
-	/* tsk is already queued? */
-	if (tsk == oom_reaper_list || tsk->oom_reaper_list)
+	/*
+	 * Tell all users of get_user/copy_from_user etc... that the content
+	 * is no longer stable. No barriers really needed because unmapping
+	 * should imply barriers already and the reader would hit a page fault
+	 * if it stumbled over a reaped memory.
+	 */
+	if (test_and_set_bit(MMF_UNSTABLE, &tsk->signal->oom_mm->flags))
 		return;
 
 	get_task_struct(tsk);
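
For reference, wake_oom_reaper() as it would read with this patch applied
(the tail of the function is reconstructed from the surrounding kernel
source and should be treated as a sketch):

	static void wake_oom_reaper(struct task_struct *tsk)
	{
		/*
		 * Tell all users of get_user/copy_from_user etc... that the content
		 * is no longer stable. No barriers really needed because unmapping
		 * should imply barriers already and the reader would hit a page fault
		 * if it stumbled over a reaped memory.
		 */
		if (test_and_set_bit(MMF_UNSTABLE, &tsk->signal->oom_mm->flags))
			return;

		get_task_struct(tsk);

		spin_lock(&oom_reaper_lock);
		tsk->oom_reaper_list = oom_reaper_list;
		oom_reaper_list = tsk;
		spin_unlock(&oom_reaper_lock);
		wake_up(&oom_reaper_wait);
	}

Since test_and_set_bit() is atomic, only the first call for a given oom_mm
proceeds to take a task reference and enqueue; every later call, whether
for the same task or for another task sharing the mm, returns early.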