
[RFC,0/3] rework mmap-exit vs. oom_reaper handover

Message ID 20180910125513.311-1-mhocko@kernel.org (mailing list archive)

Message

Michal Hocko Sept. 10, 2018, 12:55 p.m. UTC
I am sending this as a follow-up to yet another timeout-based proposal
by Tetsuo, because I was accused of pushing for a solution that I am
not actually working on.

This is a very coarse implementation of the idea I've had before.
Please note that I haven't tested it yet. It is mostly to show the
direction I would wish to go for.

I have already explained why I hate timeout-based retry loops and why
I believe a more direct, feedback-based approach is much better.

The locking protocol between the oom_reaper and the exit path is as
follows.

All parts of the exit path which must not race with the oom_reaper
take the exclusive lock. Once the exit path has passed the point where
no blocking locks are taken, it clears mm->mmap under the exclusive
lock. It would be trivial to use an MMF_$FOO flag for this purpose
instead if people think that is safer or preferable for any other
reason.

With these patches the oom_reaper is the only one which sets
MMF_OOM_SKIP, which is IMHO better because it makes the interaction
much easier to understand.

Last but not least, this is only the core of the implementation. We
might want to tune the number of oom_reaper retries or think about a
more effective tear-down when there are multiple oom victims in the
queue. I would simply deal with those later, once we see clear
evidence that it is needed, ideally with a workload description and
some numbers.

If this looks like a feasible idea I can spend more time on it and
turn it into something mergeable.

Comments

Tetsuo Handa Sept. 10, 2018, 2:59 p.m. UTC | #1
Thank you for proposing a patch.

On 2018/09/10 21:55, Michal Hocko wrote:
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 5f2b2b1..99bb9ce 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3091,7 +3081,31 @@ void exit_mmap(struct mm_struct *mm)
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
>  	unmap_vmas(&tlb, vma, 0, -1);

unmap_vmas() might involve hugepage path. Is it safe to race with the OOM reaper?

  i_mmap_lock_write(vma->vm_file->f_mapping);
  __unmap_hugepage_range_final(tlb, vma, start, end, NULL);
  i_mmap_unlock_write(vma->vm_file->f_mapping);

> -	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> +
> +	/* oom_reaper cannot race with the page tables teardown */
> +	if (oom)
> +		down_write(&mm->mmap_sem);
> +	/*
> +	 * Hide vma from rmap and truncate_pagecache before freeing
> +	 * pgtables
> +	 */
> +	while (vma) {
> +		unlink_anon_vmas(vma);
> +		unlink_file_vma(vma);
> +		vma = vma->vm_next;
> +	}
> +	vma = mm->mmap;
> +	if (oom) {
> +		/*
> +		 * the exit path is guaranteed to finish without any unbound
> +		 * blocking at this stage so make it clear to the caller.
> +		 */
> +		mm->mmap = NULL;
> +		up_write(&mm->mmap_sem);
> +	}
> +
> +	free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> +			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);

Are you trying to inline free_pgtables() here? But some architectures are
using hugetlb_free_pgd_range() which does more than free_pgd_range(). Are
they really safe (with regard to memory allocation dependency and flags
manipulation) ?

>  	tlb_finish_mmu(&tlb, 0, -1);
>  
>  	/*

Also, how do you plan to give this thread enough CPU resources, for this thread might
be SCHED_IDLE priority? Since this thread might not be a thread which is exiting
(because this is merely a thread which invoked __mmput()), we can't use boosting
approach. CPU resource might be given eventually unless schedule_timeout_*() is used,
but it might be deadly slow if allocating threads keep wasting CPU resources.

Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?
Michal Hocko Sept. 10, 2018, 3:11 p.m. UTC | #2
On Mon 10-09-18 23:59:02, Tetsuo Handa wrote:
> Thank you for proposing a patch.
> 
> On 2018/09/10 21:55, Michal Hocko wrote:
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 5f2b2b1..99bb9ce 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -3091,7 +3081,31 @@ void exit_mmap(struct mm_struct *mm)
> >  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> >  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
> >  	unmap_vmas(&tlb, vma, 0, -1);
> 
> unmap_vmas() might involve hugepage path. Is it safe to race with the OOM reaper?
> 
>   i_mmap_lock_write(vma->vm_file->f_mapping);
>   __unmap_hugepage_range_final(tlb, vma, start, end, NULL);
>   i_mmap_unlock_write(vma->vm_file->f_mapping);

We do not unmap hugetlb pages in the oom reaper.

> > -	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> > +
> > +	/* oom_reaper cannot race with the page tables teardown */
> > +	if (oom)
> > +		down_write(&mm->mmap_sem);
> > +	/*
> > +	 * Hide vma from rmap and truncate_pagecache before freeing
> > +	 * pgtables
> > +	 */
> > +	while (vma) {
> > +		unlink_anon_vmas(vma);
> > +		unlink_file_vma(vma);
> > +		vma = vma->vm_next;
> > +	}
> > +	vma = mm->mmap;
> > +	if (oom) {
> > +		/*
> > +		 * the exit path is guaranteed to finish without any unbound
> > +		 * blocking at this stage so make it clear to the caller.
> > +		 */
> > +		mm->mmap = NULL;
> > +		up_write(&mm->mmap_sem);
> > +	}
> > +
> > +	free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> > +			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> 
> Are you trying to inline free_pgtables() here? But some architectures are
> using hugetlb_free_pgd_range() which does more than free_pgd_range(). Are
> they really safe (with regard to memory allocation dependency and flags
> manipulation) ?

This is something for me to double check of course. A cursory look
suggests that ppc just does some address manipulations because
free_pgtables can be called from the unmap path and that might cut a
mapping into non-hugetlb pieces. This is not possible in the full tear
down though.

> 
> >  	tlb_finish_mmu(&tlb, 0, -1);
> >  
> >  	/*
> 
> Also, how do you plan to give this thread enough CPU resources, for this thread might
> be SCHED_IDLE priority? Since this thread might not be a thread which is exiting
> (because this is merely a thread which invoked __mmput()), we can't use boosting
> approach. CPU resource might be given eventually unless schedule_timeout_*() is used,
> but it might be deadly slow if allocating threads keep wasting CPU resources.

This is OOM path which is glacial slow path. This is btw. no different
from any other low priority tasks sitting on a lot of memory trying to
release the memory (either by unmapping or exiting). Why should be this
particular case any different?

> Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?

The idea is that the mm is not visible to anybody (except for the oom
reaper) anymore. So MMF_OOM_SKIP shouldn't matter.
Tetsuo Handa Sept. 10, 2018, 3:40 p.m. UTC | #3
On 2018/09/11 0:11, Michal Hocko wrote:
> On Mon 10-09-18 23:59:02, Tetsuo Handa wrote:
>> Thank you for proposing a patch.
>>
>> On 2018/09/10 21:55, Michal Hocko wrote:
>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>> index 5f2b2b1..99bb9ce 100644
>>> --- a/mm/mmap.c
>>> +++ b/mm/mmap.c
>>> @@ -3091,7 +3081,31 @@ void exit_mmap(struct mm_struct *mm)
>>>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>>>  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
>>>  	unmap_vmas(&tlb, vma, 0, -1);
>>
>> unmap_vmas() might involve hugepage path. Is it safe to race with the OOM reaper?
>>
>>   i_mmap_lock_write(vma->vm_file->f_mapping);
>>   __unmap_hugepage_range_final(tlb, vma, start, end, NULL);
>>   i_mmap_unlock_write(vma->vm_file->f_mapping);
> 
> We do not unmap hugetlb pages in the oom reaper.
> 

But the OOM reaper can run while __unmap_hugepage_range_final() is in progress.
Then, I worry an overlooked race similar to clearing VM_LOCKED flag.

> 
>>
>>>  	tlb_finish_mmu(&tlb, 0, -1);
>>>  
>>>  	/*
>>
>> Also, how do you plan to give this thread enough CPU resources, for this thread might
>> be SCHED_IDLE priority? Since this thread might not be a thread which is exiting
>> (because this is merely a thread which invoked __mmput()), we can't use boosting
>> approach. CPU resource might be given eventually unless schedule_timeout_*() is used,
>> but it might be deadly slow if allocating threads keep wasting CPU resources.
> 
> This is OOM path which is glacial slow path. This is btw. no different
> from any other low priority tasks sitting on a lot of memory trying to
> release the memory (either by unmapping or exiting). Why should be this
> particular case any different?
> 

Not a problem if not under OOM situation. Since the OOM killer keeps wasting
CPU resources until memory reclaim completes, we want to solve OOM situation
as soon as possible.

>> Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?
> 
> The idea is that the mm is not visible to anybody (except for the oom
> reaper) anymore. So MMF_OOM_SKIP shouldn't matter.
> 

I think it absolutely matters. The OOM killer waits until MMF_OOM_SKIP is set
on a mm which is visible via task_struct->signal->oom_mm .
Michal Hocko Sept. 10, 2018, 4:44 p.m. UTC | #4
On Tue 11-09-18 00:40:23, Tetsuo Handa wrote:
> On 2018/09/11 0:11, Michal Hocko wrote:
> > On Mon 10-09-18 23:59:02, Tetsuo Handa wrote:
> >> Thank you for proposing a patch.
> >>
> >> On 2018/09/10 21:55, Michal Hocko wrote:
> >>> diff --git a/mm/mmap.c b/mm/mmap.c
> >>> index 5f2b2b1..99bb9ce 100644
> >>> --- a/mm/mmap.c
> >>> +++ b/mm/mmap.c
> >>> @@ -3091,7 +3081,31 @@ void exit_mmap(struct mm_struct *mm)
> >>>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> >>>  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
> >>>  	unmap_vmas(&tlb, vma, 0, -1);
> >>
> >> unmap_vmas() might involve hugepage path. Is it safe to race with the OOM reaper?
> >>
> >>   i_mmap_lock_write(vma->vm_file->f_mapping);
> >>   __unmap_hugepage_range_final(tlb, vma, start, end, NULL);
> >>   i_mmap_unlock_write(vma->vm_file->f_mapping);
> > 
> > We do not unmap hugetlb pages in the oom reaper.
> > 
> 
> But the OOM reaper can run while __unmap_hugepage_range_final() is in progress.
> Then, I worry an overlooked race similar to clearing VM_LOCKED flag.

But VM_HUGETLB is a persistent flag unlike VM_LOCKED IIRC.
 
> >>>  	tlb_finish_mmu(&tlb, 0, -1);
> >>>  
> >>>  	/*
> >>
> >> Also, how do you plan to give this thread enough CPU resources, for this thread might
> >> be SCHED_IDLE priority? Since this thread might not be a thread which is exiting
> >> (because this is merely a thread which invoked __mmput()), we can't use boosting
> >> approach. CPU resource might be given eventually unless schedule_timeout_*() is used,
> >> but it might be deadly slow if allocating threads keep wasting CPU resources.
> > 
> > This is OOM path which is glacial slow path. This is btw. no different
> > from any other low priority tasks sitting on a lot of memory trying to
> > release the memory (either by unmapping or exiting). Why should be this
> > particular case any different?
> > 
> 
> Not a problem if not under OOM situation. Since the OOM killer keeps wasting
> CPU resources until memory reclaim completes, we want to solve OOM situation
> as soon as possible.

OK, it seems that yet again we have a deep disagreement here. The
point of the OOM killer is to get the system out of a desperate
situation. The objective is to free up _some_ memory. Your QoS
concerns are completely off at that moment.
 
> >> Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?
> > 
> > The idea is that the mm is not visible to anybody (except for the oom
> > reaper) anymore. So MMF_OOM_SKIP shouldn't matter.
> > 
> 
> I think it absolutely matters. The OOM killer waits until MMF_OOM_SKIP is set
> on a mm which is visible via task_struct->signal->oom_mm .

Hmm, I have to re-read the exit path once again and see when we unhash
the task and how many dangerous things we do in the mean time. I might
have been overly optimistic and you might be right that we indeed have
to set MMF_OOM_SKIP after all.
Tetsuo Handa Sept. 11, 2018, 2:01 p.m. UTC | #5
On 2018/09/10 21:55, Michal Hocko wrote:
> This is a very coarse implementation of the idea I've had before.
> Please note that I haven't tested it yet. It is mostly to show the
> direction I would wish to go for.

Hmm, this patchset does not allow me to boot. ;-)

        free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
                        FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);

[    1.875675] sched_clock: Marking stable (1810466565, 65169393)->(1977240380, -101604422)
[    1.877833] registered taskstats version 1
[    1.877853] Loading compiled-in X.509 certificates
[    1.878835] zswap: loaded using pool lzo/zbud
[    1.880835] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[    1.881792] PGD 0 P4D 0 
[    1.881812] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[    1.882792] CPU: 1 PID: 121 Comm: modprobe Not tainted 4.19.0-rc3+ #469
[    1.883803] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
[    1.884792] RIP: 0010:exit_mmap+0x122/0x1f0
[    1.884812] Code: 8b 5b 10 48 85 db 75 e7 45 84 e4 48 8b 45 00 0f 85 9a 00 00 00 48 8b 50 18 48 8b 30 48 8d 7c 24 08 45 31 c0 31 c9 48 89 04 24 <48> 8b 52 08 e8 45 3b ff ff 48 8d 7c 24 08 31 f6 48 c7 c2 ff ff ff
[    1.886793] RSP: 0018:ffffc90000897de0 EFLAGS: 00010246
[    1.887812] RAX: ffff88013494fcc0 RBX: 0000000000000000 RCX: 0000000000000000
[    1.888872] RDX: 0000000000000000 RSI: 0000000000400000 RDI: ffffc90000897de8
[    1.889794] RBP: ffff880134950040 R08: 0000000000000000 R09: 0000000000000000
[    1.890792] R10: 0000000000000001 R11: 0000000000081741 R12: 0000000000000000
[    1.891794] R13: ffff8801348fe240 R14: ffff8801348fe928 R15: 0000000000000000
[    1.892836] FS:  0000000000000000(0000) GS:ffff88013b240000(0000) knlGS:0000000000000000
[    1.893792] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    1.894792] CR2: 0000000000000008 CR3: 000000000220f001 CR4: 00000000001606e0
[    1.895797] Call Trace:
[    1.895817]  ? switch_mm_irqs_off+0x2e1/0x870
[    1.895837]  mmput+0x63/0x130
[    1.895857]  do_exit+0x2a7/0xc80
[    1.895877]  ? __do_page_fault+0x219/0x520
[    1.896793]  ? trace_hardirqs_on_thunk+0x1a/0x1c
[    1.896813]  do_group_exit+0x41/0xc0
[    1.896833]  __x64_sys_exit_group+0xf/0x10
[    1.896853]  do_syscall_64+0x4f/0x1f0
[    1.896873]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[    1.896893] RIP: 0033:0x7fa50e122909
[    1.896913] Code: Bad RIP value.
[    1.896933] RSP: 002b:00007fff0fdb96a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[    1.897792] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fa50e122909
[    1.898792] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000000000001
[    1.899795] RBP: 00007fa50e41f838 R08: 000000000000003c R09: 00000000000000e7
[    1.900800] R10: ffffffffffffff70 R11: 0000000000000246 R12: 00007fa50e41f838
[    1.901793] R13: 00007fa50e424e80 R14: 0000000000000000 R15: 0000000000000000
[    1.902796] Modules linked in:
[    1.902816] CR2: 0000000000000008
[    1.902836] ---[ end trace a1a4ea7953190d43 ]---
[    1.902856] RIP: 0010:exit_mmap+0x122/0x1f0
[    1.902876] Code: 8b 5b 10 48 85 db 75 e7 45 84 e4 48 8b 45 00 0f 85 9a 00 00 00 48 8b 50 18 48 8b 30 48 8d 7c 24 08 45 31 c0 31 c9 48 89 04 24 <48> 8b 52 08 e8 45 3b ff ff 48 8d 7c 24 08 31 f6 48 c7 c2 ff ff ff
[    1.905792] RSP: 0018:ffffc90000897de0 EFLAGS: 00010246
[    1.906792] RAX: ffff88013494fcc0 RBX: 0000000000000000 RCX: 0000000000000000
[    1.907799] RDX: 0000000000000000 RSI: 0000000000400000 RDI: ffffc90000897de8
[    1.908837] RBP: ffff880134950040 R08: 0000000000000000 R09: 0000000000000000
[    1.909814] R10: 0000000000000001 R11: 0000000000081741 R12: 0000000000000000
[    1.910812] R13: ffff8801348fe240 R14: ffff8801348fe928 R15: 0000000000000000
[    1.911807] FS:  0000000000000000(0000) GS:ffff88013b240000(0000) knlGS:0000000000000000
[    1.912792] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    1.913820] CR2: 00007fa50e1228df CR3: 000000000220f001 CR4: 00000000001606e0
[    1.914812] Fixing recursive fault but reboot is needed!
[    2.076860] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
[    2.667963] tsc: Refined TSC clocksource calibration: 2793.558 MHz
Tetsuo Handa Sept. 12, 2018, 3:06 a.m. UTC | #6
Michal Hocko wrote:
> On Tue 11-09-18 00:40:23, Tetsuo Handa wrote:
> > >> Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?
> > > 
> > > The idea is that the mm is not visible to anybody (except for the oom
> > > reaper) anymore. So MMF_OOM_SKIP shouldn't matter.
> > > 
> > 
> > I think it absolutely matters. The OOM killer waits until MMF_OOM_SKIP is set
> > on a mm which is visible via task_struct->signal->oom_mm .
> 
> Hmm, I have to re-read the exit path once again and see when we unhash
> the task and how many dangerous things we do in the mean time. I might
> have been overly optimistic and you might be right that we indeed have
> to set MMF_OOM_SKIP after all.

What a foolhardy attempt!

Commit d7a94e7e11badf84 ("oom: don't count on mm-less current process") says

    out_of_memory() doesn't trigger the OOM killer if the current task is
    already exiting or it has fatal signals pending, and gives the task
    access to memory reserves instead.  However, doing so is wrong if
    out_of_memory() is called by an allocation (e.g. from exit_task_work())
    after the current task has already released its memory and cleared
    TIF_MEMDIE at exit_mm().  If we again set TIF_MEMDIE to post-exit_mm()
    current task, the OOM killer will be blocked by the task sitting in the
    final schedule() waiting for its parent to reap it.  It will trigger an
    OOM livelock if its parent is unable to reap it due to doing an
    allocation and waiting for the OOM killer to kill it.

and your

+               /*
+                * the exit path is guaranteed to finish without any unbound
+                * blocking at this stage so make it clear to the caller.
+                */

attempt is essentially same with "we keep TIF_MEMDIE of post-exit_mm() task".

That is, we can't expect that the OOM victim can finish without any unbound
blocking. We have no choice but timeout based heuristic if we don't want to
set MMF_OOM_SKIP even with your customized version of free_pgtables().
Michal Hocko Sept. 12, 2018, 7:18 a.m. UTC | #7
On Wed 12-09-18 12:06:19, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 11-09-18 00:40:23, Tetsuo Handa wrote:
> > > >> Also, why MMF_OOM_SKIP will not be set if the OOM reaper handed over?
> > > > 
> > > > The idea is that the mm is not visible to anybody (except for the oom
> > > > reaper) anymore. So MMF_OOM_SKIP shouldn't matter.
> > > > 
> > > 
> > > I think it absolutely matters. The OOM killer waits until MMF_OOM_SKIP is set
> > > on a mm which is visible via task_struct->signal->oom_mm .
> > 
> > Hmm, I have to re-read the exit path once again and see when we unhash
> > the task and how many dangerous things we do in the mean time. I might
> > have been overly optimistic and you might be right that we indeed have
> > to set MMF_OOM_SKIP after all.
> 
> What a foolhardy attempt!
> 
> Commit d7a94e7e11badf84 ("oom: don't count on mm-less current process") says
> 
>     out_of_memory() doesn't trigger the OOM killer if the current task is
>     already exiting or it has fatal signals pending, and gives the task
>     access to memory reserves instead.  However, doing so is wrong if
>     out_of_memory() is called by an allocation (e.g. from exit_task_work())
>     after the current task has already released its memory and cleared
>     TIF_MEMDIE at exit_mm().  If we again set TIF_MEMDIE to post-exit_mm()
>     current task, the OOM killer will be blocked by the task sitting in the
>     final schedule() waiting for its parent to reap it.  It will trigger an
>     OOM livelock if its parent is unable to reap it due to doing an
>     allocation and waiting for the OOM killer to kill it.
> 
> and your
> 
> +               /*
> +                * the exit path is guaranteed to finish without any unbound
> +                * blocking at this stage so make it clear to the caller.
> +                */

This comment was meant to say that the tear down will not block for
an unbounded amount of time.

> attempt is essentially same with "we keep TIF_MEMDIE of post-exit_mm() task".
> 
> That is, we can't expect that the OOM victim can finish without any unbound
> blocking. We have no choice but timeout based heuristic if we don't want to
> set MMF_OOM_SKIP even with your customized version of free_pgtables().

OK, I will fold the following to the patch

commit e57a1e84db95906e6505de26db896f1b66b5b057
Author: Michal Hocko <mhocko@suse.com>
Date:   Tue Sep 11 13:09:16 2018 +0200

    fold me "mm, oom: hand over MMF_OOM_SKIP to exit path if it is guaranteed to finish"
    
    - the task is still visible to the OOM killer after exit_mmap terminates
      so we should set MMF_OOM_SKIP from that path to be sure the oom killer
      doesn't get stuck on this task (see d7a94e7e11badf84 for more context)
      - per Tetsuo

diff --git a/mm/mmap.c b/mm/mmap.c
index 99bb9ce29bc5..64e8ccce5282 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3097,8 +3097,9 @@ void exit_mmap(struct mm_struct *mm)
 	vma = mm->mmap;
 	if (oom) {
 		/*
-		 * the exit path is guaranteed to finish without any unbound
-		 * blocking at this stage so make it clear to the caller.
+		 * the exit path is guaranteed to finish the memory tear down
+		 * without any unbound blocking at this stage so make it clear
+		 * to the oom_reaper
 		 */
 		mm->mmap = NULL;
 		up_write(&mm->mmap_sem);
@@ -3118,6 +3119,13 @@ void exit_mmap(struct mm_struct *mm)
 		vma = remove_vma(vma);
 	}
 	vm_unacct_memory(nr_accounted);
+
+	/*
+	 * Now that the full address space is torn down, make sure the
+	 * OOM killer skips over this task
+	 */
+	if (oom)
+		set_bit(MMF_OOM_SKIP, &mm->flags);
 }
 
 /* Insert vm structure into process list sorted by address
Michal Hocko Sept. 12, 2018, 7:50 a.m. UTC | #8
On Tue 11-09-18 23:01:57, Tetsuo Handa wrote:
> On 2018/09/10 21:55, Michal Hocko wrote:
> > This is a very coarse implementation of the idea I've had before.
> > Please note that I haven't tested it yet. It is mostly to show the
> > direction I would wish to go for.
> 
> Hmm, this patchset does not allow me to boot. ;-)
> 
>         free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
>                         FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> 
> [    1.875675] sched_clock: Marking stable (1810466565, 65169393)->(1977240380, -101604422)
> [    1.877833] registered taskstats version 1
> [    1.877853] Loading compiled-in X.509 certificates
> [    1.878835] zswap: loaded using pool lzo/zbud
> [    1.880835] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008

This is vm_prev == NULL. I thought we always have vm_prev as long as
this is not a single VMA in the address space. I will double check this.
Tetsuo Handa Sept. 12, 2018, 7:58 a.m. UTC | #9
Michal Hocko wrote:
> OK, I will fold the following to the patch

OK. But at that point, my patch which tries to wait for reclaimed memory
to be re-allocatable addresses a different problem which you are refusing.



By the way, is it guaranteed that vma->vm_ops->close(vma) in remove_vma() never
sleeps? Since remove_vma() has might_sleep() since 2005, and that might_sleep()
predates the git history, I don't know what that ->close() would do.

Anyway, please fix free_pgd_range() crash in this patchset.
Michal Hocko Sept. 12, 2018, 8:17 a.m. UTC | #10
On Wed 12-09-18 16:58:53, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > OK, I will fold the following to the patch
> 
> OK. But at that point, my patch which tries to wait for reclaimed memory
> to be re-allocatable addresses a different problem which you are refusing.

I am trying to address a real world example of when the excessive amount
of memory is in page tables. As David pointed, this can happen with some
userspace allocators.

> By the way, is it guaranteed that vma->vm_ops->close(vma) in remove_vma() never
> sleeps? Since remove_vma() has might_sleep() since 2005, and that might_sleep()
> predates the git history, I don't know what that ->close() would do.

Hmm, I am afraid we cannot assume anything so we have to consider it
unsafe. A cursory look at some callers shows that they are taking locks.
E.g. drm_gem_object_put_unlocked might take a mutex. So MMF_OOM_SKIP
would have to set right after releasing page tables.

> Anyway, please fix free_pgd_range() crash in this patchset.

I will try to get to this later today.
Tetsuo Handa Sept. 12, 2018, 10:59 a.m. UTC | #11
On 2018/09/12 17:17, Michal Hocko wrote:
> On Wed 12-09-18 16:58:53, Tetsuo Handa wrote:
>> Michal Hocko wrote:
>>> OK, I will fold the following to the patch
>>
>> OK. But at that point, my patch which tries to wait for reclaimed memory
>> to be re-allocatable addresses a different problem which you are refusing.
> 
> I am trying to address a real world example of when the excessive amount
> of memory is in page tables. As David pointed, this can happen with some
> userspace allocators.

My patch or David's patch will address it as well, without scattering
down_write(&mm->mmap_sem)/up_write(&mm->mmap_sem) like your attempt.

> 
>> By the way, is it guaranteed that vma->vm_ops->close(vma) in remove_vma() never
>> sleeps? Since remove_vma() has might_sleep() since 2005, and that might_sleep()
>> predates the git history, I don't know what that ->close() would do.
> 
> Hmm, I am afraid we cannot assume anything so we have to consider it
> unsafe. A cursory look at some callers shows that they are taking locks.
> E.g. drm_gem_object_put_unlocked might take a mutex. So MMF_OOM_SKIP
> would have to set right after releasing page tables.

I won't be happy unless handed over section can run in atomic context
(e.g. preempt_disable()/preempt_enable()) because current thread might be
SCHED_IDLE priority.

If current thread is SCHED_IDLE priority, it might be difficult to hand over
because current thread is unlikely able to reach

+	if (oom) {
+		/*
+		 * the exit path is guaranteed to finish without any unbound
+		 * blocking at this stage so make it clear to the caller.
+		 */
+		mm->mmap = NULL;
+		up_write(&mm->mmap_sem);
+	}

before the OOM reaper kernel thread (which is not SCHED_IDLE priority) checks
whether mm->mmap is already NULL.

Honestly, I'm not sure whether current thread (even !SCHED_IDLE priority) can
reach there before the OOM killer checks whether mm->mmap is already NULL, for
current thread has to do more things than the OOM reaper can do.

Also, in the worst case,

+                               /*
+                                * oom_reaper cannot handle mlocked vmas but we
+                                * need to serialize it with munlock_vma_pages_all
+                                * which clears VM_LOCKED, otherwise the oom reaper
+                                * cannot reliably test it.
+                                */
+                               if (oom)
+                                       down_write(&mm->mmap_sem);

would cause the OOM reaper to set MMF_OOM_SKIP without reclaiming any memory
if munlock_vma_pages_all(vma) by current thread did not complete quick enough
to make down_read_trylock(&mm->mmap_sem) attempt by the OOM reaper succeed.
Michal Hocko Sept. 12, 2018, 11:22 a.m. UTC | #12
On Wed 12-09-18 19:59:24, Tetsuo Handa wrote:
> On 2018/09/12 17:17, Michal Hocko wrote:
> > On Wed 12-09-18 16:58:53, Tetsuo Handa wrote:
> >> Michal Hocko wrote:
> >>> OK, I will fold the following to the patch
> >>
> >> OK. But at that point, my patch which tries to wait for reclaimed memory
> >> to be re-allocatable addresses a different problem which you are refusing.
> > 
> > I am trying to address a real world example of when the excessive amount
> > of memory is in page tables. As David pointed, this can happen with some
> > userspace allocators.
> 
> My patch or David's patch will address it as well, without scattering
> down_write(&mm->mmap_sem)/up_write(&mm->mmap_sem) like your attempt.

Based on a timeout. I am trying to fit the fix into an existing retry
logic and it seems this is possible.
 
> >> By the way, is it guaranteed that vma->vm_ops->close(vma) in remove_vma() never
> >> sleeps? Since remove_vma() has might_sleep() since 2005, and that might_sleep()
> >> predates the git history, I don't know what that ->close() would do.
> > 
> > Hmm, I am afraid we cannot assume anything so we have to consider it
> > unsafe. A cursory look at some callers shows that they are taking locks.
> > E.g. drm_gem_object_put_unlocked might take a mutex. So MMF_OOM_SKIP
> > would have to set right after releasing page tables.
> 
> I won't be happy unless handed over section can run in atomic context
> (e.g. preempt_disable()/preempt_enable()) because current thread might be
> SCHED_IDLE priority.
> 
> If current thread is SCHED_IDLE priority, it might be difficult to hand over
> because current thread is unlikely able to reach
> 
> +	if (oom) {
> +		/*
> +		 * the exit path is guaranteed to finish without any unbound
> +		 * blocking at this stage so make it clear to the caller.
> +		 */
> +		mm->mmap = NULL;
> +		up_write(&mm->mmap_sem);
> +	}
> 
> before the OOM reaper kernel thread (which is not SCHED_IDLE priority) checks
> whether mm->mmap is already NULL.
> 
> Honestly, I'm not sure whether current thread (even !SCHED_IDLE priority) can
> reach there before the OOM killer checks whether mm->mmap is already NULL, for
> current thread has to do more things than the OOM reaper can do.
> 
> Also, in the worst case,
> 
> +                               /*
> +                                * oom_reaper cannot handle mlocked vmas but we
> +                                * need to serialize it with munlock_vma_pages_all
> +                                * which clears VM_LOCKED, otherwise the oom reaper
> +                                * cannot reliably test it.
> +                                */
> +                               if (oom)
> +                                       down_write(&mm->mmap_sem);
> 
> would cause the OOM reaper to set MMF_OOM_SKIP without reclaiming any memory
> if munlock_vma_pages_all(vma) by current thread did not complete quick enough
> to make down_read_trylock(&mm->mmap_sem) attempt by the OOM reaper succeed.

Which is kind of an inherent problem, isn't it? Unless we add some
sort of priority inheritance this will always be the case. And this is
by far not OOM specific; it is the full memory reclaim path. So I
really do not see this as an argument. More importantly, no
timeout-based solution will handle it any better.

As I've said countless times, I do not really consider this an
interesting case until we see actual real-world workloads suffering
from this problem.

So can we focus on the practical problems please?
Michal Hocko Sept. 12, 2018, 1:42 p.m. UTC | #13
On Wed 12-09-18 09:50:54, Michal Hocko wrote:
> On Tue 11-09-18 23:01:57, Tetsuo Handa wrote:
> > On 2018/09/10 21:55, Michal Hocko wrote:
> > > This is a very coarse implementation of the idea I've had before.
> > > Please note that I haven't tested it yet. It is mostly to show the
> > > direction I would wish to go for.
> > 
> > Hmm, this patchset does not allow me to boot. ;-)
> > 
> >         free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> >                         FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> > 
> > [    1.875675] sched_clock: Marking stable (1810466565, 65169393)->(1977240380, -101604422)
> > [    1.877833] registered taskstats version 1
> > [    1.877853] Loading compiled-in X.509 certificates
> > [    1.878835] zswap: loaded using pool lzo/zbud
> > [    1.880835] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> 
> This is vm_prev == NULL. I thought we always have vm_prev as long as
> this is not a single VMA in the address space. I will double check this.

So this is me misunderstanding the code. vm_next, vm_prev do not form a
full doubly linked list; the first entry's vm_prev does not refer to the
last entry. So the above cannot work at all. We can work around this in
two ways: either keep the iteration, or use the following, which should
cover the full mapped range, unless I am missing something

diff --git a/mm/mmap.c b/mm/mmap.c
index 64e8ccce5282..078295344a17 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3105,7 +3105,7 @@ void exit_mmap(struct mm_struct *mm)
 		up_write(&mm->mmap_sem);
 	}
 
-	free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
+	free_pgd_range(&tlb, vma->vm_start, mm->highest_vm_end,
 			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb, 0, -1);
Tetsuo Handa Sept. 13, 2018, 2:44 a.m. UTC | #14
On 2018/09/12 22:42, Michal Hocko wrote:
> On Wed 12-09-18 09:50:54, Michal Hocko wrote:
>> On Tue 11-09-18 23:01:57, Tetsuo Handa wrote:
>>> On 2018/09/10 21:55, Michal Hocko wrote:
>>>> This is a very coarse implementation of the idea I've had before.
>>>> Please note that I haven't tested it yet. It is mostly to show the
>>>> direction I would wish to go for.
>>>
>>> Hmm, this patchset does not allow me to boot. ;-)
>>>
>>>         free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
>>>                         FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
>>>
>>> [    1.875675] sched_clock: Marking stable (1810466565, 65169393)->(1977240380, -101604422)
>>> [    1.877833] registered taskstats version 1
>>> [    1.877853] Loading compiled-in X.509 certificates
>>> [    1.878835] zswap: loaded using pool lzo/zbud
>>> [    1.880835] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>>
>> This is vm_prev == NULL. I thought we always have vm_prev as long as
>> this is not a single VMA in the address space. I will double check this.
> 
> So this is me misunderstanding the code. vm_next, vm_prev are not a full
> doubly linked list. The first entry doesn't really refer to the last
> entry. So the above cannot work at all. We can go around this in two
> ways. Either keep the iteration or use the following which should cover
> the full mapped range, unless I am missing something
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 64e8ccce5282..078295344a17 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3105,7 +3105,7 @@ void exit_mmap(struct mm_struct *mm)
>  		up_write(&mm->mmap_sem);
>  	}
>  
> -	free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> +	free_pgd_range(&tlb, vma->vm_start, mm->highest_vm_end,
>  			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
>  	tlb_finish_mmu(&tlb, 0, -1);
>  

This is bad because architectures where hugetlb_free_pgd_range() does
more than free_pgd_range() need to check VM_HUGETLB flag for each "vma".
Thus, I think we need to keep the iteration.
Michal Hocko Sept. 13, 2018, 9:09 a.m. UTC | #15
On Thu 13-09-18 11:44:03, Tetsuo Handa wrote:
> On 2018/09/12 22:42, Michal Hocko wrote:
> > On Wed 12-09-18 09:50:54, Michal Hocko wrote:
> >> On Tue 11-09-18 23:01:57, Tetsuo Handa wrote:
> >>> On 2018/09/10 21:55, Michal Hocko wrote:
> >>>> This is a very coarse implementation of the idea I've had before.
> >>>> Please note that I haven't tested it yet. It is mostly to show the
> >>>> direction I would wish to go for.
> >>>
> >>> Hmm, this patchset does not allow me to boot. ;-)
> >>>
> >>>         free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> >>>                         FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> >>>
> >>> [    1.875675] sched_clock: Marking stable (1810466565, 65169393)->(1977240380, -101604422)
> >>> [    1.877833] registered taskstats version 1
> >>> [    1.877853] Loading compiled-in X.509 certificates
> >>> [    1.878835] zswap: loaded using pool lzo/zbud
> >>> [    1.880835] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> >>
> >> This is vm_prev == NULL. I thought we always have vm_prev as long as
> >> this is not a single VMA in the address space. I will double check this.
> > 
> > So this is me misunderstanding the code. vm_next, vm_prev are not a full
> > doubly linked list. The first entry doesn't really refer to the last
> > entry. So the above cannot work at all. We can go around this in two
> > ways. Either keep the iteration or use the following which should cover
> > the full mapped range, unless I am missing something
> > 
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 64e8ccce5282..078295344a17 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -3105,7 +3105,7 @@ void exit_mmap(struct mm_struct *mm)
> >  		up_write(&mm->mmap_sem);
> >  	}
> >  
> > -	free_pgd_range(&tlb, vma->vm_start, vma->vm_prev->vm_end,
> > +	free_pgd_range(&tlb, vma->vm_start, mm->highest_vm_end,
> >  			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> >  	tlb_finish_mmu(&tlb, 0, -1);
> >  
> 
> This is bad because architectures where hugetlb_free_pgd_range() does
> more than free_pgd_range() need to check VM_HUGETLB flag for each "vma".
> Thus, I think we need to keep the iteration.

Fair point. I have looked more closely and most of them simply redirect
to free_pgd_range, but ppc and sparc are doing some pretty involved
tricks which we cannot really skip. So I will go and split
free_pgtables into two phases and keep the per-vma loops. Here is the
incremental update on top:

commit e568c3f34e11c1a7abb4fe6f26e51eb8f60620c3
Author: Michal Hocko <mhocko@suse.com>
Date:   Thu Sep 13 11:08:00 2018 +0200

    fold me "mm, oom: hand over MMF_OOM_SKIP to exit path if it is guaranteed to finish"
    
    - split free_pgtables into unlinking and actual freeing part. We cannot
      rely on free_pgd_range because of hugetlb pages on ppc resp. sparc
      which do their own tear down

diff --git a/mm/internal.h b/mm/internal.h
index 87256ae1bef8..35adbfec4935 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,9 @@ void page_writeback_init(void);
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 
+void __unlink_vmas(struct vm_area_struct *vma);
+void __free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
+		unsigned long floor, unsigned long ceiling);
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..cf910ed5f283 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -612,20 +612,23 @@ void free_pgd_range(struct mmu_gather *tlb,
 	} while (pgd++, addr = next, addr != end);
 }
 
-void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+void __unlink_vmas(struct vm_area_struct *vma)
+{
+	while (vma) {
+		unlink_anon_vmas(vma);
+		unlink_file_vma(vma);
+		vma = vma->vm_next;
+	}
+}
+
+/* expects that __unlink_vmas has been called already */
+void __free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		unsigned long floor, unsigned long ceiling)
 {
 	while (vma) {
 		struct vm_area_struct *next = vma->vm_next;
 		unsigned long addr = vma->vm_start;
 
-		/*
-		 * Hide vma from rmap and truncate_pagecache before freeing
-		 * pgtables
-		 */
-		unlink_anon_vmas(vma);
-		unlink_file_vma(vma);
-
 		if (is_vm_hugetlb_page(vma)) {
 			hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
@@ -637,8 +640,6 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
-				unlink_anon_vmas(vma);
-				unlink_file_vma(vma);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
@@ -647,6 +648,13 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 }
 
+void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		unsigned long floor, unsigned long ceiling)
+{
+	__unlink_vmas(vma);
+	__free_pgtables(tlb, vma, floor, ceiling);
+}
+
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address)
 {
 	spinlock_t *ptl;
diff --git a/mm/mmap.c b/mm/mmap.c
index 078295344a17..f4b562e21764 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3082,20 +3082,14 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 
-	/* oom_reaper cannot race with the page tables teardown */
-	if (oom)
-		down_write(&mm->mmap_sem);
 	/*
-	 * Hide vma from rmap and truncate_pagecache before freeing
-	 * pgtables
+	 * oom_reaper cannot race with the page tables teardown but we
+	 * want to make sure that the exit path can take over the full
+	 * tear down when it is safe to do so
 	 */
-	while (vma) {
-		unlink_anon_vmas(vma);
-		unlink_file_vma(vma);
-		vma = vma->vm_next;
-	}
-	vma = mm->mmap;
 	if (oom) {
+		down_write(&mm->mmap_sem);
+		__unlink_vmas(vma);
 		/*
 		 * the exit path is guaranteed to finish the memory tear down
 		 * without any unbound blocking at this stage so make it clear
@@ -3103,10 +3097,11 @@ void exit_mmap(struct mm_struct *mm)
 		 */
 		mm->mmap = NULL;
 		up_write(&mm->mmap_sem);
+		__free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+	} else {
+		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	}
 
-	free_pgd_range(&tlb, vma->vm_start, mm->highest_vm_end,
-			FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb, 0, -1);
 
 	/*
Tetsuo Handa Sept. 13, 2018, 11:20 a.m. UTC | #16
On 2018/09/13 18:09, Michal Hocko wrote:
>> This is bad because architectures where hugetlb_free_pgd_range() does
>> more than free_pgd_range() need to check VM_HUGETLB flag for each "vma".
>> Thus, I think we need to keep the iteration.
> 
> Fair point. I have looked more closely and most of them simply redirect
> to free_pgd_range but ppc and sparc are doing some pretty involved
> tricks which we cannot really skip. So I will go and split
> free_pgtables into two phases and keep per vma loops. So this
> incremental update on top

Next question.

        /* Use -1 here to ensure all VMAs in the mm are unmapped */
        unmap_vmas(&tlb, vma, 0, -1);

in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
Then, is it guaranteed to be safe if the OOM reaper races with unmap_vmas()?



By the way, there is a potential bug in hugepd_free() in arch/powerpc/mm/hugetlbpage.c

        if (*batchp == NULL) {
                *batchp = (struct hugepd_freelist *)__get_free_page(GFP_ATOMIC);
                (*batchp)->index = 0;
        }

because the GFP_ATOMIC allocation might fail (for example while ALLOC_OOM
allocations are in progress) and the result is dereferenced without a NULL
check?
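
To illustrate, the pattern and one possible defensive fallback can be modeled
in userspace C (the fall-back-to-immediate-free strategy and all names here
are illustrative assumptions for the sketch, not the actual powerpc code):

```c
#include <stdlib.h>

/* Userspace model of the hugepd_free() batching pattern: entries are
 * queued into a lazily allocated batch.  The kernel code dereferences
 * the GFP_ATOMIC allocation without checking it; this sketch instead
 * falls back to freeing the entry immediately when the allocation
 * fails, rather than crashing on a NULL pointer. */
struct freelist {
	unsigned int index;
	void *ptrs[8];
};

/* Returns 1 if p was batched, 0 if it had to be freed immediately.
 * simulate_alloc_failure stands in for GFP_ATOMIC failing under
 * memory pressure. */
static int batched_free(struct freelist **batchp, void *p,
			int simulate_alloc_failure)
{
	if (*batchp == NULL) {
		*batchp = simulate_alloc_failure ?
			NULL : calloc(1, sizeof(**batchp));
		if (*batchp == NULL) {
			free(p);	/* fall back, do not deref NULL */
			return 0;
		}
	}
	(*batchp)->ptrs[(*batchp)->index++] = p;
	return 1;
}
```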
Michal Hocko Sept. 13, 2018, 11:35 a.m. UTC | #17
On Thu 13-09-18 20:20:12, Tetsuo Handa wrote:
> On 2018/09/13 18:09, Michal Hocko wrote:
> >> This is bad because architectures where hugetlb_free_pgd_range() does
> >> more than free_pgd_range() need to check VM_HUGETLB flag for each "vma".
> >> Thus, I think we need to keep the iteration.
> > 
> > Fair point. I have looked more closely and most of them simply redirect
> > to free_pgd_range but ppc and sparc are doing some pretty involved
> > tricks which we cannot really skip. So I will go and split
> > free_pgtables into two phases and keep per vma loops. So this
> > incremental update on top
> 
> Next question.
> 
>         /* Use -1 here to ensure all VMAs in the mm are unmapped */
>         unmap_vmas(&tlb, vma, 0, -1);
> 
> in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
> VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
> Then, is it guaranteed to be safe if the OOM reaper raced with unmap_vmas() ?

I do not understand the question. unmap_vmas is basically MADV_DONTNEED
and that doesn't require the exclusive mmap_sem lock, so yes, it should be
safe for those two to race (modulo bugs of course, but I am not aware of
any there).
 
> By the way, there is a potential bug in hugepd_free() in arch/powerpc/mm/hugetlbpage.c
> 
>         if (*batchp == NULL) {
>                 *batchp = (struct hugepd_freelist *)__get_free_page(GFP_ATOMIC);
>                 (*batchp)->index = 0;
>         }
> 
> because GFP_ATOMIC allocation might fail if ALLOC_OOM allocations are in progress?

I am not familiar with that code, so I would recommend asking the
maintainers.
Tetsuo Handa Sept. 13, 2018, 11:53 a.m. UTC | #18
On 2018/09/13 20:35, Michal Hocko wrote:
>> Next question.
>>
>>         /* Use -1 here to ensure all VMAs in the mm are unmapped */
>>         unmap_vmas(&tlb, vma, 0, -1);
>>
>> in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
>> VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
>> Then, is it guaranteed to be safe if the OOM reaper raced with unmap_vmas() ?
> 
> I do not understand the question. unmap_vmas is basically MADV_DONTNEED
> and that doesn't require the exclusive mmap_sem lock so yes it should be
> safe those two to race (modulo bugs of course but I am not aware of any
> there).
>  

You need to verify that races we observed with VM_LOCKED can't happen
for VM_HUGETLB / VM_PFNMAP / VM_SHARED / !vma_is_anonymous() cases.

                for (vma = mm->mmap; vma; vma = vma->vm_next) {
                        if (!(vma->vm_flags & VM_LOCKED))
                                continue;
                        /*
                         * oom_reaper cannot handle mlocked vmas but we
                         * need to serialize it with munlock_vma_pages_all
                         * which clears VM_LOCKED, otherwise the oom reaper
                         * cannot reliably test it.
                         */
                        if (oom)
                                down_write(&mm->mmap_sem);

                        munlock_vma_pages_all(vma);

                        if (oom)
                                up_write(&mm->mmap_sem);
                }

Without enough comments, future changes might overlook the assumption.
Michal Hocko Sept. 13, 2018, 1:40 p.m. UTC | #19
On Thu 13-09-18 20:53:24, Tetsuo Handa wrote:
> On 2018/09/13 20:35, Michal Hocko wrote:
> >> Next question.
> >>
> >>         /* Use -1 here to ensure all VMAs in the mm are unmapped */
> >>         unmap_vmas(&tlb, vma, 0, -1);
> >>
> >> in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
> >> VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
> >> Then, is it guaranteed to be safe if the OOM reaper raced with unmap_vmas() ?
> > 
> > I do not understand the question. unmap_vmas is basically MADV_DONTNEED
> > and that doesn't require the exclusive mmap_sem lock so yes it should be
> > safe those two to race (modulo bugs of course but I am not aware of any
> > there).
> >  
> 
> You need to verify that races we observed with VM_LOCKED can't happen
> for VM_HUGETLB / VM_PFNMAP / VM_SHARED / !vma_is_anonymous() cases.

Well, VM_LOCKED is kind of special because it is not a permanent state;
it might change. VM_HUGETLB, VM_PFNMAP and VM_SHARED, on the other hand,
do not change throughout the vma lifetime.
Tetsuo Handa Sept. 14, 2018, 1:54 p.m. UTC | #20
On 2018/09/13 22:40, Michal Hocko wrote:
> On Thu 13-09-18 20:53:24, Tetsuo Handa wrote:
>> On 2018/09/13 20:35, Michal Hocko wrote:
>>>> Next question.
>>>>
>>>>         /* Use -1 here to ensure all VMAs in the mm are unmapped */
>>>>         unmap_vmas(&tlb, vma, 0, -1);
>>>>
>>>> in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
>>>> VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
>>>> Then, is it guaranteed to be safe if the OOM reaper raced with unmap_vmas() ?
>>>
>>> I do not understand the question. unmap_vmas is basically MADV_DONTNEED
>>> and that doesn't require the exclusive mmap_sem lock so yes it should be
>>> safe those two to race (modulo bugs of course but I am not aware of any
>>> there).
>>>  
>>
>> You need to verify that races we observed with VM_LOCKED can't happen
>> for VM_HUGETLB / VM_PFNMAP / VM_SHARED / !vma_is_anonymous() cases.
> 
> Well, VM_LOCKED is kind of special because that is not a permanent state
> which might change. VM_HUGETLB, VM_PFNMAP resp VM_SHARED are not changed
> throughout the vma lifetime.
> 
OK, next question.
Is it guaranteed that arch_exit_mmap(mm) is safe with the OOM reaper?



Well, anyway, diffstat of your proposal would be

 include/linux/oom.h |  2 --
 mm/internal.h       |  3 +++
 mm/memory.c         | 28 ++++++++++++--------
 mm/mmap.c           | 73 +++++++++++++++++++++++++++++++----------------------
 mm/oom_kill.c       | 46 ++++++++++++++++++++++++---------
 5 files changed, 98 insertions(+), 54 deletions(-)

trying to hand over only __free_pgtables() part at the risk of
setting MMF_OOM_SKIP without reclaiming any memory due to dropping
__oom_reap_task_mm() and scattering down_write()/up_write() inside
exit_mmap(), while diffstat of my proposal (not tested yet) would be

 include/linux/mm_types.h |   2 +
 include/linux/oom.h      |   3 +-
 include/linux/sched.h    |   2 +-
 kernel/fork.c            |  11 +++
 mm/mmap.c                |  42 ++++-------
 mm/oom_kill.c            | 182 ++++++++++++++++++++++-------------------------
 6 files changed, 117 insertions(+), 125 deletions(-)

trying to wait until __mmput() completes and also trying to handle
multiple OOM victims in parallel.

You are refusing the timeout based approach, but I don't think we should
risk overlooking races/bugs just because you don't want to use a timeout.
And you have never shown that a timeout based approach cannot work well
enough.



Michal's proposal:

diff --git a/include/linux/oom.h b/include/linux/oom.h
index 69864a5..11e26ca 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -95,8 +95,6 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
 	return 0;
 }
 
-bool __oom_reap_task_mm(struct mm_struct *mm);
-
 extern unsigned long oom_badness(struct task_struct *p,
 		struct mem_cgroup *memcg, const nodemask_t *nodemask,
 		unsigned long totalpages);
diff --git a/mm/internal.h b/mm/internal.h
index 87256ae..35adbfe 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,6 +40,9 @@
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 
+void __unlink_vmas(struct vm_area_struct *vma);
+void __free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
+		unsigned long floor, unsigned long ceiling);
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/memory.c b/mm/memory.c
index c467102..cf910ed 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -612,20 +612,23 @@ void free_pgd_range(struct mmu_gather *tlb,
 	} while (pgd++, addr = next, addr != end);
 }
 
-void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+void __unlink_vmas(struct vm_area_struct *vma)
+{
+	while (vma) {
+		unlink_anon_vmas(vma);
+		unlink_file_vma(vma);
+		vma = vma->vm_next;
+	}
+}
+
+/* expects that __unlink_vmas has been called already */
+void __free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		unsigned long floor, unsigned long ceiling)
 {
 	while (vma) {
 		struct vm_area_struct *next = vma->vm_next;
 		unsigned long addr = vma->vm_start;
 
-		/*
-		 * Hide vma from rmap and truncate_pagecache before freeing
-		 * pgtables
-		 */
-		unlink_anon_vmas(vma);
-		unlink_file_vma(vma);
-
 		if (is_vm_hugetlb_page(vma)) {
 			hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
@@ -637,8 +640,6 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
 				next = vma->vm_next;
-				unlink_anon_vmas(vma);
-				unlink_file_vma(vma);
 			}
 			free_pgd_range(tlb, addr, vma->vm_end,
 				floor, next ? next->vm_start : ceiling);
@@ -647,6 +648,13 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 }
 
+void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		unsigned long floor, unsigned long ceiling)
+{
+	__unlink_vmas(vma);
+	__free_pgtables(tlb, vma, floor, ceiling);
+}
+
 int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address)
 {
 	spinlock_t *ptl;
diff --git a/mm/mmap.c b/mm/mmap.c
index 5f2b2b1..67bd8a0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3042,40 +3042,26 @@ void exit_mmap(struct mm_struct *mm)
 	struct mmu_gather tlb;
 	struct vm_area_struct *vma;
 	unsigned long nr_accounted = 0;
+	const bool oom = mm_is_oom_victim(mm);
 
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
 
-	if (unlikely(mm_is_oom_victim(mm))) {
-		/*
-		 * Manually reap the mm to free as much memory as possible.
-		 * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard
-		 * this mm from further consideration.  Taking mm->mmap_sem for
-		 * write after setting MMF_OOM_SKIP will guarantee that the oom
-		 * reaper will not run on this mm again after mmap_sem is
-		 * dropped.
-		 *
-		 * Nothing can be holding mm->mmap_sem here and the above call
-		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
-		 * __oom_reap_task_mm() will not block.
-		 *
-		 * This needs to be done before calling munlock_vma_pages_all(),
-		 * which clears VM_LOCKED, otherwise the oom reaper cannot
-		 * reliably test it.
-		 */
-		(void)__oom_reap_task_mm(mm);
-
-		set_bit(MMF_OOM_SKIP, &mm->flags);
-		down_write(&mm->mmap_sem);
-		up_write(&mm->mmap_sem);
-	}
-
 	if (mm->locked_vm) {
-		vma = mm->mmap;
-		while (vma) {
-			if (vma->vm_flags & VM_LOCKED)
-				munlock_vma_pages_all(vma);
-			vma = vma->vm_next;
+		for (vma = mm->mmap; vma; vma = vma->vm_next) {
+			if (!(vma->vm_flags & VM_LOCKED))
+				continue;
+			/*
+			 * oom_reaper cannot handle mlocked vmas but we
+			 * need to serialize it with munlock_vma_pages_all
+			 * which clears VM_LOCKED, otherwise the oom reaper
+			 * cannot reliably test it.
+			 */
+			if (oom)
+				down_write(&mm->mmap_sem);
+			munlock_vma_pages_all(vma);
+			if (oom)
+				up_write(&mm->mmap_sem);
 		}
 	}
 
@@ -3091,10 +3077,37 @@ void exit_mmap(struct mm_struct *mm)
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
-	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+
+	/*
+	 * oom_reaper cannot race with the page tables teardown but we
+	 * want to make sure that the exit path can take over the full
+	 * tear down when it is safe to do so
+	 */
+	if (oom) {
+		down_write(&mm->mmap_sem);
+		__unlink_vmas(vma);
+		/*
+		 * the exit path is guaranteed to finish the memory tear down
+		 * without any unbound blocking at this stage so make it clear
+		 * to the oom_reaper
+		 */
+		mm->mmap = NULL;
+		up_write(&mm->mmap_sem);
+		__free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+	} else {
+		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+	}
+
 	tlb_finish_mmu(&tlb, 0, -1);
 
 	/*
+	 * Now that the full address space is torn down, make sure the
+	 * OOM killer skips over this task
+	 */
+	if (oom)
+		set_bit(MMF_OOM_SKIP, &mm->flags);
+
+	/*
 	 * Walk the list again, actually closing and freeing it,
 	 * with preemption enabled, without holding any MM locks.
 	 */
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f10aa53..abddcde 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -189,6 +189,16 @@ static bool is_dump_unreclaim_slabs(void)
 	return (global_node_page_state(NR_SLAB_UNRECLAIMABLE) > nr_lru);
 }
 
+/*
+ * Rough memory consumption of the given mm which should be theoretically freed
+ * when the mm is removed.
+ */
+static unsigned long oom_badness_pages(struct mm_struct *mm)
+{
+	return get_mm_rss(mm) + get_mm_counter(mm, MM_SWAPENTS) +
+		mm_pgtables_bytes(mm) / PAGE_SIZE;
+}
+
 /**
  * oom_badness - heuristic function to determine which candidate task to kill
  * @p: task struct of which task we should calculate
@@ -230,8 +240,7 @@ unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
 	 * The baseline for the badness score is the proportion of RAM that each
 	 * task's rss, pagetable and swap space use.
 	 */
-	points = get_mm_rss(p->mm) + get_mm_counter(p->mm, MM_SWAPENTS) +
-		mm_pgtables_bytes(p->mm) / PAGE_SIZE;
+	points = oom_badness_pages(p->mm);
 	task_unlock(p);
 
 	/* Normalize to oom_score_adj units */
@@ -483,12 +492,11 @@ bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
  * OOM Reaper kernel thread which tries to reap the memory used by the OOM
  * victim (if that is possible) to help the OOM killer to move on.
  */
-static struct task_struct *oom_reaper_th;
 static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
 static struct task_struct *oom_reaper_list;
 static DEFINE_SPINLOCK(oom_reaper_lock);
 
-bool __oom_reap_task_mm(struct mm_struct *mm)
+static bool __oom_reap_task_mm(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 	bool ret = true;
@@ -501,7 +509,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
+	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
 
@@ -532,6 +540,16 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 		}
 	}
 
+	/*
+	 * If we still sit on a noticeable amount of memory even after successfully
+	 * reaping the address space then keep retrying until exit_mmap makes some
+	 * further progress.
+	 * TODO: add a flag for a stage when the exit path doesn't block anymore
+	 * and hand over MMF_OOM_SKIP handling there in that case
+	 */
+	if (ret && oom_badness_pages(mm) > 1024)
+		ret = false;
+
 	return ret;
 }
 
@@ -551,12 +569,10 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	}
 
 	/*
-	 * MMF_OOM_SKIP is set by exit_mmap when the OOM reaper can't
-	 * work on the mm anymore. The check for MMF_OOM_SKIP must run
-	 * under mmap_sem for reading because it serializes against the
-	 * down_write();up_write() cycle in exit_mmap().
+	 * If the exit path cleared mm->mmap then we know it will finish the
+	 * tear down and we can simply bail out here.
 	 */
-	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
+	if (!mm->mmap) {
 		trace_skip_task_reaping(tsk->pid);
 		goto out_unlock;
 	}
@@ -605,8 +621,14 @@ static void oom_reap_task(struct task_struct *tsk)
 	/*
 	 * Hide this mm from OOM killer because it has been either reaped or
 	 * somebody can't call up_write(mmap_sem).
+	 * Leave MMF_OOM_SKIP to the exit path if it managed to reach the
+	 * point where it is guaranteed to finish without any blocking
 	 */
-	set_bit(MMF_OOM_SKIP, &mm->flags);
+	if (mm->mmap)
+		set_bit(MMF_OOM_SKIP, &mm->flags);
+	else if (!test_bit(MMF_OOM_SKIP, &mm->flags))
+		pr_info("oom_reaper: handed over pid:%d (%s) to exit path\n",
+			task_pid_nr(tsk), tsk->comm);
 
 	/* Drop a reference taken by wake_oom_reaper */
 	put_task_struct(tsk);
@@ -650,7 +672,7 @@ static void wake_oom_reaper(struct task_struct *tsk)
 
 static int __init oom_init(void)
 {
-	oom_reaper_th = kthread_run(oom_reaper, NULL, "oom_reaper");
+	kthread_run(oom_reaper, NULL, "oom_reaper");
 	return 0;
 }
 subsys_initcall(oom_init)
Michal Hocko Sept. 14, 2018, 2:14 p.m. UTC | #21
On Fri 14-09-18 22:54:45, Tetsuo Handa wrote:
> On 2018/09/13 22:40, Michal Hocko wrote:
> > On Thu 13-09-18 20:53:24, Tetsuo Handa wrote:
> >> On 2018/09/13 20:35, Michal Hocko wrote:
> >>>> Next question.
> >>>>
> >>>>         /* Use -1 here to ensure all VMAs in the mm are unmapped */
> >>>>         unmap_vmas(&tlb, vma, 0, -1);
> >>>>
> >>>> in exit_mmap() will now race with the OOM reaper. And unmap_vmas() will handle
> >>>> VM_HUGETLB or VM_PFNMAP or VM_SHARED or !vma_is_anonymous() vmas, won't it?
> >>>> Then, is it guaranteed to be safe if the OOM reaper raced with unmap_vmas() ?
> >>>
> >>> I do not understand the question. unmap_vmas is basically MADV_DONTNEED
> >>> and that doesn't require the exclusive mmap_sem lock so yes it should be
> >>> safe those two to race (modulo bugs of course but I am not aware of any
> >>> there).
> >>>  
> >>
> >> You need to verify that races we observed with VM_LOCKED can't happen
> >> for VM_HUGETLB / VM_PFNMAP / VM_SHARED / !vma_is_anonymous() cases.
> > 
> > Well, VM_LOCKED is kind of special because that is not a permanent state
> > which might change. VM_HUGETLB, VM_PFNMAP resp VM_SHARED are not changed
> > throughout the vma lifetime.
> > 
> OK, next question.
> Is it guaranteed that arch_exit_mmap(mm) is safe with the OOM reaper?

I do not see any obvious problem, and we used to allow unmapping to race
in the exit and oom_reaper paths before we had to handle mlocked vmas
specially.
 
> Well, anyway, diffstat of your proposal would be
> 
>  include/linux/oom.h |  2 --
>  mm/internal.h       |  3 +++
>  mm/memory.c         | 28 ++++++++++++--------
>  mm/mmap.c           | 73 +++++++++++++++++++++++++++++++----------------------
>  mm/oom_kill.c       | 46 ++++++++++++++++++++++++---------
>  5 files changed, 98 insertions(+), 54 deletions(-)
> 
> trying to hand over only __free_pgtables() part at the risk of
> setting MMF_OOM_SKIP without reclaiming any memory due to dropping
> __oom_reap_task_mm() and scattering down_write()/up_write() inside
> exit_mmap(), while diffstat of my proposal (not tested yet) would be
> 
>  include/linux/mm_types.h |   2 +
>  include/linux/oom.h      |   3 +-
>  include/linux/sched.h    |   2 +-
>  kernel/fork.c            |  11 +++
>  mm/mmap.c                |  42 ++++-------
>  mm/oom_kill.c            | 182 ++++++++++++++++++++++-------------------------
>  6 files changed, 117 insertions(+), 125 deletions(-)
> 
> trying to wait until __mmput() completes and also trying to handle
> multiple OOM victims in parallel.
> 
> You are refusing the timeout based approach, but I don't think we should
> risk overlooking races/bugs just because you don't want to use a timeout.
> And you have never shown that a timeout based approach cannot work well
> enough.

I have tried to explain why I do not like the timeout based approach
several times already and I am getting fed up repeating it over and over
again. The main point though is that we know _what_ we are waiting for
and _how_ we are synchronizing the different parts, rather than "let's
wait some time and hopefully something happens".

Moreover, we have a backoff mechanism. The new class of oom victims
with a large amount of memory in page tables can fit into that
model. The new model adds a few more branches to the exit path, so if this
is acceptable to other mm developers then I think it is much preferable
to adding a different retry mechanism.
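
For reference, the synchronization being described can be reduced to a small
userspace model (the lock is collapsed to a flag and all names are
illustrative; this is a sketch of the protocol, not the patch itself):

```c
#include <stddef.h>

/* Model of the mmap-exit vs. oom_reaper handover: the exit path clears
 * mm->mmap under the exclusive lock once the remaining tear down cannot
 * block, and sets MMF_OOM_SKIP itself only after the tear down is done.
 * The reaper backs off as soon as it sees mmap == NULL. */
struct mm_model {
	int write_locked;	/* stands in for mmap_sem held for write */
	void *mmap;		/* non-NULL while vmas are still linked */
	int mmf_oom_skip;
};

enum reap_result { REAP_RETRY, REAP_HANDED_OVER, REAP_REAPED };

static enum reap_result oom_reap(struct mm_model *mm)
{
	if (mm->write_locked)
		return REAP_RETRY;		/* trylock failed, back off */
	if (mm->mmap == NULL)
		return REAP_HANDED_OVER;	/* exit path will finish */
	return REAP_REAPED;			/* would unmap vmas here */
}

static void exit_teardown(struct mm_model *mm)
{
	mm->write_locked = 1;
	/* __unlink_vmas() would run here */
	mm->mmap = NULL;			/* publish the handover */
	mm->write_locked = 0;
	/* __free_pgtables() runs without the lock and cannot block */
	mm->mmf_oom_skip = 1;
}
```

The point of the sketch is that the reaper's decision is driven by explicit
state (`mmap == NULL`) rather than by elapsed time.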
Tetsuo Handa Sept. 14, 2018, 5:07 p.m. UTC | #22
On 2018/09/14 23:14, Michal Hocko wrote:
> On Fri 14-09-18 22:54:45, Tetsuo Handa wrote:
>> OK, next question.
>> Is it guaranteed that arch_exit_mmap(mm) is safe with the OOM reaper?
> 
> I do not see any obvious problem, and we used to allow unmapping to race
> in the exit and oom_reaper paths before we had to handle mlocked vmas
> specially.

Although we used to allow arch_exit_mmap() to race, it might simply be
that we hit the mlock() problem first. I want it to be "clearly no
problem".



>> Well, anyway, diffstat of your proposal would be
>>
>>  include/linux/oom.h |  2 --
>>  mm/internal.h       |  3 +++
>>  mm/memory.c         | 28 ++++++++++++--------
>>  mm/mmap.c           | 73 +++++++++++++++++++++++++++++++----------------------
>>  mm/oom_kill.c       | 46 ++++++++++++++++++++++++---------
>>  5 files changed, 98 insertions(+), 54 deletions(-)
>>
>> trying to hand over only __free_pgtables() part at the risk of
>> setting MMF_OOM_SKIP without reclaiming any memory due to dropping
>> __oom_reap_task_mm() and scattering down_write()/up_write() inside
>> exit_mmap(), while diffstat of my proposal (not tested yet) would be
>>
>>  include/linux/mm_types.h |   2 +
>>  include/linux/oom.h      |   3 +-
>>  include/linux/sched.h    |   2 +-
>>  kernel/fork.c            |  11 +++
>>  mm/mmap.c                |  42 ++++-------
>>  mm/oom_kill.c            | 182 ++++++++++++++++++++++-------------------------
>>  6 files changed, 117 insertions(+), 125 deletions(-)
>>
>> trying to wait until __mmput() completes and also trying to handle
>> multiple OOM victims in parallel.

Below is the fix-up for my proposal. It seems to be working well enough.

 include/linux/oom.h |  1 -
 kernel/fork.c       |  2 +-
 mm/oom_kill.c       | 30 ++++++++++++------------------
 3 files changed, 13 insertions(+), 20 deletions(-)



>>
>> You are refusing the timeout based approach but I don't think this is
>> something we have to fret over for the possibility of overlooking
>> races/bugs just because you don't want to use a timeout. And you have
>> never shown that the timeout based approach cannot work well enough.
> 
> I have tried to explain why I do not like the timeout based approach
> several times already and I am getting fed up repeating it over and
> over again. The main point though is that we know _what_ we are waiting
> for and _how_ we are synchronizing different parts rather than waiting
> for some time and hoping that something happens.

At the risk of overlooking bugs. Only a few people are checking for OOM
lockup possibilities, which makes taking your aggressive approach dangerous.

> 
> Moreover, we have a backoff mechanism. The new class of oom victims
> with a large amount of memory in page tables can fit into that
> model. The new model adds a few more branches to the exit path, so if
> this is acceptable to other mm developers then I think it is much
> preferable to adding a different retry mechanism.
> 

These "few more branches" have to be "clearly no problem" rather than
just "passed some stress tests". And so far there has been no response
from other mm developers.






diff --git a/include/linux/oom.h b/include/linux/oom.h
index 8a987c6..9d30c15 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -104,7 +104,6 @@ extern unsigned long oom_badness(struct task_struct *p,
 extern bool out_of_memory(struct oom_control *oc);
 
 extern void exit_oom_victim(void);
-extern void exit_oom_mm(struct mm_struct *mm);
 
 extern int register_oom_notifier(struct notifier_block *nb);
 extern int unregister_oom_notifier(struct notifier_block *nb);
diff --git a/kernel/fork.c b/kernel/fork.c
index 3e662bb..5c32791 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1018,7 +1018,7 @@ static inline void __mmput(struct mm_struct *mm)
 	if (mm->binfmt)
 		module_put(mm->binfmt->module);
 	if (oom)
-		exit_oom_mm(mm);
+		set_bit(MMF_OOM_SKIP, &mm->flags);
 	mmdrop(mm);
 }
 
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 01fa0d7..cff41fa 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -561,6 +561,14 @@ static void oom_reap_task(struct task_struct *tsk)
 	struct mm_struct *mm = tsk->signal->oom_mm;
 	unsigned long pages;
 
+	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
+		spin_lock(&oom_reaper_lock);
+		list_del(&tsk->oom_victim);
+		spin_unlock(&oom_reaper_lock);
+		/* Drop a reference taken by wake_oom_reaper(). */
+		put_task_struct(tsk);
+		return;
+	}
 	oom_reap_task_mm(tsk, mm);
 	pages = oom_badness_pages(mm);
 	/* Hide this mm from OOM killer if stalled for too long. */
@@ -581,6 +589,7 @@ static int oom_reaper(void *unused)
 {
 	while (true) {
 		struct task_struct *tsk;
+		struct task_struct *tmp;
 
 		if (!list_empty(&oom_reaper_list))
 			schedule_timeout_uninterruptible(HZ / 10);
@@ -588,32 +597,17 @@ static int oom_reaper(void *unused)
 			wait_event_freezable(oom_reaper_wait,
 					     !list_empty(&oom_reaper_list));
 		spin_lock(&oom_reaper_lock);
-		list_for_each_entry(tsk, &oom_reaper_list, oom_victim) {
-			get_task_struct(tsk);
+		list_for_each_entry_safe(tsk, tmp, &oom_reaper_list,
+					 oom_victim) {
 			spin_unlock(&oom_reaper_lock);
 			oom_reap_task(tsk);
 			spin_lock(&oom_reaper_lock);
-			put_task_struct(tsk);
 		}
 		spin_unlock(&oom_reaper_lock);
 	}
 	return 0;
 }
 
-void exit_oom_mm(struct mm_struct *mm)
-{
-	struct task_struct *tsk;
-
-	spin_lock(&oom_reaper_lock);
-	list_for_each_entry(tsk, &oom_reaper_list, oom_victim)
-		if (tsk->signal->oom_mm == mm)
-			break;
-	list_del(&tsk->oom_victim);
-	spin_unlock(&oom_reaper_lock);
-	/* Drop a reference taken by wake_oom_reaper(). */
-	put_task_struct(tsk);
-}
-
 static void wake_oom_reaper(struct task_struct *tsk)
 {
 	struct mm_struct *mm = tsk->signal->oom_mm;
@@ -632,7 +626,7 @@ static void wake_oom_reaper(struct task_struct *tsk)
 	get_task_struct(tsk);
 
 	spin_lock(&oom_reaper_lock);
-	list_add_tail(&tsk->oom_victim, &oom_reaper_list);
+	list_add(&tsk->oom_victim, &oom_reaper_list);
 	spin_unlock(&oom_reaper_lock);
 	trace_wake_reaper(tsk->pid);
 	wake_up(&oom_reaper_wait);