
[v4,1/3] mm: protect free_pgtables with mmap_lock write lock in exit_mmap

Message ID: 20211208212211.2860249-1-surenb@google.com
State: New
Series: [v4,1/3] mm: protect free_pgtables with mmap_lock write lock in exit_mmap

Commit Message

Suren Baghdasaryan Dec. 8, 2021, 9:22 p.m. UTC
The oom-reaper and the process_mrelease system call should protect
against races with exit_mmap, which can destroy page tables while
they walk the VMA tree. The oom-reaper protects against that race by
setting MMF_OOM_VICTIM and by relying on exit_mmap to set
MMF_OOM_SKIP before taking and releasing mmap_write_lock.
process_mrelease has to elevate mm->mm_users to prevent such a race.
Both the oom-reaper and process_mrelease hold mmap_read_lock when
walking the VMA tree. The locking rules and mechanisms could be
simpler if exit_mmap took mmap_write_lock while executing destructive
operations such as free_pgtables.
Change exit_mmap to hold the mmap_write_lock when calling
free_pgtables and remove_vma. Operations like unmap_vmas and
unlock_range are not destructive and could run under mmap_read_lock,
but for simplicity we take one mmap_write_lock during almost the
entire operation. Note also that because the oom-reaper checks the
VM_LOCKED flag, unlock_range() should not be allowed to race with it.
Before this patch, remove_vma used to be called with no locks held.
However, with fput being executed asynchronously and vm_ops->close
not being allowed to hold mmap_lock (it is called from __split_vma
with mmap_sem held for write), changing that should be fine.
In most cases this lock should be uncontended. Previously, Kirill
reported a ~4% regression caused by a similar change [1]. We reran
the same test and, although the individual results are quite noisy,
the percentiles show a lower regression, with 1.6% being the worst
case [2]. The change allows the oom-reaper and process_mrelease to
execute safely under mmap_read_lock without worrying that exit_mmap
might destroy page tables from under them.

[1] https://lore.kernel.org/all/20170725141723.ivukwhddk2voyhuc@node.shutemov.name/
[2] https://lore.kernel.org/all/CAJuCfpGC9-c9P40x7oy=jy5SphMcd0o0G_6U1-+JAziGKG6dGA@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
changes in v4
- Separated comments describing vm_operations_struct::close locking
requirements into a separate patch, per Matthew Wilcox

 mm/mmap.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
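
To make the reader side concrete, here is a simplified, illustrative
sketch (invented helper name, not the exact kernel code) of how a
reader such as process_mrelease pins the mm and walks the VMA list
under mmap_read_lock:

static bool reap_mm_sketch(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	/*
	 * process_mrelease-style pinning: while mm_users is elevated,
	 * mmput() cannot drop it to zero, so exit_mmap() cannot start.
	 */
	if (!mmget_not_zero(mm))
		return true;	/* mm already exiting, nothing to do */

	if (!mmap_read_trylock(mm)) {
		mmput(mm);
		return false;	/* contended, caller retries */
	}

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* Mlocked VMAs must be skipped; see the VM_LOCKED note above. */
		if (vma->vm_flags & VM_LOCKED)
			continue;
		/* ... unmap the VMA's page range here ... */
	}

	mmap_read_unlock(mm);
	mmput(mm);
	return true;
}

The oom-reaper deliberately does not pin mm_users (it must be able to
reap a task that is already exiting), which is why it relies on the
MMF_OOM_VICTIM/MMF_OOM_SKIP handshake instead. With this patch, its
read-locked walk is additionally excluded from the entire destructive
section of exit_mmap.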

Comments

Michal Hocko Dec. 9, 2021, 8:55 a.m. UTC | #1
On Wed 08-12-21 13:22:09, Suren Baghdasaryan wrote:
> The oom-reaper and the process_mrelease system call should protect
> against races with exit_mmap, which can destroy page tables while
> they walk the VMA tree. The oom-reaper protects against that race by
> setting MMF_OOM_VICTIM and by relying on exit_mmap to set
> MMF_OOM_SKIP before taking and releasing mmap_write_lock.
> process_mrelease has to elevate mm->mm_users to prevent such a race.
> Both the oom-reaper and process_mrelease hold mmap_read_lock when
> walking the VMA tree. The locking rules and mechanisms could be
> simpler if exit_mmap took mmap_write_lock while executing destructive
> operations such as free_pgtables.
> Change exit_mmap to hold the mmap_write_lock when calling
> free_pgtables and remove_vma. Operations like unmap_vmas and
> unlock_range are not destructive and could run under mmap_read_lock,
> but for simplicity we take one mmap_write_lock during almost the
> entire operation.

unlock_range is not safe to be called under read lock. See 27ae357fa82b
("mm, oom: fix concurrent munlock and oom reaper unmap, v3").

> Note also that because the oom-reaper checks the VM_LOCKED flag,
> unlock_range() should not be allowed to race with it.
> Before this patch, remove_vma used to be called with no locks held.
> However, with fput being executed asynchronously and vm_ops->close
> not being allowed to hold mmap_lock (it is called from __split_vma
> with mmap_sem held for write), changing that should be fine.
> In most cases this lock should be uncontended. Previously, Kirill
> reported a ~4% regression caused by a similar change [1]. We reran
> the same test and, although the individual results are quite noisy,
> the percentiles show a lower regression, with 1.6% being the worst
> case [2]. The change allows the oom-reaper and process_mrelease to
> execute safely under mmap_read_lock without worrying that exit_mmap
> might destroy page tables from under them.
> 
> [1] https://lore.kernel.org/all/20170725141723.ivukwhddk2voyhuc@node.shutemov.name/
> [2] https://lore.kernel.org/all/CAJuCfpGC9-c9P40x7oy=jy5SphMcd0o0G_6U1-+JAziGKG6dGA@mail.gmail.com/
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

The patch looks good otherwise. Btw., when I was trying to do something
similar in the past, Hugh noted that we can get rid of the same
lock&&unlock trick in ksm. Maybe you want to have a look at that as well
;)
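
For reference, the trick in question is an empty write-locked critical
section used purely as a barrier, as in the hunk this patch deletes
from exit_mmap (ksm has a similar pattern in __ksm_exit()):

	set_bit(MMF_OOM_SKIP, &mm->flags);
	/*
	 * Empty critical section: nothing is done under the lock. Its
	 * only purpose is to wait out any reader that took
	 * mmap_read_lock before MMF_OOM_SKIP became visible.
	 */
	mmap_write_lock(mm);
	mmap_write_unlock(mm);

Once exit_mmap holds the write lock across the destructive phase
itself, this standalone barrier becomes unnecessary.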

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
> changes in v4
> - Separated comments describing vm_operations_struct::close locking
> requirements into a separate patch, per Matthew Wilcox
> 
>  mm/mmap.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index bfb0ea164a90..f4e09d390a07 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3142,25 +3142,27 @@ void exit_mmap(struct mm_struct *mm)
>  		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
>  		 * __oom_reap_task_mm() will not block.
>  		 *
> -		 * This needs to be done before calling munlock_vma_pages_all(),
> +		 * This needs to be done before calling unlock_range(),
>  		 * which clears VM_LOCKED, otherwise the oom reaper cannot
>  		 * reliably test it.
>  		 */
>  		(void)__oom_reap_task_mm(mm);
>  
>  		set_bit(MMF_OOM_SKIP, &mm->flags);
> -		mmap_write_lock(mm);
> -		mmap_write_unlock(mm);
>  	}
>  
> +	mmap_write_lock(mm);
>  	if (mm->locked_vm)
>  		unlock_range(mm->mmap, ULONG_MAX);
>  
>  	arch_exit_mmap(mm);
>  
>  	vma = mm->mmap;
> -	if (!vma)	/* Can happen if dup_mmap() received an OOM */
> +	if (!vma) {
> +		/* Can happen if dup_mmap() received an OOM */
> +		mmap_write_unlock(mm);
>  		return;
> +	}
>  
>  	lru_add_drain();
>  	flush_cache_mm(mm);
> @@ -3171,16 +3173,14 @@ void exit_mmap(struct mm_struct *mm)
>  	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
>  	tlb_finish_mmu(&tlb);
>  
> -	/*
> -	 * Walk the list again, actually closing and freeing it,
> -	 * with preemption enabled, without holding any MM locks.
> -	 */
> +	/* Walk the list again, actually closing and freeing it. */
>  	while (vma) {
>  		if (vma->vm_flags & VM_ACCOUNT)
>  			nr_accounted += vma_pages(vma);
>  		vma = remove_vma(vma);
>  		cond_resched();
>  	}
> +	mmap_write_unlock(mm);
>  	vm_unacct_memory(nr_accounted);
>  }
>  
> -- 
> 2.34.1.400.ga245620fadb-goog
Suren Baghdasaryan Dec. 9, 2021, 7:03 p.m. UTC | #2
On Thu, Dec 9, 2021 at 12:55 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 08-12-21 13:22:09, Suren Baghdasaryan wrote:
> > The oom-reaper and the process_mrelease system call should protect
> > against races with exit_mmap, which can destroy page tables while
> > they walk the VMA tree. The oom-reaper protects against that race by
> > setting MMF_OOM_VICTIM and by relying on exit_mmap to set
> > MMF_OOM_SKIP before taking and releasing mmap_write_lock.
> > process_mrelease has to elevate mm->mm_users to prevent such a race.
> > Both the oom-reaper and process_mrelease hold mmap_read_lock when
> > walking the VMA tree. The locking rules and mechanisms could be
> > simpler if exit_mmap took mmap_write_lock while executing destructive
> > operations such as free_pgtables.
> > Change exit_mmap to hold the mmap_write_lock when calling
> > free_pgtables and remove_vma. Operations like unmap_vmas and
> > unlock_range are not destructive and could run under mmap_read_lock,
> > but for simplicity we take one mmap_write_lock during almost the
> > entire operation.
>
> unlock_range is not safe to be called under read lock. See 27ae357fa82b
> ("mm, oom: fix concurrent munlock and oom reaper unmap, v3").

Ok, I'll remove the sentence above.
Is my understanding correct that it is unsafe only because the
oom-reaper can't deal with VM_LOCKED, and that otherwise it would be
fine?

>
> > Note also that because the oom-reaper checks the VM_LOCKED flag,
> > unlock_range() should not be allowed to race with it.
> > Before this patch, remove_vma used to be called with no locks held.
> > However, with fput being executed asynchronously and vm_ops->close
> > not being allowed to hold mmap_lock (it is called from __split_vma
> > with mmap_sem held for write), changing that should be fine.
> > In most cases this lock should be uncontended. Previously, Kirill
> > reported a ~4% regression caused by a similar change [1]. We reran
> > the same test and, although the individual results are quite noisy,
> > the percentiles show a lower regression, with 1.6% being the worst
> > case [2]. The change allows the oom-reaper and process_mrelease to
> > execute safely under mmap_read_lock without worrying that exit_mmap
> > might destroy page tables from under them.
> >
> > [1] https://lore.kernel.org/all/20170725141723.ivukwhddk2voyhuc@node.shutemov.name/
> > [2] https://lore.kernel.org/all/CAJuCfpGC9-c9P40x7oy=jy5SphMcd0o0G_6U1-+JAziGKG6dGA@mail.gmail.com/
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> The patch looks good otherwise. Btw., when I was trying to do something
> similar in the past, Hugh noted that we can get rid of the same
> lock&&unlock trick in ksm. Maybe you want to have a look at that as well
> ;)

I'll take a look after we clean up this path completely (oom pieces included).

>
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

>
> Thanks!
>
> > ---
> > changes in v4
> > - Separated comments describing vm_operations_struct::close locking
> > requirements into a separate patch, per Matthew Wilcox
> >
> >  mm/mmap.c | 16 ++++++++--------
> >  1 file changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index bfb0ea164a90..f4e09d390a07 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -3142,25 +3142,27 @@ void exit_mmap(struct mm_struct *mm)
> >                * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
> >                * __oom_reap_task_mm() will not block.
> >                *
> > -              * This needs to be done before calling munlock_vma_pages_all(),
> > +              * This needs to be done before calling unlock_range(),
> >                * which clears VM_LOCKED, otherwise the oom reaper cannot
> >                * reliably test it.
> >                */
> >               (void)__oom_reap_task_mm(mm);
> >
> >               set_bit(MMF_OOM_SKIP, &mm->flags);
> > -             mmap_write_lock(mm);
> > -             mmap_write_unlock(mm);
> >       }
> >
> > +     mmap_write_lock(mm);
> >       if (mm->locked_vm)
> >               unlock_range(mm->mmap, ULONG_MAX);
> >
> >       arch_exit_mmap(mm);
> >
> >       vma = mm->mmap;
> > -     if (!vma)       /* Can happen if dup_mmap() received an OOM */
> > +     if (!vma) {
> > +             /* Can happen if dup_mmap() received an OOM */
> > +             mmap_write_unlock(mm);
> >               return;
> > +     }
> >
> >       lru_add_drain();
> >       flush_cache_mm(mm);
> > @@ -3171,16 +3173,14 @@ void exit_mmap(struct mm_struct *mm)
> >       free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> >       tlb_finish_mmu(&tlb);
> >
> > -     /*
> > -      * Walk the list again, actually closing and freeing it,
> > -      * with preemption enabled, without holding any MM locks.
> > -      */
> > +     /* Walk the list again, actually closing and freeing it. */
> >       while (vma) {
> >               if (vma->vm_flags & VM_ACCOUNT)
> >                       nr_accounted += vma_pages(vma);
> >               vma = remove_vma(vma);
> >               cond_resched();
> >       }
> > +     mmap_write_unlock(mm);
> >       vm_unacct_memory(nr_accounted);
> >  }
> >
> > --
> > 2.34.1.400.ga245620fadb-goog
>
> --
> Michal Hocko
> SUSE Labs
Michal Hocko Dec. 10, 2021, 9:20 a.m. UTC | #3
On Thu 09-12-21 11:03:11, Suren Baghdasaryan wrote:
> On Thu, Dec 9, 2021 at 12:55 AM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Wed 08-12-21 13:22:09, Suren Baghdasaryan wrote:
> > > The oom-reaper and the process_mrelease system call should protect
> > > against races with exit_mmap, which can destroy page tables while
> > > they walk the VMA tree. The oom-reaper protects against that race by
> > > setting MMF_OOM_VICTIM and by relying on exit_mmap to set
> > > MMF_OOM_SKIP before taking and releasing mmap_write_lock.
> > > process_mrelease has to elevate mm->mm_users to prevent such a race.
> > > Both the oom-reaper and process_mrelease hold mmap_read_lock when
> > > walking the VMA tree. The locking rules and mechanisms could be
> > > simpler if exit_mmap took mmap_write_lock while executing destructive
> > > operations such as free_pgtables.
> > > Change exit_mmap to hold the mmap_write_lock when calling
> > > free_pgtables and remove_vma. Operations like unmap_vmas and
> > > unlock_range are not destructive and could run under mmap_read_lock,
> > > but for simplicity we take one mmap_write_lock during almost the
> > > entire operation.
> >
> > unlock_range is not safe to be called under read lock. See 27ae357fa82b
> > ("mm, oom: fix concurrent munlock and oom reaper unmap, v3").
> 
> Ok, I'll remove the sentence above.
> Is my understanding correct that it is unsafe only because the
> oom-reaper can't deal with VM_LOCKED, and that otherwise it would be
> fine?

The commit message of 27ae357fa82b goes into details that I have already forgotten.

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index bfb0ea164a90..f4e09d390a07 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3142,25 +3142,27 @@  void exit_mmap(struct mm_struct *mm)
 		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
 		 * __oom_reap_task_mm() will not block.
 		 *
-		 * This needs to be done before calling munlock_vma_pages_all(),
+		 * This needs to be done before calling unlock_range(),
 		 * which clears VM_LOCKED, otherwise the oom reaper cannot
 		 * reliably test it.
 		 */
 		(void)__oom_reap_task_mm(mm);
 
 		set_bit(MMF_OOM_SKIP, &mm->flags);
-		mmap_write_lock(mm);
-		mmap_write_unlock(mm);
 	}
 
+	mmap_write_lock(mm);
 	if (mm->locked_vm)
 		unlock_range(mm->mmap, ULONG_MAX);
 
 	arch_exit_mmap(mm);
 
 	vma = mm->mmap;
-	if (!vma)	/* Can happen if dup_mmap() received an OOM */
+	if (!vma) {
+		/* Can happen if dup_mmap() received an OOM */
+		mmap_write_unlock(mm);
 		return;
+	}
 
 	lru_add_drain();
 	flush_cache_mm(mm);
@@ -3171,16 +3173,14 @@  void exit_mmap(struct mm_struct *mm)
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);
 
-	/*
-	 * Walk the list again, actually closing and freeing it,
-	 * with preemption enabled, without holding any MM locks.
-	 */
+	/* Walk the list again, actually closing and freeing it. */
 	while (vma) {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
 		vma = remove_vma(vma);
 		cond_resched();
 	}
+	mmap_write_unlock(mm);
 	vm_unacct_memory(nr_accounted);
 }