[RFC,03/24] mm: allow VM_FAULT_RETRY for multiple times

Message ID 20190121075722.7945-4-peterx@redhat.com (mailing list archive)
State New, archived
Series userfaultfd: write protection support

Commit Message

Peter Xu Jan. 21, 2019, 7:57 a.m. UTC
The idea comes from a discussion between Linus and Andrea [1].

Before this patch we only allow a page fault to be retried once.  We
achieve this by clearing the FAULT_FLAG_ALLOW_RETRY flag when calling
handle_mm_fault() the second time.  This was mainly done to avoid
unexpected starvation of the system by looping forever to handle the
page fault on a single page.  However that should hardly happen:
after all, every code path that returns VM_FAULT_RETRY first waits
for a condition (during which time it should possibly yield the CPU)
before VM_FAULT_RETRY is actually returned.

This patch removes the restriction by keeping the
FAULT_FLAG_ALLOW_RETRY flag set when we receive VM_FAULT_RETRY.  It
means that the page fault handler can now retry the page fault
multiple times if necessary, without needing to generate another page
fault event.  Meanwhile we still keep the FAULT_FLAG_TRIED flag, so
the page fault handler can still identify whether a page fault is the
first attempt or not.
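
As an editorial illustration (this is exactly the pattern the diff
below applies across the architectures), the per-arch retry path
changes roughly like this:

/* Before: the arch fault handler allowed only a single retry. */
	if (fault & VM_FAULT_RETRY) {
		flags &= ~FAULT_FLAG_ALLOW_RETRY;	/* no further retries */
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}

/* After: FAULT_FLAG_ALLOW_RETRY stays set, so handle_mm_fault() may
 * return VM_FAULT_RETRY again, while FAULT_FLAG_TRIED still
 * distinguishes retries from the first attempt. */
	if (fault & VM_FAULT_RETRY) {
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}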

The GUP code is not touched yet and will be covered in a follow-up
patch.

This is a nice enhancement to the current code, and at the same time
supporting material for the future userfaultfd-writeprotect work,
since in that work there will always be an explicit userfault
write-protect retry for protected pages; if that cannot resolve the
page fault (e.g., when userfaultfd-writeprotect is used in conjunction
with shared memory) then we'll possibly need a third retry of the page
fault.  It might also benefit other potential users with similar
requirements, such as userfault write-protection.

Please read the thread below for more information.

[1] https://lkml.org/lkml/2017/11/2/833

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/alpha/mm/fault.c      | 2 +-
 arch/arc/mm/fault.c        | 1 -
 arch/arm/mm/fault.c        | 3 ---
 arch/arm64/mm/fault.c      | 5 -----
 arch/hexagon/mm/vm_fault.c | 1 -
 arch/ia64/mm/fault.c       | 1 -
 arch/m68k/mm/fault.c       | 3 ---
 arch/microblaze/mm/fault.c | 1 -
 arch/mips/mm/fault.c       | 1 -
 arch/nds32/mm/fault.c      | 1 -
 arch/nios2/mm/fault.c      | 3 ---
 arch/openrisc/mm/fault.c   | 1 -
 arch/parisc/mm/fault.c     | 2 --
 arch/powerpc/mm/fault.c    | 5 -----
 arch/riscv/mm/fault.c      | 5 -----
 arch/s390/mm/fault.c       | 5 +----
 arch/sh/mm/fault.c         | 1 -
 arch/sparc/mm/fault_32.c   | 1 -
 arch/sparc/mm/fault_64.c   | 1 -
 arch/um/kernel/trap.c      | 1 -
 arch/unicore32/mm/fault.c  | 6 +-----
 arch/x86/mm/fault.c        | 1 -
 arch/xtensa/mm/fault.c     | 1 -
 23 files changed, 3 insertions(+), 49 deletions(-)

Comments

Jerome Glisse Jan. 21, 2019, 3:55 p.m. UTC | #1
On Mon, Jan 21, 2019 at 03:57:01PM +0800, Peter Xu wrote:
> [...]
> 
So there is nothing protecting against starvation after this patch,
AFAICT.  Do we have sufficient proof that we never have a scenario
where one process might starve another on a fault?

For instance, some page locking could starve one process.


Peter Xu Jan. 22, 2019, 8:22 a.m. UTC | #2
On Mon, Jan 21, 2019 at 10:55:36AM -0500, Jerome Glisse wrote:
> On Mon, Jan 21, 2019 at 03:57:01PM +0800, Peter Xu wrote:
> > [...]
> 
> So there is nothing protecting against starvation after this patch,
> AFAICT.  Do we have sufficient proof that we never have a scenario
> where one process might starve another on a fault?
> 
> For instance, some page locking could starve one process.

Hi, Jerome,

Do you mean lock_page()?

AFAIU lock_page() will only make the process itself yield until the
lock is released, so IMHO it's not really starvation but natural
behavior.  After all, the process cannot continue without handling
the page fault correctly.

Or, when you say "starvation", do you mean that we might return
VM_FAULT_RETRY from handle_mm_fault() continuously, so we'll loop
over and over inside the page fault handler?

Thanks,

Jerome Glisse Jan. 22, 2019, 4:53 p.m. UTC | #3
On Tue, Jan 22, 2019 at 04:22:38PM +0800, Peter Xu wrote:
> [...]
> 
> Hi, Jerome,
> 
> Do you mean lock_page()?
> 
> AFAIU lock_page() will only make the process itself yield until the
> lock is released, so IMHO it's not really starvation but natural
> behavior.  After all, the process cannot continue without handling
> the page fault correctly.
> 
> Or, when you say "starvation", do you mean that we might return
> VM_FAULT_RETRY from handle_mm_fault() continuously, so we'll loop
> over and over inside the page fault handler?

That one: every time we retry, someone else is holding the lock, and
thus lock_page_or_retry() will continuously retry.  Some process just
gets unlucky ;)

With the existing code, because we remove the retry flag, on the
second try we end up waiting for the page lock while holding the
mmap_sem, so we know that we are in line for the page lock and will
get it once it is our turn.
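
[For readers following along: the mechanism described here is the
page-lock slow path.  A simplified editorial sketch of this era's
lock_page_or_retry()/__lock_page_or_retry(), with the killable
variants omitted:]

/* include/linux/pagemap.h (simplified): try the page lock first and
 * only fall into the slow path on contention. */
static inline int lock_page_or_retry(struct page *page,
				     struct mm_struct *mm,
				     unsigned int flags)
{
	might_sleep();
	return trylock_page(page) || __lock_page_or_retry(page, mm, flags);
}

/* mm/filemap.c (simplified): with FAULT_FLAG_ALLOW_RETRY set we drop
 * the mmap_sem and merely wait for the page to become unlocked -- we
 * are never queued for the lock itself, so the next trylock_page()
 * can lose again.  Without FAULT_FLAG_ALLOW_RETRY we sleep in
 * __lock_page(), i.e. we queue up and will get the lock on our turn. */
int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
			 unsigned int flags)
{
	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		if (flags & FAULT_FLAG_RETRY_NOWAIT)
			return 0;	/* mmap_sem NOT released here */
		up_read(&mm->mmap_sem);
		wait_on_page_locked(page);
		return 0;		/* caller sees VM_FAULT_RETRY */
	}
	__lock_page(page);		/* wait in line, mmap_sem held */
	return 1;
}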

Cheers,
Jérôme
Peter Xu Jan. 23, 2019, 2:12 a.m. UTC | #4
On Tue, Jan 22, 2019 at 11:53:10AM -0500, Jerome Glisse wrote:
> [...]
> 
> That one: every time we retry, someone else is holding the lock, and
> thus lock_page_or_retry() will continuously retry.  Some process just
> gets unlucky ;)
> 
> With the existing code, because we remove the retry flag, on the
> second try we end up waiting for the page lock while holding the
> mmap_sem, so we know that we are in line for the page lock and will
> get it once it is our turn.

Ah I see. :)  It's indeed a valid question.

Firstly, note that even after this patch we can still identify whether
we're at the first attempt or not by checking against FAULT_FLAG_TRIED
(it will be applied to the fault flags in all the retries but not in
the first attempt).  So IMHO this change might suit us if we want to
keep the old behavior [1]:

diff --git a/mm/filemap.c b/mm/filemap.c
index 9f5e323e883e..44942c78bb92 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1351,7 +1351,7 @@ EXPORT_SYMBOL_GPL(__lock_page_killable);
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
                         unsigned int flags)
 {
-       if (flags & FAULT_FLAG_ALLOW_RETRY) {
+       if (!(flags & FAULT_FLAG_TRIED)) {
                /*
                 * CAUTION! In this case, mmap_sem is not released
                 * even though return 0.

But at the same time I'm stepping back trying to see the whole
picture...  My understanding is that this is really a policy that we
can decide, a trade-off between "being polite or not on the mmap_sem":
when taking the page lock in the slow path we either

  (1) release the mmap_sem before waiting: polite, but uncertain to
      finally get the lock, or,

  (2) keep the mmap_sem while waiting: not as polite, but certain to
      get the lock.

We did (2) before on the retries because the existing code only allows
one retry, so we can't afford to fail on the 2nd attempt.  That seems
a good reason to be "impolite" - we held the mmap_sem without
considering others because we had already been "polite" once.  I'm not
that experienced in mm development, but AFAIU solution (2) only
reduces our own chance of starvation while adding that chance of
starvation to the other processes that want the mmap_sem instead.  So
IMHO the starvation issue already existed even before this patch, and
it looks natural and sane to me so far...  With that in mind, I can't
say that the change at [1] would be better; maybe it would be even
fairer to always release the mmap_sem first in this case (assuming
we'll get the lock eventually, though we might pay with more retries)?

Or, is there a way to constantly starve the process that handles the
page fault that I've totally missed?

Thanks,
Jerome Glisse Jan. 23, 2019, 2:39 a.m. UTC | #5
On Wed, Jan 23, 2019 at 10:12:41AM +0800, Peter Xu wrote:
> [...]
> 
> Ah I see. :)  It's indeed a valid question.
> 
> Firstly, note that even after this patch we can still identify whether
> we're at the first attempt or not by checking against FAULT_FLAG_TRIED
> (it will be applied to the fault flags in all the retries but not in
> the first attempt).  So IMHO this change might suit us if we want to
> keep the old behavior [1]:
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 9f5e323e883e..44942c78bb92 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1351,7 +1351,7 @@ EXPORT_SYMBOL_GPL(__lock_page_killable);
>  int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
>                          unsigned int flags)
>  {
> -       if (flags & FAULT_FLAG_ALLOW_RETRY) {
> +       if (!(flags & FAULT_FLAG_TRIED)) {
>                 /*
>                  * CAUTION! In this case, mmap_sem is not released
>                  * even though return 0.

I need to check how FAULT_FLAG_TRIED has been used so far, but yes,
it looks like this would keep the existing behavior intact.

> 
> But at the same time I'm stepping back trying to see the whole
> picture...  My understanding is that this is really a policy that we
> can decide, a trade-off between "being polite or not on the mmap_sem":
> when taking the page lock in the slow path we either
> 
>   (1) release the mmap_sem before waiting: polite, but uncertain to
>       finally get the lock, or,
> 
>   (2) keep the mmap_sem while waiting: not as polite, but certain to
>       get the lock.
> 
> We did (2) before on the retries because the existing code only allows
> one retry, so we can't afford to fail on the 2nd attempt.  That seems
> a good reason to be "impolite" - we held the mmap_sem without
> considering others because we had already been "polite" once.  I'm not
> that experienced in mm development, but AFAIU solution (2) only
> reduces our own chance of starvation while adding that chance of
> starvation to the other processes that want the mmap_sem instead.  So
> IMHO the starvation issue already existed even before this patch, and
> it looks natural and sane to me so far...  With that in mind, I can't
> say that the change at [1] would be better; maybe it would be even
> fairer to always release the mmap_sem first in this case (assuming
> we'll get the lock eventually, though we might pay with more retries)?

The existing code does not starve anyone: the mmap_sem is an
rw_semaphore, so if there is no writer waiting then no one waits, and
if there is a writer waiting then everyone waits in line, which is
fair to the writer.  So with the existing code we have "fair" behavior
where everyone waits their turn in line.  After this patch we can end
up in an unfair situation where one thread might be continuously
starved because it is only doing a try-lock and is thus never added
to the wait line.


> Or, is there a way to constantly starve the process that handles the
> page fault that I've totally missed?

That's the discussion: with your change a process can constantly
retry the page fault because it never gets the lock on a page, so it
can end up in an infinite fault retry.

Yes, it is unlikely to be infinite, but it can change how the kernel
behaves for some workloads and thus impact existing users.
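
[Editorial sketch of the failure mode being discussed: with
FAULT_FLAG_ALLOW_RETRY kept across retries, a contended file-backed
fault effectively becomes the loop below; the local names follow the
fault path sketched earlier in the thread.]

retry:
	down_read(&mm->mmap_sem);
	fault = handle_mm_fault(vma, address, flags);
	/*
	 * Inside, lock_page_or_retry() did roughly:
	 *   if (trylock_page(page))  -> page locked, fault completes;
	 *   else  up_read(&mm->mmap_sem);
	 *         wait_on_page_locked(page);  <- NOT queued for the lock
	 *         -> VM_FAULT_RETRY
	 * Another locker can win every subsequent trylock_page(), so in
	 * principle this loop never terminates.
	 */
	if (fault & VM_FAULT_RETRY) {
		flags |= FAULT_FLAG_TRIED;	/* ALLOW_RETRY stays set */
		goto retry;
	}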

Cheers,
Jérôme
Peter Xu Jan. 24, 2019, 5:45 a.m. UTC | #6
On Tue, Jan 22, 2019 at 09:39:47PM -0500, Jerome Glisse wrote:
> [...]
> 
> The existing code does not starve anyone: the mmap_sem is an
> rw_semaphore, so if there is no writer waiting then no one waits, and
> if there is a writer waiting then everyone waits in line, which is
> fair to the writer.  So with the existing code we have "fair" behavior
> where everyone waits their turn in line.  After this patch we can end
> up in an unfair situation where one thread might be continuously
> starved because it is only doing a try-lock and is thus never added
> to the wait line.

I see the point.  Thanks for explaining it.

> 
> 
> > Or, is there a way to constantly starve the process that handles the
> > page fault that I've totally missed?
> 
> That's the discussion: with your change a process can constantly
> retry the page fault because it never gets the lock on a page, so it
> can end up in an infinite fault retry.
> 
> Yes, it is unlikely to be infinite, but it can change how the kernel
> behaves for some workloads and thus impact existing users.

Yes, and even if anyone wants to change the behavior, it can be
changed later with a proper justification.  So it makes sense to me
to squash the above one-liner into this patch to keep the existing
page locking behavior.
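
[For clarity, an editorial sketch of __lock_page_or_retry() with that
one-liner squashed in, using the corrected !(flags & FAULT_FLAG_TRIED)
test; killable variants again omitted.  First attempt: be polite and
drop the mmap_sem.  Any retry: queue for the page lock with the
mmap_sem held, so the task waits its turn and cannot be starved.]

int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
			 unsigned int flags)
{
	if (!(flags & FAULT_FLAG_TRIED)) {	/* first attempt */
		if (flags & FAULT_FLAG_RETRY_NOWAIT)
			return 0;	/* mmap_sem NOT released here */
		up_read(&mm->mmap_sem);
		wait_on_page_locked(page);
		return 0;		/* caller sees VM_FAULT_RETRY */
	}
	__lock_page(page);	/* retry: wait in line, mmap_sem held */
	return 1;
}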

Thanks again,

Patch

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index 46e5e420ad2a..deae82bb83c1 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -169,7 +169,7 @@  do_page_fault(unsigned long address, unsigned long mmcsr,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
+			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
 			 * have already released it in __lock_page_or_retry
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 91492d244ea6..7f48b377028c 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -168,7 +168,6 @@  void do_page_fault(unsigned long address, struct pt_regs *regs)
 			}
 
 			if (fault & VM_FAULT_RETRY) {
-				flags &= ~FAULT_FLAG_ALLOW_RETRY;
 				flags |= FAULT_FLAG_TRIED;
 				goto retry;
 			}
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 743077d19669..377781d8491a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -342,9 +342,6 @@  do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 					regs, addr);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			* of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 744d6451ea83..8a26e03fc2bf 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -510,12 +510,7 @@  static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 			return 0;
 		}
 
-		/*
-		 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of
-		 * starvation.
-		 */
 		if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
-			mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			mm_flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index be10b441d9cc..576751597e77 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -115,7 +115,6 @@  void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 			else
 				current->min_flt++;
 			if (fault & VM_FAULT_RETRY) {
-				flags &= ~FAULT_FLAG_ALLOW_RETRY;
 				flags |= FAULT_FLAG_TRIED;
 				goto retry;
 			}
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 62c2d39d2bed..9de95d39935e 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -189,7 +189,6 @@  ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index d9808a807ab8..b1b2109e4ab4 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -162,9 +162,6 @@  int do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 4fd2dbd0c5ca..05a4847ac0bf 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -236,7 +236,6 @@  void do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 92374fd091d2..9953b5b571df 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -178,7 +178,6 @@  static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 			tsk->min_flt++;
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index 72461745d3e1..f0b775cb5cdf 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -237,7 +237,6 @@  void do_page_fault(unsigned long entry, unsigned long addr,
 		else
 			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index 5939434a31ae..9dd1c51acc22 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -158,9 +158,6 @@  asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 873ecb5d82d7..ff92c5674781 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -185,7 +185,6 @@  asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index 29422eec329d..7d3e96a9a7ab 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -327,8 +327,6 @@  void do_page_fault(struct pt_regs *regs, unsigned long code,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
-
 			/*
 			 * No need to up_read(&mm->mmap_sem) as we would
 			 * have already released it in __lock_page_or_retry
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 8bc0d091f13c..8bdc7e75d2e5 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -569,11 +569,6 @@  static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	if (unlikely(fault & VM_FAULT_RETRY)) {
 		/* We retry only once */
 		if (flags & FAULT_FLAG_ALLOW_RETRY) {
-			/*
-			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation.
-			 */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 			if (!signal_pending(current))
 				goto retry;
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 4fc8d746bec3..aad2c0557d2f 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -154,11 +154,6 @@  asmlinkage void do_page_fault(struct pt_regs *regs)
 				      1, regs, addr);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			/*
-			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation.
-			 */
-			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 19b4fb2fafab..819f87169ee1 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -537,10 +537,7 @@  static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 				fault = VM_FAULT_PFAULT;
 				goto out_up;
 			}
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~(FAULT_FLAG_ALLOW_RETRY |
-				   FAULT_FLAG_RETRY_NOWAIT);
+			flags &= ~FAULT_FLAG_RETRY_NOWAIT;
 			flags |= FAULT_FLAG_TRIED;
 			down_read(&mm->mmap_sem);
 			goto retry;
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index baf5d73df40c..cd710e2d7c57 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -498,7 +498,6 @@  asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 				      regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index a2c83104fe35..6735cd1c09b9 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -261,7 +261,6 @@  asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 				      1, regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index cad71ec5c7b3..28d5b4d012c6 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -459,7 +459,6 @@  asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 				      1, regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index 09baf37b65b9..c63fc292aea0 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -99,7 +99,6 @@  int handle_page_fault(unsigned long address, unsigned long ip,
 			else
 				current->min_flt++;
 			if (fault & VM_FAULT_RETRY) {
-				flags &= ~FAULT_FLAG_ALLOW_RETRY;
 				flags |= FAULT_FLAG_TRIED;
 
 				goto retry;
diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 3611f19234a1..fdf577956f5f 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -260,12 +260,8 @@  static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 			tsk->maj_flt++;
 		else
 			tsk->min_flt++;
-		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			* of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
+		if (fault & VM_FAULT_RETRY)
 			goto retry;
-		}
 	}
 
 	up_read(&mm->mmap_sem);
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index b94ef0c2b98c..645b1365a72d 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1431,7 +1431,6 @@  void do_user_addr_fault(struct pt_regs *regs,
 	if (unlikely(fault & VM_FAULT_RETRY)) {
 		/* Retry at most once */
 		if (flags & FAULT_FLAG_ALLOW_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 			if (!signal_pending(tsk))
 				goto retry;
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 792dad5e2f12..7cd55f2d66c9 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -128,7 +128,6 @@  void do_page_fault(struct pt_regs *regs)
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would