
[v10,00/10] Add support for SVM atomics in Nouveau

Message ID: 20210607075855.5084-1-apopple@nvidia.com

Message

Alistair Popple June 7, 2021, 7:58 a.m. UTC
Hi Andrew,

This is an update to address some comments on the previous version of
this series. Most are code comment updates, although there were a couple
of code changes as well. The most significant are:

 - Re-introduce the check of VM_LOCKED under the PTL in
   page_mlock_one(). This was present in an earlier version of the series
   but removed because we thought it was redundant. However, Shakeel
   provided some background making it clear that it is needed.

 - Reworked the return codes in copy_pte_range() based on suggestions
   from Peter Xu to hopefully make the code clearer and less error-prone.

 - Integrated a fix to the Nouveau code reported by Colin King.

As discussed, to minimise impact I have also made this dependent on
CONFIG_DEVICE_PRIVATE. Hopefully these changes don't break any other
series that may have been based on the previous version. I see there has
been some discussion from Hugh and others around patch ordering, so let
me know if you need me to rebase these onto a different branch.

Introduction
============

Some devices have features such as atomic PTE bits that can be used to
implement atomic access to system memory. To support atomic operations on
a shared virtual memory page, such a device needs access to that page
which excludes the CPU. This series introduces a mechanism to temporarily
unmap pages, granting a device exclusive access to them.

These changes are required to support OpenCL atomic operations in Nouveau
on shared virtual memory (SVM) regions allocated with the
CL_MEM_SVM_ATOMICS clSVMAlloc flag. A more complete description of the
OpenCL SVM feature is available at
https://www.khronos.org/registry/OpenCL/specs/3.0-unified/html/OpenCL_API.html#_shared_virtual_memory
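
For illustration only, allocating such a region on the host side looks
roughly like this. It is not part of the series; alloc_atomic_svm() is a
hypothetical helper, and the usual context setup, error handling and the
device kernel are assumed to exist elsewhere:

    #include <CL/cl.h>
    #include <stdatomic.h>

    /* Allocate an SVM buffer that both the CPU and the GPU may update
     * atomically. Requires an OpenCL 2.0+ context on a device that
     * reports CL_DEVICE_SVM_ATOMICS in CL_DEVICE_SVM_CAPABILITIES. */
    static atomic_int *alloc_atomic_svm(cl_context ctx, size_t count)
    {
            return clSVMAlloc(ctx,
                              CL_MEM_READ_WRITE |
                              CL_MEM_SVM_FINE_GRAIN_BUFFER |
                              CL_MEM_SVM_ATOMICS,
                              count * sizeof(atomic_int), 0);
    }

Both the host and the device can then perform atomic operations on the
returned pointer; without this series Nouveau has no way to grant the GPU
the exclusive access those atomics require.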

Implementation
==============

Exclusive device access is implemented by adding a new swap entry type
(SWP_DEVICE_EXCLUSIVE) which is similar to a migration entry. The main
difference is that on fault the fault handler immediately restores the
original entry instead of waiting, as it would for a migration entry.

Restoring the entry triggers calls to MMU notifiers, which allows a device
driver to revoke the atomic access permission from the GPU prior to the
CPU finalising the entry.
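
As a rough sketch, the intended driver-side usage looks something like the
following. This is heavily simplified from patches 7 and 10: the notifier
ops, error handling and timeouts are omitted, and ops, mm, addr, owner,
driver_lock and program_device_ptes() are placeholders for whatever the
driver actually uses:

    struct mmu_interval_notifier notifier;
    struct page *page;
    unsigned long seq;
    int ret;

    /* Register an interval notifier first so any racing CPU fault that
     * restores the PTE is guaranteed to be seen. */
    mmu_interval_notifier_insert(&notifier, mm, addr, PAGE_SIZE, &ops);

    while (true) {
            seq = mmu_interval_read_begin(&notifier);

            /* Replace the CPU PTE with a device exclusive swap entry.
             * On success the page is returned locked with a reference. */
            mmap_read_lock(mm);
            ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
                                              &page, owner);
            mmap_read_unlock(mm);
            if (ret <= 0)
                    break;

            mutex_lock(&driver_lock);
            if (mmu_interval_read_retry(&notifier, seq)) {
                    /* An invalidation (eg. a CPU fault restoring the
                     * entry) raced with us - drop the page and retry. */
                    mutex_unlock(&driver_lock);
                    unlock_page(page);
                    put_page(page);
                    continue;
            }

            /* Safe to map the page into the device with atomic access
             * enabled; the notifier revokes it if the CPU takes the page
             * back. */
            program_device_ptes(addr, page);
            mutex_unlock(&driver_lock);
            unlock_page(page);
            put_page(page);
            break;
    }

    mmu_interval_notifier_remove(&notifier);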

Patches
=======

Patches 1 & 2 refactor existing migration and device private entry
functions.

Patches 3 & 4 rework try_to_unmap_one() by splitting out unrelated
functionality into separate functions - try_to_migrate_one() and
try_to_munlock_one().

Patch 5 renames some existing code but does not introduce functionality.

Patch 6 is a small clean-up to swap entry handling in copy_pte_range().

Patch 7 contains the bulk of the implementation for device exclusive
memory.

Patch 8 contains some additions to the HMM selftests to ensure everything
works as expected.

Patch 9 is a cleanup for the Nouveau SVM implementation.

Patch 10 contains the implementation of atomic access for the Nouveau
driver.

Testing
=======

This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
which checks that GPU atomic accesses to system memory are atomic. Without
this series the test fails, as there is no way of write-protecting the page
mapping, which results in the device clobbering CPU writes. For reference
the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/

Further testing has been performed by adding support for testing exclusive
access to the hmm-tests kselftests.


Alistair Popple (10):
  mm: Remove special swap entry functions
  mm/swapops: Rework swap entry manipulation code
  mm/rmap: Split try_to_munlock from try_to_unmap
  mm/rmap: Split migration into its own function
  mm: Rename migrate_pgmap_owner
  mm/memory.c: Allow different return codes for copy_nonpresent_pte()
  mm: Device exclusive memory access
  mm: Selftests for exclusive device memory
  nouveau/svm: Refactor nouveau_range_fault
  nouveau/svm: Implement atomic SVM access

 Documentation/vm/hmm.rst                      |  19 +-
 Documentation/vm/unevictable-lru.rst          |  33 +-
 arch/s390/mm/pgtable.c                        |   2 +-
 drivers/gpu/drm/nouveau/include/nvif/if000c.h |   1 +
 drivers/gpu/drm/nouveau/nouveau_svm.c         | 156 ++++-
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h |   1 +
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c    |   6 +
 fs/proc/task_mmu.c                            |  23 +-
 include/linux/mmu_notifier.h                  |  26 +-
 include/linux/rmap.h                          |  11 +-
 include/linux/swap.h                          |  13 +-
 include/linux/swapops.h                       | 123 ++--
 lib/test_hmm.c                                | 126 +++-
 lib/test_hmm_uapi.h                           |   2 +
 mm/debug_vm_pgtable.c                         |  12 +-
 mm/hmm.c                                      |  12 +-
 mm/huge_memory.c                              |  45 +-
 mm/hugetlb.c                                  |  10 +-
 mm/memcontrol.c                               |   2 +-
 mm/memory.c                                   | 173 ++++-
 mm/migrate.c                                  |  51 +-
 mm/mlock.c                                    |  12 +-
 mm/mprotect.c                                 |  18 +-
 mm/page_vma_mapped.c                          |  15 +-
 mm/rmap.c                                     | 602 +++++++++++++++---
 tools/testing/selftests/vm/hmm-tests.c        | 158 +++++
 26 files changed, 1328 insertions(+), 324 deletions(-)

Comments

Peter Xu June 11, 2021, 3:01 p.m. UTC | #1
On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to explicit
> > > > > > call split_huge_pmd_address(), afaict.  Since both of them use __split_huge_pmd()
> > > > > > internally which will generate that unwanted CLEAR notify.
> > > > >
> > > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > > which will always CLEAR. However gup only calls split_huge_pmd_address() if it
> > > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > > >
> > > > >       if (likely(!pmd_trans_huge(pmdval)))
> > > > >               return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > > >
> > > > > So I don't think we have a problem here.
> > > >
> > > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right?  I
> > > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should return
> > > > true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
> > > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> > >
> > > That seems correct - if the thp is not mapped with a pmd we won't split and we
> > > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that case it
> > > is fine - we will retry, but the retry will won't CLEAR because the pmd has
> > > already been split.
> > 
> > Aha!
> > 
> > >
> > > The issue arises with doing it unconditionally in make device exclusive is that
> > > you *always* CLEAR even if there is no thp pmd to split. Or at least that's my
> > > understanding, please let me know if it doesn't make sense.
> > 
> > Exactly.  But if you see what I meant here, even if it can work like this, it
> > sounds still fragile, isn't it?  I just feel something is slightly off there..
> > 
> > IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> > performance, afaict, because if it's not a thp even without locking, then it
> > won't be, so further __split_huge_pmd() is not necessary.
> > 
> > IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> > __split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
> > one to be broken with that seems-to-be-irrelevant change I'm afraid..
> 
> Well I would argue the performance of memory notifiers is becoming increasingly
> important, and a change that causes them to be called unnecessarily is
> therefore not very legal. Likely the correct fix here is to optimise
> __split_huge_pmd() to only call the notifier if it's actually going to split a
> pmd. As you said though that's a completely different story which I think would
> be best done as a separate series.

Right, maybe I can look a bit more into that later; but my whole point was to
express that one functionality shouldn't depend on such a trivial detail of
implementation of other modules (thp split in this case).

> 
> > This lets me goes back a step to think about why do we need this notifier at
> > all to cover this whole range of make_device_exclusive() procedure..
> > 
> > What I am thinking is, we're afraid some CPU accesses this page so the pte got
> > quickly restored when device atomic operation is carrying on.  Then with this
> > notifier we'll be able to cancel it.  Makes perfect sense.
> > 
> > However do we really need to register this notifier so early?  The thing is the
> > GPU driver still has all the page locks, so even if there's a race to restore
> > the ptes, they'll block at taking the page lock until the driver releases it.
> > 
> > IOW, I'm wondering whether the "non-fragile" way to do this is not do
> > mmu_interval_notifier_insert() that early: what if we register that notifier
> > after make_device_exclusive_range() returns but before page_unlock() somehow?
> > So before page_unlock(), race is protected fully by the lock itself; after
> > that, it's done by mmu notifier.  Then maybe we don't need to worry about all
> > these notifications during marking exclusive (while we shouldn't)?
> 
> The notifier is needed to protect against races with pte changes. Once a page
> has been marked for exclusive access the driver will update it's page tables to
> allow atomic access to the page. However in the meantime the page could become
> unmapped entirely or write protected.
> 
> As I understand things the page lock won't protect against these kind of pte
> changes, hence the need for mmu_interval_read_begin/retry which allows the
> driver to hold a mutex protecting against invalidations via blocking the
> notifier until the device page tables have been updated.

Indeed, I suppose you mean change_pte_range() and zap_pte_range()
correspondingly.

Do you think we can restore pte right before wr-protect or zap?  Then all
things serializes with page lock (btw: it's already an insane userspace to
either unmap a page or wr-protect a page if it knows the device is using it!).
If these are the only two cases, it still sounds a cleaner approach to me than
the current approach.

This also reminded me that right now the cpu pgtable recovery is lazy - it
happens either from fork() or a cpu page fault.  Even after device finished
using it, swap ptes keep there.

What if the device tries to do atomic op on the same page twice?  I am not sure
whether it means we may also want to teach both GUP (majorly follow_page_pte()
for now before pmd support) and process of page_make_device_exclusive() with
understanding the device exclusive entries too?  Another option seems to be
restoring pte after device finish using it, as long as the device knows when.
Alistair Popple June 15, 2021, 3:08 a.m. UTC | #2
On Saturday, 12 June 2021 1:01:42 AM AEST Peter Xu wrote:
> On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> > On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to explicit
> > > > > > > call split_huge_pmd_address(), afaict.  Since both of them use __split_huge_pmd()
> > > > > > > internally which will generate that unwanted CLEAR notify.
> > > > > >
> > > > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > > > which will always CLEAR. However gup only calls split_huge_pmd_address() if it
> > > > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > > > >
> > > > > >       if (likely(!pmd_trans_huge(pmdval)))
> > > > > >               return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > > > >
> > > > > > So I don't think we have a problem here.
> > > > >
> > > > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right?  I
> > > > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should return
> > > > > true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
> > > > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> > > >
> > > > That seems correct - if the thp is not mapped with a pmd we won't split and we
> > > > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that case it
> > > > is fine - we will retry, but the retry will won't CLEAR because the pmd has
> > > > already been split.
> > >
> > > Aha!
> > >
> > > >
> > > > The issue arises with doing it unconditionally in make device exclusive is that
> > > > you *always* CLEAR even if there is no thp pmd to split. Or at least that's my
> > > > understanding, please let me know if it doesn't make sense.
> > >
> > > Exactly.  But if you see what I meant here, even if it can work like this, it
> > > sounds still fragile, isn't it?  I just feel something is slightly off there..
> > >
> > > IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> > > performance, afaict, because if it's not a thp even without locking, then it
> > > won't be, so further __split_huge_pmd() is not necessary.
> > >
> > > IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> > > __split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
> > > one to be broken with that seems-to-be-irrelevant change I'm afraid..
> >
> > Well I would argue the performance of memory notifiers is becoming increasingly
> > important, and a change that causes them to be called unnecessarily is
> > therefore not very legal. Likely the correct fix here is to optimise
> > __split_huge_pmd() to only call the notifier if it's actually going to split a
> > pmd. As you said though that's a completely different story which I think would
> > be best done as a separate series.
> 
> Right, maybe I can look a bit more into that later; but my whole point was to
> express that one functionality shouldn't depend on such a trivial detail of
> implementation of other modules (thp split in this case).
> 
> >
> > > This lets me goes back a step to think about why do we need this notifier at
> > > all to cover this whole range of make_device_exclusive() procedure..
> > >
> > > What I am thinking is, we're afraid some CPU accesses this page so the pte got
> > > quickly restored when device atomic operation is carrying on.  Then with this
> > > notifier we'll be able to cancel it.  Makes perfect sense.
> > >
> > > However do we really need to register this notifier so early?  The thing is the
> > > GPU driver still has all the page locks, so even if there's a race to restore
> > > the ptes, they'll block at taking the page lock until the driver releases it.
> > >
> > > IOW, I'm wondering whether the "non-fragile" way to do this is not do
> > > mmu_interval_notifier_insert() that early: what if we register that notifier
> > > after make_device_exclusive_range() returns but before page_unlock() somehow?
> > > So before page_unlock(), race is protected fully by the lock itself; after
> > > that, it's done by mmu notifier.  Then maybe we don't need to worry about all
> > > these notifications during marking exclusive (while we shouldn't)?
> >
> > The notifier is needed to protect against races with pte changes. Once a page
> > has been marked for exclusive access the driver will update it's page tables to
> > allow atomic access to the page. However in the meantime the page could become
> > unmapped entirely or write protected.
> >
> > As I understand things the page lock won't protect against these kind of pte
> > changes, hence the need for mmu_interval_read_begin/retry which allows the
> > driver to hold a mutex protecting against invalidations via blocking the
> > notifier until the device page tables have been updated.
> 
> Indeed, I suppose you mean change_pte_range() and zap_pte_range()
> correspondingly.

Right.

> Do you think we can restore pte right before wr-protect or zap?  Then all
> things serializes with page lock (btw: it's already an insane userspace to
> either unmap a page or wr-protect a page if it knows the device is using it!).
> If these are the only two cases, it still sounds a cleaner approach to me than
> the current approach.

Perhaps we could but it would make {zap|change}_pte_range() much more complex as
we can't sleep taking the page lock whilst holding the ptl, so we'd have to
implement a retry scheme similar to copy_pte_range() in both those functions as
well. Given mmu_interval_read_begin/retry was IMHO added to solve this type of
problem (freezing pte's to safely program device pte's) it seems like the
better option rather than adding more complex code to generic mm paths.

It's also worth noting i915 seems to use mmu_interval_read_begin/retry() with
gup to sync mappings so this isn't an entirely new concept. I'm not an expert
in that driver but I imagine changing gup to generate unconditional mmu notifier
invalidates would also cause issues there. So I think overall this is the
cleanest solution as it reduces the amount of code (particularly in generic mm
paths).

> This also reminded me that right now the cpu pgtable recovery is lazy - it
> happens either from fork() or a cpu page fault.  Even after device finished
> using it, swap ptes keep there.
> 
> What if the device tries to do atomic op on the same page twice?  I am not sure
> whether it means we may also want to teach both GUP (majorly follow_page_pte()
> for now before pmd support) and process of page_make_device_exclusive() with
> understanding the device exclusive entries too?  Another option seems to be
> restoring pte after device finish using it, as long as the device knows when.

I don't think we need to complicate follow_page_pte() with knowledge of
exclusive entries. GUP will just restore the original pte via the normal
fault path - follow_page_pte() will return NULL for an exclusive entry,
resulting in handle_mm_fault() getting called via faultin_page(). Therefore
a driver calling make_device_exclusive() twice on the same page won't cause an
issue. Also the device shouldn't fault on subsequent accesses if the exclusive
entry is still in place anyway.
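
To make that concrete, the CPU-side restore ends up looking roughly like
the sketch below. It is paraphrased from patch 7 inside do_swap_page();
declarations, the notifier range setup and the exact page locking are
omitted, so treat the details as approximate:

    /* A device exclusive entry is handled by locking the page, notifying
     * the driver and putting the original pte back, rather than by
     * waiting as would be done for a migration entry. */
    if (is_device_exclusive_entry(entry)) {
            page = pfn_swap_entry_to_page(entry);

            lock_page(page);
            mmu_notifier_invalidate_range_start(&range);

            pte = pte_offset_map_lock(mm, pmd, address, &ptl);
            if (pte_same(*pte, orig_pte))
                    restore_exclusive_pte(vma, page, address, pte);
            pte_unmap_unlock(pte, ptl);

            unlock_page(page);
            mmu_notifier_invalidate_range_end(&range);
            return 0;
    }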

We can't restore the pte when the device is finished with it because there is
no way of knowing when a device is done using an exclusive entry - device
pte's work much the same as cpu pte's in that regard.

 - Alistair

> --
> Peter Xu
>
Peter Xu June 15, 2021, 4:25 p.m. UTC | #3
On Tue, Jun 15, 2021 at 01:08:11PM +1000, Alistair Popple wrote:
> On Saturday, 12 June 2021 1:01:42 AM AEST Peter Xu wrote:
> > On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:
> > > On Friday, 11 June 2021 11:00:34 AM AEST Peter Xu wrote:
> > > > On Fri, Jun 11, 2021 at 09:17:14AM +1000, Alistair Popple wrote:
> > > > > On Friday, 11 June 2021 9:04:19 AM AEST Peter Xu wrote:
> > > > > > On Fri, Jun 11, 2021 at 12:21:26AM +1000, Alistair Popple wrote:
> > > > > > > > Hmm, the thing is.. to me FOLL_SPLIT_PMD should have similar effect to explicit
> > > > > > > > call split_huge_pmd_address(), afaict.  Since both of them use __split_huge_pmd()
> > > > > > > > internally which will generate that unwanted CLEAR notify.
> > > > > > >
> > > > > > > Agree that gup calls __split_huge_pmd() via split_huge_pmd_address()
> > > > > > > which will always CLEAR. However gup only calls split_huge_pmd_address() if it
> > > > > > > finds a thp pmd. In follow_pmd_mask() we have:
> > > > > > >
> > > > > > >       if (likely(!pmd_trans_huge(pmdval)))
> > > > > > >               return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > > > > > >
> > > > > > > So I don't think we have a problem here.
> > > > > >
> > > > > > Sorry I didn't follow here..  We do FOLL_SPLIT_PMD after this check, right?  I
> > > > > > mean, if it's a thp for the current mm, afaict pmd_trans_huge() should return
> > > > > > true above, so we'll skip follow_page_pte(); then we'll check FOLL_SPLIT_PMD
> > > > > > and do the split, then the CLEAR notify.  Hmm.. Did I miss something?
> > > > >
> > > > > That seems correct - if the thp is not mapped with a pmd we won't split and we
> > > > > won't CLEAR. If there is a thp pmd we will split and CLEAR, but in that case it
> > > > > is fine - we will retry, but the retry will won't CLEAR because the pmd has
> > > > > already been split.
> > > >
> > > > Aha!
> > > >
> > > > >
> > > > > The issue arises with doing it unconditionally in make device exclusive is that
> > > > > you *always* CLEAR even if there is no thp pmd to split. Or at least that's my
> > > > > understanding, please let me know if it doesn't make sense.
> > > >
> > > > Exactly.  But if you see what I meant here, even if it can work like this, it
> > > > sounds still fragile, isn't it?  I just feel something is slightly off there..
> > > >
> > > > IMHO split_huge_pmd() checked pmd before calling __split_huge_pmd() for
> > > > performance, afaict, because if it's not a thp even without locking, then it
> > > > won't be, so further __split_huge_pmd() is not necessary.
> > > >
> > > > IOW, it's very legal if someday we'd like to let split_huge_pmd() call
> > > > __split_huge_pmd() directly, then AFAIU device exclusive API will be the 1st
> > > > one to be broken with that seems-to-be-irrelevant change I'm afraid..
> > >
> > > Well I would argue the performance of memory notifiers is becoming increasingly
> > > important, and a change that causes them to be called unnecessarily is
> > > therefore not very legal. Likely the correct fix here is to optimise
> > > __split_huge_pmd() to only call the notifier if it's actually going to split a
> > > pmd. As you said though that's a completely different story which I think would
> > > be best done as a separate series.
> > 
> > Right, maybe I can look a bit more into that later; but my whole point was to
> > express that one functionality shouldn't depend on such a trivial detail of
> > implementation of other modules (thp split in this case).
> > 
> > >
> > > > This lets me goes back a step to think about why do we need this notifier at
> > > > all to cover this whole range of make_device_exclusive() procedure..
> > > >
> > > > What I am thinking is, we're afraid some CPU accesses this page so the pte got
> > > > quickly restored when device atomic operation is carrying on.  Then with this
> > > > notifier we'll be able to cancel it.  Makes perfect sense.
> > > >
> > > > However do we really need to register this notifier so early?  The thing is the
> > > > GPU driver still has all the page locks, so even if there's a race to restore
> > > > the ptes, they'll block at taking the page lock until the driver releases it.
> > > >
> > > > IOW, I'm wondering whether the "non-fragile" way to do this is not do
> > > > mmu_interval_notifier_insert() that early: what if we register that notifier
> > > > after make_device_exclusive_range() returns but before page_unlock() somehow?
> > > > So before page_unlock(), race is protected fully by the lock itself; after
> > > > that, it's done by mmu notifier.  Then maybe we don't need to worry about all
> > > > these notifications during marking exclusive (while we shouldn't)?
> > >
> > > The notifier is needed to protect against races with pte changes. Once a page
> > > has been marked for exclusive access the driver will update it's page tables to
> > > allow atomic access to the page. However in the meantime the page could become
> > > unmapped entirely or write protected.
> > >
> > > As I understand things the page lock won't protect against these kind of pte
> > > changes, hence the need for mmu_interval_read_begin/retry which allows the
> > > driver to hold a mutex protecting against invalidations via blocking the
> > > notifier until the device page tables have been updated.
> > 
> > Indeed, I suppose you mean change_pte_range() and zap_pte_range()
> > correspondingly.
> 
> Right.
> 
> > Do you think we can restore pte right before wr-protect or zap?  Then all
> > things serializes with page lock (btw: it's already an insane userspace to
> > either unmap a page or wr-protect a page if it knows the device is using it!).
> > If these are the only two cases, it still sounds a cleaner approach to me than
> > the current approach.
> 
> Perhaps we could but it would make {zap|change}_pte_range() much more complex as
> we can't sleep taking the page lock whilst holding the ptl, so we'd have to
> implement a retry scheme similar to copy_pte_range() in both those functions as
> well.

Yes, but shouldn't be hard to do so, imho. E.g., see when __tlb_remove_page()
returns true in zap_pte_range(), so we already did something like that.  IMHO
it's not uncommon to have such facilities as we do have requirements to sleep
during a spinlock critical section for a lot of places in mm, so we release
them when needed and retake.

> Given mmu_interval_read_begin/retry was IMHO added to solve this type of
> problem (freezing pte's to safely program device pte's) it seems like the
> better option rather than adding more complex code to generic mm paths.
> 
> It's also worth noting i915 seems to use mmu_interval_read_begin/retry() with
> gup to sync mappings so this isn't an entirely new concept. I'm not an expert
> in that driver but I imagine changing gup to generate unconditional mmu notifier
> invalidates would also cause issues there. So I think overall this is the
> cleanest solution as it reduces the amount of code (particularly in generic mm
> paths).

I could be wrong somewhere, but to me depending on mmu notifiers being
"accurate" in general is fragile..

Take an example of change_pte_range(), which will generate PROTECTION_VMA
notifies.  Let's imagine a userspace calls mprotect() e.g. twice or even more
times with the same PROT_* and upon the same region, we know very possibly the
2nd,3rd,... calls will generate those notifies with totally no change to the
pgtable at all as they're all done on the 1st shot.  However we'll generate mmu
notifies anyways for the 2nd,3rd,... calls.  It means mmu notifiers should
really be tolerant of false positives as it does happen, and such thing can be
triggered even from userspace system calls very easily like this.  That's why I
think any kernel facility that depends on mmu notifiers being accurate is
probably not the right approach..

But yeah as you said I think it's working as is with the series (I think the
follow_pmd_mask() checking pmd_trans_huge before calling split_huge_pmd is a
double safety-net for it, so even if the GUP split_huge_pmd got replaced with
__split_huge_pmd it should still work with the one-retry logic), not sure
whether it matters a lot, as it's not common mm path; I think I'll step back so
Andrew could still pick it up as wish, I'm just still not fully convinced it's
the best solution to have for a long term to depend on that..

> 
> > This also reminded me that right now the cpu pgtable recovery is lazy - it
> > happens either from fork() or a cpu page fault.  Even after device finished
> > using it, swap ptes keep there.
> > 
> > What if the device tries to do atomic op on the same page twice?  I am not sure
> > whether it means we may also want to teach both GUP (majorly follow_page_pte()
> > for now before pmd support) and process of page_make_device_exclusive() with
> > understanding the device exclusive entries too?  Another option seems to be
> > restoring pte after device finish using it, as long as the device knows when.
> 
> I don't think we need to complicate follow_page_pte() with knowledge of
> exclusive entries. GUP will just restore the original pte via the normal
> fault path - follow_page_pte() will return NULL for an exclusive entry,
> resulting in handle_mm_fault() getting called via faultin_page(). Therefore
> a driver calling make_device_exclusive() twice on the same page won't cause an
> issue. Also the device shouldn't fault on subsequent accesses if the exclusive
> entry is still in place anyway.

Right, looks good then.

> 
> We can't restore the pte when the device is finished with it because there is
> no way of knowing when a device is done using an exclusive entry - device
> pte's work much the same as cpu pte's in that regard.

I see, I feel like I understand how it works slightly better now, thanks.

One last pure question: I see nouveau_atomic_range_fault() will call the other
nvif_object_ioctl() which seems to do the device pgtable mapping, am I right?
Then I see the notifier is quickly removed before nouveau_atomic_range_fault()
returns.  What happens if CPU access happens after mmu notifier removed?  Or is
it not possible to happen?
Alistair Popple June 16, 2021, 2:47 a.m. UTC | #4
On Wednesday, 16 June 2021 2:25:09 AM AEST Peter Xu wrote:
> On Tue, Jun 15, 2021 at 01:08:11PM +1000, Alistair Popple wrote:
> > On Saturday, 12 June 2021 1:01:42 AM AEST Peter Xu wrote:
> > > On Fri, Jun 11, 2021 at 01:43:20PM +1000, Alistair Popple wrote:

[...]

> > > Do you think we can restore pte right before wr-protect or zap?  Then all
> > > things serializes with page lock (btw: it's already an insane userspace to
> > > either unmap a page or wr-protect a page if it knows the device is using it!).
> > > If these are the only two cases, it still sounds a cleaner approach to me than
> > > the current approach.
> >
> > Perhaps we could but it would make {zap|change}_pte_range() much more complex as
> > we can't sleep taking the page lock whilst holding the ptl, so we'd have to
> > implement a retry scheme similar to copy_pte_range() in both those functions as
> > well.
> 
> Yes, but shouldn't be hard to do so, imho. E.g., see when __tlb_remove_page()
> returns true in zap_pte_range(), so we already did something like that.  IMHO
> it's not uncommon to have such facilities as we do have requirements to sleep
> during a spinlock critical section for a lot of places in mm, so we release
> them when needed and retake.

Agreed, it's not hard to do and it's a common enough pattern. However, we
decided that for such a specific application this (trying to take the lock,
or dropping locks and retrying) was too complex for copy_pte_range(), so it
seems like the same should apply here.

Admittedly copy_pte_range() already had several other retry paths, so perhaps
it was adding yet another one that made it relatively more complex. Overall I
have been trying to minimise the impact on core mm code for this feature, and
adding this pattern to zap_pte_range(), etc. would make it more complex for
any future addition that requires locks to be dropped and retried, so I guess
in that sense it is no different.

> > Given mmu_interval_read_begin/retry was IMHO added to solve this type of
> > problem (freezing pte's to safely program device pte's) it seems like the
> > better option rather than adding more complex code to generic mm paths.
> >
> > It's also worth noting i915 seems to use mmu_interval_read_begin/retry() with
> > gup to sync mappings so this isn't an entirely new concept. I'm not an expert
> > in that driver but I imagine changing gup to generate unconditional mmu notifier
> > invalidates would also cause issues there. So I think overall this is the
> > cleanest solution as it reduces the amount of code (particularly in generic mm
> > paths).
> 
> I could be wrong somewhere, but to me depending on mmu notifiers being
> "accurate" in general is fragile..
> 
> Take an example of change_pte_range(), which will generate PROTECTION_VMA
> notifies.  Let's imagine a userspace calls mprotect() e.g. twice or even more
> times with the same PROT_* and upon the same region, we know very possibly the
> 2nd,3rd,... calls will generate those notifies with totally no change to the
> pgtable at all as they're all done on the 1st shot.  However we'll generate mmu
> notifies anyways for the 2nd,3rd,... calls.  It means mmu notifiers should
> really be tolerant of false positives as it does happen, and such thing can be
> triggered even from userspace system calls very easily like this.  That's why I
> think any kernel facility that depends on mmu notifiers being accurate is
> probably not the right approach..

Argh, thanks. I was focused on the specifics of this series but I think I
understand your point better now - that as a more general principle we can't
assume notifiers are accurate.

> But yeah as you said I think it's working as is with the series (I think the
> follow_pmd_mask() checking pmd_trans_huge before calling split_huge_pmd is a
> double safety-net for it, so even if the GUP split_huge_pmd got replaced with
> __split_huge_pmd it should still work with the one-retry logic), not sure
> whether it matters a lot, as it's not common mm path; I think I'll step back so
> Andrew could still pick it up as wish, I'm just still not fully convinced it's
> the best solution to have for a long term to depend on that..

Ok, thanks. I guess you have somewhat convinced me - depending on it for the
long term might be a bit fragile. However, as you say, the current
implementation does work, and I am starting to look at support for PMD and
file-backed pages which require changes here anyway. So I am hoping Andrew
can still take this (once rebased) as it would be easier for me to do those
changes if the basic support and clean-ups were already in place.

> > > This also reminded me that right now the cpu pgtable recovery is lazy - it
> > > happens either from fork() or a cpu page fault.  Even after device finished
> > > using it, swap ptes keep there.
> > >
> > > What if the device tries to do atomic op on the same page twice?  I am not sure
> > > whether it means we may also want to teach both GUP (majorly follow_page_pte()
> > > for now before pmd support) and process of page_make_device_exclusive() with
> > > understanding the device exclusive entries too?  Another option seems to be
> > > restoring pte after device finish using it, as long as the device knows when.
> >
> > I don't think we need to complicate follow_page_pte() with knowledge of
> > exclusive entries. GUP will just restore the original pte via the normal
> > fault path - follow_page_pte() will return NULL for an exclusive entry,
> > resulting in handle_mm_fault() getting called via faultin_page(). Therefore
> > a driver calling make_device_exclusive() twice on the same page won't cause an
> > issue. Also the device shouldn't fault on subsequent accesses if the exclusive
> > entry is still in place anyway.
> 
> Right, looks good then.
> 
> >
> > We can't restore the pte when the device is finished with it because there is
> > no way of knowing when a device is done using an exclusive entry - device
> > pte's work much the same as cpu pte's in that regard.
> 
> I see, I feel like I understand how it works slightly better now, thanks.

Feel free to ask if there are any more details you want, but there's nothing too
magical going on here.

> One last pure question: I see nouveau_atomic_range_fault() will call the other
> nvif_object_ioctl() which seems to do the device pgtable mapping, am I right?

Correct - that installs the page table mapping on the GPU.

> Then I see the notifier is quickly removed before nouveau_atomic_range_fault()
> returns.  What happens if CPU access happens after mmu notifier removed?  Or is
> it not possible to happen?

So there are two notifiers registered - this one and a process wide notifier
(see nouveau_mn_ops). In this case the process wide notifier will get called
to invalidate the access when the CPU fault removes the device exclusive
entries.

 - Alistair

> --
> Peter Xu
>