[v9,0/3] mm/gup: disallow GUP writing to file-backed mappings by default

Message ID cover.1683235180.git.lstoakes@gmail.com (mailing list archive)

Message

Lorenzo Stoakes May 4, 2023, 9:27 p.m. UTC
Writing to file-backed mappings which require folio dirty tracking using
GUP is a fundamentally broken operation, as kernel write access to GUP
mappings does not adhere to the semantics expected by a file system.

A GUP caller uses the direct mapping to access the folio, which does not
cause write notify to trigger, nor does it enforce that the caller marks
the folio dirty.

The problem arises when, after an initial write to the folio, writeback
results in the folio being cleaned and then the caller, via the GUP
interface, writes to the folio again.

As a result of the use of this secondary, direct mapping to the folio, no
write notify will occur, and if the caller does mark the folio dirty, this
will be done unexpectedly.

For example, consider the following scenario:-

1. A folio is written to via GUP which write-faults the memory, notifying
   the file system and dirtying the folio.
2. Later, writeback is triggered, resulting in the folio being cleaned and
   the PTE being marked read-only.
3. The GUP caller writes to the folio, as it is mapped read/write via the
   direct mapping.
4. The GUP caller, now done with the page, unpins it and sets it dirty
   (though it does not have to).
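
In kernel code, the problematic caller pattern looks roughly like the
following (an illustrative sketch only - user_addr, data and len are
placeholders, error handling is omitted, and the write in step 3 could
equally come from device DMA):

    struct page *page;

    /* Step 1: write-faults the mapping, notifying the file system. */
    if (pin_user_pages_fast(user_addr, 1, FOLL_WRITE | FOLL_LONGTERM,
                            &page) != 1)
            return -EFAULT;

    /* Step 2 happens asynchronously: writeback cleans the folio and
     * write-protects the PTE. */

    /* Step 3: write via the kernel's direct mapping - no write notify. */
    memcpy(page_address(page), data, len);

    /* Step 4: unpin; 'true' marks the folio dirty, unexpectedly for the
     * file system. */
    unpin_user_pages_dirty_lock(&page, 1, true);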

This change updates both the PUP FOLL_LONGTERM slow and fast APIs. As
pin_user_pages_fast_only() does not exist, we can rely on a slightly
imperfect whitelisting in the PUP-fast case and fall back to the slow case
should this fail.

v9:
- Refactored vma_needs_dirty_tracking() and vma_wants_writenotify() to avoid
  duplicate check of shared writable/needs writenotify.
- Removed redundant comments.
- Improved vma_needs_dirty_tracking() commit message.
- Moved folio_fast_pin_allowed() into the CONFIG_HAVE_FAST_GUP block, as it
  is used by both the CONFIG_ARCH_HAS_PTE_SPECIAL and huge page cases, both
  of which are invoked under any CONFIG_HAVE_FAST_GUP configuration. Should
  fix mips/arm builds. (This check is sketched below, after these notes.)
- Permit pins of swap cache anon pages.
- Permit KSM anon pages.
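
For reference, the GUP-fast check these notes describe has roughly the
following shape (a simplified sketch of the approach, not the exact patch
code):

    static bool folio_fast_pin_allowed(struct folio *folio, unsigned int flags)
    {
            struct address_space *mapping;

            /* Only writable FOLL_LONGTERM pins are restricted. */
            if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
                (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
                    return true;

            /* hugetlb mappings do not require dirty tracking. */
            if (folio_test_hugetlb(folio))
                    return true;

            /*
             * GUP-fast runs with IRQs disabled, which holds off the folio
             * being freed via TLB shootdown, but the mapping may still be
             * truncated concurrently - load it exactly once.
             */
            mapping = READ_ONCE(folio->mapping);
            if (unlikely(!mapping))
                    return false;   /* unclear - defer to the slow path */

            /* Anon folios, including KSM and swap cache pages, are fine. */
            if ((unsigned long)mapping & PAGE_MAPPING_ANON)
                    return true;

            /* Any other mapping flag bits: defer to the slow path. */
            if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
                    return false;

            /* shmem does not require dirty tracking either. */
            if (shmem_mapping(mapping))
                    return true;

            /* Remaining file-backed folios must take the slow path. */
            return false;
    }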

v8:
- Fixed typo writeable -> writable.
- Fixed bug in writable_file_mapping_allowed() - must check combination of
  FOLL_PIN AND FOLL_LONGTERM not either/or.
- Updated vma_needs_dirty_tracking() to include write/shared to account for
  MAP_PRIVATE mappings.
- Move to open-coding the checks in folio_pin_allowed() so we can
  READ_ONCE() the mapping and avoid unexpected compiler loads. Rename to
  account for the fact that we now check flags here.
- Disallow mapping == NULL or mapping & PAGE_MAPPING_FLAGS other than
  anon. Defer to slow path.
- Perform GUP-fast check _after_ the lowest page table level is confirmed to
  be stable.
- Updated comments and commit message for final patch as per Jason's
  suggestions.
https://lore.kernel.org/all/cover.1683067198.git.lstoakes@gmail.com/

v7:
- Fixed very silly bug in writeable_file_mapping_allowed() inverting the
  logic.
- Removed unnecessary RCU lock code and replaced with adaptation of Peter's
  idea.
- Removed unnecessary open-coded folio_test_anon() in
  folio_longterm_write_pin_allowed() and restructured to generally permit
  NULL folio_mapping().
https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com/

v6:
- Rebased on latest mm-unstable as of 28th April 2023.
- Add PUP-fast check with handling for rcu-locked TLB shootdown to synchronise
  correctly.
- Split patch series into 3 to make it more digestible.
https://lore.kernel.org/all/cover.1682981880.git.lstoakes@gmail.com/

v5:
- Rebased on latest mm-unstable as of 25th April 2023.
- Some small refactorings suggested by John.
- Added an extended description of the problem in the comment around
  writeable_file_mapping_allowed() for clarity.
- Updated commit message as suggested by Mika and John.
https://lore.kernel.org/all/6b73e692c2929dc4613af711bdf92e2ec1956a66.1682638385.git.lstoakes@gmail.com/

v4:
- Split out vma_needs_dirty_tracking() from vma_wants_writenotify() to
  reduce duplication, and use this in the GUP check (see the sketch
  below). Note that both separately check vm_ops_needs_writenotify(), as
  the latter needs to test this before the vm_pgprot_modify() test; as a
  result vma_wants_writenotify() checks this twice, however it is such a
  small check this should not be egregious.
https://lore.kernel.org/all/3b92d56f55671a0389252379237703df6e86ea48.1682464032.git.lstoakes@gmail.com/
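
The rough shape of that split (a simplified sketch; the real code in
mm/mmap.c may differ in detail):

    static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
    {
            return vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite);
    }

    /* Used by the GUP check: do writes to this VMA require the file
     * system to track dirtying? */
    bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
    {
            /* Only shared, writable mappings can require dirty tracking. */
            if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) !=
                (VM_WRITE | VM_SHARED))
                    return false;

            /* Does the file system need to be notified of writes? */
            if (vm_ops_needs_writenotify(vma->vm_ops))
                    return true;

            /* A mapping capable of writeback also needs dirty tracking. */
            return vma->vm_file && vma->vm_file->f_mapping &&
                   mapping_can_writeback(vma->vm_file->f_mapping);
    }

vma_wants_writenotify() calls vm_ops_needs_writenotify() separately, hence
the duplicated (cheap) check mentioned above.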

v3:
- Rebased on latest mm-unstable as of 24th April 2023.
- Explicitly check whether file system requires folio dirtying. Note that
  vma_wants_writenotify() could not be used directly as it is very much focused
  on determining if the PTE r/w should be set (e.g. assuming private mapping
  does not require it as already set, soft dirty considerations).
- Tested code against shmem and hugetlb mappings - confirmed that these are not
  disallowed by the check.
- Eliminate FOLL_ALLOW_BROKEN_FILE_MAPPING flag and instead perform check only
  for FOLL_LONGTERM pins.
- As a result, limit check to internal GUP code.
 https://lore.kernel.org/all/23c19e27ef0745f6d3125976e047ee0da62569d4.1682406295.git.lstoakes@gmail.com/

v2:
- Add accidentally excluded ptrace_access_vm() use of
  FOLL_ALLOW_BROKEN_FILE_MAPPING.
- Tweak commit message.
https://lore.kernel.org/all/c8ee7e02d3d4f50bb3e40855c53bda39eec85b7d.1682321768.git.lstoakes@gmail.com/

v1:
https://lore.kernel.org/all/f86dc089b460c80805e321747b0898fd1efe93d7.1682168199.git.lstoakes@gmail.com/

Lorenzo Stoakes (3):
  mm/mmap: separate writenotify and dirty tracking logic
  mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to file-backed
    mappings
  mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed
    mappings

 include/linux/mm.h |   1 +
 mm/gup.c           | 145 ++++++++++++++++++++++++++++++++++++++++++++-
 mm/mmap.c          |  58 ++++++++++++++----
 3 files changed, 191 insertions(+), 13 deletions(-)

--
2.40.1

Comments

David Hildenbrand May 5, 2023, 8:21 p.m. UTC | #1
On 04.05.23 23:27, Lorenzo Stoakes wrote:
> Writing to file-backed mappings which require folio dirty tracking using
> GUP is a fundamentally broken operation, as kernel write access to GUP
> mappings does not adhere to the semantics expected by a file system.
>
> [snip]

Thanks a lot, this looks pretty good to me!

I started writing some selftests (assuming none would be in the works) using
io_uring and the gup_test interface. So far, no real surprises for the general
GUP interaction [1].


There are two things I noticed when registering an io_uring fixed buffer (that differ
now from generic gup_test usage):


(1) Registering a fixed buffer targeting an unsupported MAP_SHARED FS file now fails with
     EFAULT (from pin_user_pages()) instead of EOPNOTSUPP (from io_pin_pages()).

The man page for io_uring_register documents:

        EOPNOTSUPP
               User buffers point to file-backed memory.

... we'd have to do some kind of errno translation in io_pin_pages(). But the
translation is not simple (sometimes we want to forward EOPNOTSUPP). That also
applies once we remove that special-casing in io_uring code.
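
To illustrate, a naive shim might look like this (hypothetical sketch;
covers_file_backed() is an invented helper, and the catch is exactly that
pin_user_pages() returns -EFAULT for several unrelated reasons):

    /* Hypothetically, in io_pin_pages() (arguments simplified): */
    ret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM, pages);
    if (ret == -EFAULT && covers_file_backed(ubuf, nr_pages))
            return -EOPNOTSUPP;     /* indistinguishable from a bad address */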

... maybe we can simply update the manpage (stating that older kernels returned
EOPNOTSUPP) and start returning EFAULT?


(2) Registering a fixed buffer targeting a MAP_PRIVATE FS file fails with EOPNOTSUPP
     (from io_pin_pages()). As discussed, there is nothing wrong with pinning all-anon
     pages (resulting from breaking COW).

That could easily be handled (allow any !VM_MAYSHARE), as sketched below, and
would automatically be handled once the io_uring special-casing is removed.
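
Roughly, against the VMA loop in io_pin_pages() as it stands (sketch only;
assumes the existing iteration over the buffer's VMAs is kept):

    /* Allow shmem and any MAP_PRIVATE (!VM_MAYSHARE) mapping; reject
     * only shared file-backed mappings (hugetlb remains allowed). */
    if (vma_is_shmem(vma) || !(vma->vm_flags & VM_MAYSHARE))
            continue;
    if (vma->vm_file && !is_file_hugepages(vma->vm_file))
            return -EOPNOTSUPP;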


[1]

# ./pin_longterm
# [INFO] detected hugetlb size: 2048 KiB
# [INFO] detected hugetlb size: 1048576 KiB
TAP version 13
1..50
# [RUN] R/W longterm GUP pin in MAP_SHARED file mapping ... with memfd
ok 1 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_SHARED file mapping ... with tmpfile
ok 2 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_SHARED file mapping ... with local tmpfile
ok 3 Pinning failed as expected
# [RUN] R/W longterm GUP pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
ok 4 # SKIP need more free huge pages
# [RUN] R/W longterm GUP pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)
ok 5 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd
ok 6 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_SHARED file mapping ... with tmpfile
ok 7 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_SHARED file mapping ... with local tmpfile
ok 8 Pinning failed as expected
# [RUN] R/W longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
ok 9 # SKIP need more free huge pages
# [RUN] R/W longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)
ok 10 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_SHARED file mapping ... with memfd
ok 11 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_SHARED file mapping ... with tmpfile
ok 12 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_SHARED file mapping ... with local tmpfile
ok 13 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
ok 14 # SKIP need more free huge pages
# [RUN] R/O longterm GUP pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)
ok 15 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd
ok 16 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with tmpfile
ok 17 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with local tmpfile
ok 18 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
ok 19 # SKIP need more free huge pages
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)
ok 20 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_PRIVATE file mapping ... with memfd
ok 21 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_PRIVATE file mapping ... with tmpfile
ok 22 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_PRIVATE file mapping ... with local tmpfile
ok 23 Pinning succeeded as expected
# [RUN] R/W longterm GUP pin in MAP_PRIVATE file mapping ... with memfd hugetlb (2048 kB)
ok 24 # SKIP need more free huge pages
# [RUN] R/W longterm GUP pin in MAP_PRIVATE file mapping ... with memfd hugetlb (1048576 kB)
ok 25 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd
ok 26 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_PRIVATE file mapping ... with tmpfile
ok 27 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_PRIVATE file mapping ... with local tmpfile
ok 28 Pinning succeeded as expected
# [RUN] R/W longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd hugetlb (2048 kB)
ok 29 # SKIP need more free huge pages
# [RUN] R/W longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd hugetlb (1048576 kB)
ok 30 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_PRIVATE file mapping ... with memfd
ok 31 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_PRIVATE file mapping ... with tmpfile
ok 32 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_PRIVATE file mapping ... with local tmpfile
ok 33 Pinning succeeded as expected
# [RUN] R/O longterm GUP pin in MAP_PRIVATE file mapping ... with memfd hugetlb (2048 kB)
ok 34 # SKIP need more free huge pages
# [RUN] R/O longterm GUP pin in MAP_PRIVATE file mapping ... with memfd hugetlb (1048576 kB)
ok 35 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd
ok 36 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_PRIVATE file mapping ... with tmpfile
ok 37 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_PRIVATE file mapping ... with local tmpfile
ok 38 Pinning succeeded as expected
# [RUN] R/O longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd hugetlb (2048 kB)
ok 39 # SKIP need more free huge pages
# [RUN] R/O longterm GUP-fast pin in MAP_PRIVATE file mapping ... with memfd hugetlb (1048576 kB)
ok 40 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_SHARED file mapping ... with memfd
ok 41 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_SHARED file mapping ... with tmpfile
ok 42 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_SHARED file mapping ... with local tmpfile
ok 43 Pinning failed as expected
# [RUN] iouring fixed buffer with MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
ok 44 # SKIP need more free huge pages
# [RUN] iouring fixed buffer with MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)
ok 45 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_PRIVATE file mapping ... with memfd
ok 46 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_PRIVATE file mapping ... with tmpfile
ok 47 Pinning succeeded as expected
# [RUN] iouring fixed buffer with MAP_PRIVATE file mapping ... with local tmpfile
not ok 48 Pinning failed as expected
# [RUN] iouring fixed buffer with MAP_PRIVATE file mapping ... with memfd hugetlb (2048 kB)
ok 49 # SKIP need more free huge pages
# [RUN] iouring fixed buffer with MAP_PRIVATE file mapping ... with memfd hugetlb (1048576 kB)
ok 50 Pinning succeeded as expected
Bail out! 1 out of 50 tests failed
# Totals: pass:39 fail:1 xfail:0 xpass:0 skip:10 error:0
Lorenzo Stoakes May 5, 2023, 9:12 p.m. UTC | #2
On Fri, May 05, 2023 at 10:21:21PM +0200, David Hildenbrand wrote:
> On 04.05.23 23:27, Lorenzo Stoakes wrote:
> > Writing to file-backed mappings which require folio dirty tracking using
> > GUP is a fundamentally broken operation, as kernel write access to GUP
> > mappings does not adhere to the semantics expected by a file system.
> >
> > [snip]
>
> Thanks a lot, this looks pretty good to me!

Thanks!

>
> I started writing some selftests (assuming none would be in the works) using
> io_uring and the gup_test interface. So far, no real surprises for the general
> GUP interaction [1].
>

Nice! I was using the cow selftests, as I was just looking for something that
touches FOLL_LONGTERM with PUP-fast; I hacked them to always write, just to
test the patches, but clearly we need something more thorough.

>
> There are two things I noticed when registering an io_uring fixed buffer (that differ
> now from generic gup_test usage):
>
>
> (1) Registering a fixed buffer targeting an unsupported MAP_SHARED FS file now fails with
>     EFAULT (from pin_user_pages()) instead of EOPNOTSUPP (from io_pin_pages()).
>
> The man page for io_uring_register documents:
>
>        EOPNOTSUPP
>               User buffers point to file-backed memory.
>
> ... we'd have to do some kind of errno translation in io_pin_pages(). But the
> translation is not simple (sometimes we want to forward EOPNOTSUPP). That also
> applies once we remove that special-casing in io_uring code.
>
> ... maybe we can simply update the manpage (stating that older kernels returned
> EOPNOTSUPP) and start returning EFAULT?

Yeah, I noticed this discrepancy when going through initial attempts to
refactor in the vmas patch series. I wonder how important it is to
differentiate? I have a feeling it probably doesn't matter too much, but
obviously we need input from Jens and Pavel.

>
>
> (2) Registering a fixed buffer targeting a MAP_PRIVATE FS file fails with EOPNOTSUPP
>     (from io_pin_pages()). As discussed, there is nothing wrong with pinning all-anon
>     pages (resulting from breaking COW).
>
> That could easily be handled (allow any !VM_MAYSHARE), as sketched below, and
> would automatically be handled once the io_uring special-casing is removed.

The entire intent of this series (for me :)) was to allow io_uring to just
drop this code altogether, so we can unblock my 'drop the vmas parameter
from GUP' series [1].

I always intended to respin that after this settled down; Jens and Pavel
seemed on board with this (and really they shouldn't need to be doing that
check - that was always a failing in GUP).

I will do a v5 of this soon.

[1]: https://lore.kernel.org/all/cover.1681831798.git.lstoakes@gmail.com/

>
> [snip test output]
Lorenzo Stoakes May 14, 2023, 7:20 p.m. UTC | #3
On Thu, May 04, 2023 at 10:27:50PM +0100, Lorenzo Stoakes wrote:
> Writing to file-backed mappings which require folio dirty tracking using
> GUP is a fundamentally broken operation, as kernel write access to GUP
> mappings does not adhere to the semantics expected by a file system.
[snip]

As discussed at LSF/MM, on the flight over I wrote a little repro [0] which
reliably triggers the ext4 warning by recreating the scenario described
above, using a small userland program and kernel module.

This code is not perfect (plane code :), but it does seem to do the job
adequately. Obviously this should only be run in a VM environment
where data loss is acceptable (in my case a small qemu instance).

Hopefully this is useful in some way. Note that I explicitly use
pin_user_pages() without FOLL_LONGTERM here in order to not run into the
mitigation this very patch series provides! Obviously if you revert this
series you can see the same happening with FOLL_LONGTERM set.

I have licensed the code as GPLv2 so anybody's free to do with it as they
will if it's useful in any way!

[0]: https://github.com/lorenzo-stoakes/gup-repro
Christoph Hellwig May 15, 2023, 5:14 a.m. UTC | #4
On Sun, May 14, 2023 at 08:20:04PM +0100, Lorenzo Stoakes wrote:
> As discussed at LSF/MM, on the flight over I wrote a little repro [0] which
> reliably triggers the ext4 warning by recreating the scenario described
> above, using a small userland program and kernel module.
> 
> > This code is not perfect (plane code :), but it does seem to do the job
> > adequately. Obviously this should only be run in a VM environment
> > where data loss is acceptable (in my case a small qemu instance).

It would be really awesome if you could wire it up with and submit it
to xfstests.
Kirill A. Shutemov May 15, 2023, 11:03 a.m. UTC | #5
On Thu, May 04, 2023 at 10:27:50PM +0100, Lorenzo Stoakes wrote:
> Writing to file-backed mappings which require folio dirty tracking using
> GUP is a fundamentally broken operation, as kernel write access to GUP
> mappings does not adhere to the semantics expected by a file system.
> 
> A GUP caller uses the direct mapping to access the folio, which does not
> cause write notify to trigger, nor does it enforce that the caller marks
> the folio dirty.

Okay, the problem is clear and the patchset looks good to me. But I'm worried
about breaking existing users.

Do we expect the change to be visible to real-world users? If yes, are we
okay with breaking them?

One thing that came to mind is KVM with "qemu -object memory-backend-file,share=on..."
It is mostly used for pmem emulation.

Do we have plan B?

Just a random/crazy/broken idea:

 - Allow folio_mkclean() (and folio_clear_dirty_for_io()) to fail,
   indicating that the page cannot be cleared because it is pinned;

 - Introduce a new vm_operations_struct::mkclean() that would be called by
   page_vma_mkclean_one() before clearing the range and can fail;

 - On GUP, create an in-kernel fake VMA that represents the file, but with
   custom vm_ops. The VMA would be registered in rmap to get notified on
   folio_mkclean() and fail it because of GUP.

 - folio_clear_dirty_for_io() callers will handle the new failure as
   indication that the page can be written back but will stay dirty and
   fs-specific data that is associated with the page writeback cannot be
   freed.
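
Very roughly, the second and third bullets might take this shape (purely
hypothetical - neither the hook nor the fake VMA exists today):

    /* Hypothetical new vm_operations_struct member:
     *     int (*mkclean)(struct vm_area_struct *vma,
     *                    unsigned long start, unsigned long end);
     * page_vma_mkclean_one() would call it before write-protecting the
     * range and propagate any failure up through folio_mkclean().
     */
    static int gup_vma_mkclean(struct vm_area_struct *vma,
                               unsigned long start, unsigned long end)
    {
            /* The fake GUP VMA refuses cleaning while the pin is held. */
            return -EBUSY;
    }

    static const struct vm_operations_struct gup_fake_vm_ops = {
            .mkclean = gup_vma_mkclean,     /* hypothetical member */
    };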

I'm sure the idea is broken on many levels (I have never looked closely at
the writeback path). But maybe it is good enough as a conversation starter?
Lorenzo Stoakes May 15, 2023, 11:16 a.m. UTC | #6
On Mon, May 15, 2023 at 02:03:15PM +0300, Kirill A . Shutemov wrote:
> On Thu, May 04, 2023 at 10:27:50PM +0100, Lorenzo Stoakes wrote:
> > Writing to file-backed mappings which require folio dirty tracking using
> > GUP is a fundamentally broken operation, as kernel write access to GUP
> > mappings does not adhere to the semantics expected by a file system.
> >
> > A GUP caller uses the direct mapping to access the folio, which does not
> > cause write notify to trigger, nor does it enforce that the caller marks
> > the folio dirty.
>
> Okay, the problem is clear and the patchset looks good to me. But I'm worried
> about breaking existing users.
>
> Do we expect the change to be visible to real-world users? If yes, are we
> okay with breaking them?

The general consensus at the moment is that there is no entirely reasonable
usage of this case and you're already running the risk of a kernel oops if
you do this, so it's already broken.

>
> One thing that came to mind is KVM with "qemu -object memory-backend-file,share=on..."
> It is mostly used for pmem emulation.
>
> Do we have plan B?

Yes, we can make it opt-in or opt-out via a FOLL_FLAG. This would be easy
to implement in the event of any issues arising.
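
Earlier revisions of this series actually carried such a flag (see the
v2/v3 notes in the cover letter); reusing that since-dropped name, an
opt-out might look roughly like this (sketch only, not in the final
patches):

    /* Hypothetically, e.g. in check_vma_flags(): */
    if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_WRITE) &&
        !(gup_flags & FOLL_ALLOW_BROKEN_FILE_MAPPING) &&
        vma_needs_dirty_tracking(vma))
            return -EFAULT;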

>
> Just a random/crazy/broken idea:
>
>  - Allow folio_mkclean() (and folio_clear_dirty_for_io()) to fail,
>    indicating that the page cannot be cleared because it is pinned;
>
>  - Introduce a new vm_operations_struct::mkclean() that would be called by
>    page_vma_mkclean_one() before clearing the range and can fail;
>
>  - On GUP, create an in-kernel fake VMA that represents the file, but with
>    custom vm_ops. The VMA would be registered in rmap to get notified on
>    folio_mkclean() and fail it because of GUP.
>
>  - folio_clear_dirty_for_io() callers will handle the new failure as
>    indication that the page can be written back but will stay dirty and
>    fs-specific data that is associated with the page writeback cannot be
>    freed.
>
> I'm sure the idea is broken on many levels (I have never looked closely at
> the writeback path). But maybe it is good enough as a conversation starter?
>

Yeah, there are definitely a few ideas down this road that might be
possible. I am not sure how a filesystem can be expected to cope, or how
this could reasonably be used without dirty tracking/writeback though,
because you'll just not track anything - or I guess you mean the mapping
would be read-only but somehow stay dirty?

I also had ideas along these lines of e.g. having a special vmalloc mode
which mimics the correct wrprotect settings + does the right thing, but of
course that does nothing to help DMA writing to a GUP-pinned page.

Though if the issue is at the point of the kernel marking the page dirty
unexpectedly, perhaps we can just invoke the mkwrite() _there_ before
marking dirty?

There are probably some synchronisation issues there too.

Jason will have some thoughts on this I'm sure. I guess the key question
here is - is it actually feasible for this to work at all? Once we
establish that, the rest are details :)

> --
>   Kiryl Shutsemau / Kirill A. Shutemov
Lorenzo Stoakes May 15, 2023, 11:31 a.m. UTC | #7
On Sun, May 14, 2023 at 10:14:46PM -0700, Christoph Hellwig wrote:
> On Sun, May 14, 2023 at 08:20:04PM +0100, Lorenzo Stoakes wrote:
> > As discussed at LSF/MM, on the flight over I wrote a little repro [0] which
> > reliably triggers the ext4 warning by recreating the scenario described
> > above, using a small userland program and kernel module.
> >
> > This code is not perfect (plane code :), but it does seem to do the job
> > adequately. Obviously this should only be run in a VM environment
> > where data loss is acceptable (in my case a small qemu instance).
>
> It would be really awesome if you could wire it up with and submit it
> to xfstests.

Sure, am happy to take a look at that! Also happy if David finds it useful in any
way for his unit tests.

The kernel module interface is a bit sketchy (it takes a user address which it
blindly pins for you) so it's not something that should be run in any unsafe
environment but as long as we are ok with that :)
Jason Gunthorpe May 15, 2023, 12:12 p.m. UTC | #8
On Mon, May 15, 2023 at 12:16:21PM +0100, Lorenzo Stoakes wrote:
> > One thing that came to mind is KVM with "qemu -object memory-backend-file,share=on..."
> > It is mostly used for pmem emulation.
> >
> > Do we have plan B?
> 
> Yes, we can make it opt-in or opt-out via a FOLL_FLAG. This would be easy
> to implement in the event of any issues arising.

I'm becoming less keen on the idea of a per-subsystem opt out. I think
we should make a kernel-wide opt out. I like the idea of using lower
lockdown levels. Lots of things become unavailable in the uAPI when the
lockdown level increases already.

> Jason will have some thoughts on this I'm sure. I guess the key question
> here is - is it actually feasible for this to work at all? Once we
> establish that, the rest are details :)

Surely it is, but like Ted said, the FS folks are not interested and
they are at least half the solution..

The FS also has to actively not write out the page while it cannot be
write protected unless it copies the data to a stable page. The block
stack needs the source data to be stable to do checksum/parity/etc
stuff. It is a complicated subject.

Jason
Lorenzo Stoakes May 15, 2023, 1:07 p.m. UTC | #9
On Mon, May 15, 2023 at 09:12:49AM -0300, Jason Gunthorpe wrote:
> On Mon, May 15, 2023 at 12:16:21PM +0100, Lorenzo Stoakes wrote:
> > > One thing that came to mind is KVM with "qemu -object memory-backend-file,share=on..."
> > > It is mostly used for pmem emulation.
> > >
> > > Do we have plan B?
> >
> > Yes, we can make it opt-in or opt-out via a FOLL_FLAG. This would be easy
> > to implement in the event of any issues arising.
>
> I'm becoming less keen on the idea of a per-subsystem opt out. I think
> we should make a kernel-wide opt out. I like the idea of using lower
> lockdown levels. Lots of things become unavailable in the uAPI when the
> lockdown level increases already.

This would be the 'safest' in the sense that a user can't be surprised by
higher lockdown = access modes disallowed; however, we'd _definitely_ need
to have an opt-in in that instance so io_uring can make use of this
regardless. That's easy to add, however.

If we do go down that road, we can be even stricter/vary what we do at
different levels, right?

>
> > Jason will have some thoughts on this I'm sure. I guess the key question
> > here is - is it actually feasible for this to work at all? Once we
> > establish that, the rest are details :)
>
> Surely it is, but like Ted said, the FS folks are not interested and
> they are at least half the solution..

:'(

>
> The FS also has to actively not write out the page while it cannot be
> write protected unless it copies the data to a stable page. The block
> stack needs the source data to be stable to do checksum/parity/etc
> stuff. It is a complicated subject.

Yes my sense was that being able to write arbitrarily to these pages _at
all_ was a big issue, not only the dirty tracking aspect.

I guess at some level letting filesystems have such total flexibility as to
how they implement things leaves us in a difficult position.

>
> Jason
Jan Kara May 17, 2023, 7:29 a.m. UTC | #10
On Mon 15-05-23 14:07:57, Lorenzo Stoakes wrote:
> On Mon, May 15, 2023 at 09:12:49AM -0300, Jason Gunthorpe wrote:
> > On Mon, May 15, 2023 at 12:16:21PM +0100, Lorenzo Stoakes wrote:
> > > Jason will have some thoughts on this I'm sure. I guess the key question
> > > here is - is it actually feasible for this to work at all? Once we
> > > establish that, the rest are details :)
> >
> > Surely it is, but like Ted said, the FS folks are not interested and
> > they are at least half the solution..
> 
> :'(

Well, I'd phrase this a bit differently - it is a difficult sell to fs
maintainers that they should significantly complicate writeback code / VFS
with bounce page handling etc. for a corner case that sees little use. So
if we can get away with forbidding long-term pins, then that's the easiest
solution. Dealing with short-term pins is easier, as we can just wait for
unpinning, which is implementable in a localized manner.

> > The FS also has to actively not write out the page while it cannot be
> > write protected unless it copies the data to a stable page. The block
> > stack needs the source data to be stable to do checksum/parity/etc
> > stuff. It is a complicated subject.
> 
> Yes my sense was that being able to write arbitrarily to these pages _at
> all_ was a big issue, not only the dirty tracking aspect.

Yes.

> I guess at some level letting filesystems have such total flexibility as to
> how they implement things leaves us in a difficult position.

I'm not sure what you mean by "total flexibility" here. In my opinion it is
also about how HW performs checksumming etc.

								Honza
Lorenzo Stoakes May 17, 2023, 7:40 a.m. UTC | #11
On Wed, May 17, 2023 at 09:29:20AM +0200, Jan Kara wrote:
> On Mon 15-05-23 14:07:57, Lorenzo Stoakes wrote:
> > On Mon, May 15, 2023 at 09:12:49AM -0300, Jason Gunthorpe wrote:
> > > On Mon, May 15, 2023 at 12:16:21PM +0100, Lorenzo Stoakes wrote:
> > > > Jason will have some thoughts on this I'm sure. I guess the key question
> > > > here is - is it actually feasible for this to work at all? Once we
> > > > establish that, the rest are details :)
> > >
> > > Surely it is, but like Ted said, the FS folks are not interested and
> > > they are at least half the solution..
> >
> > :'(
>
> Well, I'd phrase this a bit differently - it is a difficult sell to fs
> maintainers that they should significantly complicate writeback code / VFS
> with bounce page handling etc. for a corner case that sees little use. So
> if we can get away with forbidding long-term pins, then that's the easiest
> solution. Dealing with short-term pins is easier, as we can just wait for
> unpinning, which is implementable in a localized manner.
>

Totally understandable. It's unfortunately, I feel, a case of something we
should simply not have allowed.

> > > The FS also has to actively not write out the page while it cannot be
> > > write protected unless it copies the data to a stable page. The block
> > > stack needs the source data to be stable to do checksum/parity/etc
> > > stuff. It is a complicated subject.
> >
> > Yes my sense was that being able to write arbitrarily to these pages _at
> > all_ was a big issue, not only the dirty tracking aspect.
>
> Yes.
>
> > I guess at some level letting filesystems have such total flexibility as to
> > how they implement things leaves us in a difficult position.
>
> I'm not sure what you mean by "total flexibility" here. In my opinion it is
> also about how HW performs checksumming etc.

I mean to say *_ops allow a lot of flexibility in how things are
handled. Certainly checksumming is a great example but in theory an
arbitrary filesystem could be doing, well, anything and always assuming
that only userland mappings should be modifying the underlying data.

>
> 								Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
Christoph Hellwig May 17, 2023, 7:42 a.m. UTC | #12
On Wed, May 17, 2023 at 09:29:20AM +0200, Jan Kara wrote:
> > > Surely it is, but like Ted said, the FS folks are not interested and
> > > they are at least half the solution..
> > 
> > :'(
> 
> Well, I'd phrase this a bit differently - it is a difficult sell to fs
> maintainers that they should significantly complicate writeback code / VFS
> with bounce page handling etc. for a corner case that sees little use. So
> if we can get away with forbidding long-term pins, then that's the easiest
> solution. Dealing with short-term pins is easier, as we can just wait for
> unpinning, which is implementable in a localized manner.

Full agreement here.  The whole concept of supporting writeback for
long term mappings does not make much sense.

> > > The FS also has to actively not write out the page while it cannot be
> > > write protected unless it copies the data to a stable page. The block
> > > stack needs the source data to be stable to do checksum/parity/etc
> > > stuff. It is a complicated subject.
> > 
> > Yes my sense was that being able to write arbitrarily to these pages _at
> > all_ was a big issue, not only the dirty tracking aspect.
> 
> Yes.
> 
> > I guess at some level letting filesystems have such total flexibility as to
> > how they implement things leaves us in a difficult position.
> 
> I'm not sure what you mean by "total flexibility" here. In my opinion it is
> also about how HW performs checksumming etc.

I have no idea what total flexibility is even supposed to be.
Christoph Hellwig May 17, 2023, 7:43 a.m. UTC | #13
On Wed, May 17, 2023 at 08:40:26AM +0100, Lorenzo Stoakes wrote:
> > I'm not sure what you mean by "total flexibility" here. In my opinion it is
> > also about how HW performs checksumming etc.
> 
> I mean to say *_ops allow a lot of flexibility in how things are
> handled. Certainly checksumming is a great example but in theory an
> arbitrary filesystem could be doing, well, anything and always assuming
> that only userland mappings should be modifying the underlying data.

File systems need a way to track when a page is dirtied so that it can
be written back.  Not much to do with flexibility.
Lorenzo Stoakes May 17, 2023, 7:55 a.m. UTC | #14
On Wed, May 17, 2023 at 12:43:34AM -0700, Christoph Hellwig wrote:
> On Wed, May 17, 2023 at 08:40:26AM +0100, Lorenzo Stoakes wrote:
> > > I'm not sure what you mean by "total flexibility" here. In my opinion it is
> > > also about how HW performs checksumming etc.
> >
> > I mean to say *_ops allow a lot of flexibility in how things are
> > handled. Certainly checksumming is a great example but in theory an
> > arbitrary filesystem could be doing, well, anything and always assuming
> > that only userland mappings should be modifying the underlying data.
>
> File systems need a way to track when a page is dirtied so that it can
> be written back.  Not much to do with flexibility.

I'll try to take this in good faith because... yeah. I do get that; I mean,
I literally created a repro for this situation, and referenced this precise
problem in the commit msg and comments of my patch series that
addresses... this problem :P

Perhaps I'm not being clear, but it was simply my intent to highlight that
yes, this is the primary problem, but GUP writing to ostensibly 'clean'
pages 'behind the back' of a fs is _also_ a problem.

Not least for checksumming (e.g. assume hw-reported checksum for a block ==
checksum derived from page cache) but, because VFS allows a great deal of
flexibility in how filesystems are implemented, perhaps in other respects
we haven't considered.

So I just wanted to highlight (happy to be corrected if I'm wrong) that the
PRIMARY problem is the dirty tracking breaking, but it also strikes me that
arbitrary writes to 'clean' pages in the background are a problem too.
Christoph Hellwig May 17, 2023, 8:10 a.m. UTC | #15
On Wed, May 17, 2023 at 08:55:27AM +0100, Lorenzo Stoakes wrote:
> I'll try to take this in good faith because... yeah. I do get that; I mean,
> I literally created a repro for this situation, and referenced this precise
> problem in the commit msg and comments of my patch series that
> addresses... this problem :P
> 
> Perhaps I'm not being clear, but it was simply my intent to highlight that
> yes, this is the primary problem, but GUP writing to ostensibly 'clean'
> pages 'behind the back' of a fs is _also_ a problem.

Yes, it absolutely is a problem if that happens.  But we can just
fix it in the kernel using the:

   lock_page(page);
   memcpy(page_address(page), src, len);   /* copy data */
   set_page_dirty(page);
   unlock_page(page);

pattern, and we should have covered every place that did this in-tree.
But there's no good way to verify it except for regular code audits.
David Hildenbrand May 17, 2023, 8:26 a.m. UTC | #16
On 15.05.23 13:31, Lorenzo Stoakes wrote:
> On Sun, May 14, 2023 at 10:14:46PM -0700, Christoph Hellwig wrote:
>> On Sun, May 14, 2023 at 08:20:04PM +0100, Lorenzo Stoakes wrote:
>>> As discussed at LSF/MM, on the flight over I wrote a little repro [0] which
>>> reliably triggers the ext4 warning by recreating the scenario described
>>> above, using a small userland program and kernel module.
>>>
>>> This code is not perfect (plane code :), but it does seem to do the job
>>> adequately. Obviously this should only be run in a VM environment
>>> where data loss is acceptable (in my case a small qemu instance).
>>
>> It would be really awesome if you could wire it up with and submit it
>> to xfstests.
> 
> Sure, am happy to take a look at that! Also happy if David finds it useful in any
> way for his unit tests.

I played with a simple selftest that would reuse the existing gup_test
infrastructure (adding PIN_LONGTERM_TEST_WRITE), and tried reproducing an
actual data corruption.

So far, I was not able to reproduce any corruption easily without your 
patches, because d824ec2a1546 ("mm: do not reclaim private data from 
pinned page") seems to mitigate most of it.

So ... before my patches (adding PIN_LONGTERM_TEST_WRITE) I cannot test
it from a selftest; with d824ec2a1546 ("mm: do not reclaim private data
from pinned page") I cannot reproduce it; and with your patches long-term
pinning just fails.

Long story short: I'll most probably not add such a test but instead 
keep testing that long-term pinning works/fails now as expected, based 
on the FS type.

> 
> The kernel module interface is a bit sketchy (it takes a user address which it
> blindly pins for you) so it's not something that should be run in any unsafe
> environment but as long as we are ok with that :)

I can submit the PIN_LONGTERM_TEST_WRITE extension, which would allow
testing with a stock kernel that has the module compiled in. It won't allow
!longterm, though (it would be kind of hacky to have !longterm
controlled by user space, even if it's a GUP test module).

Finding an actual reproducer using existing pinning functionality would 
be preferred. For example, using O_DIRECT (should be possible even 
before it starts using FOLL_PIN instead of FOLL_GET). That would be 
highly racy then, but most probably not impossible.

Such (racy) tests are not a good fit for selftests.

Maybe I'll have a try later to reproduce with O_DIRECT.