
[v6,06/12] mm/gup: track FOLL_PIN pages

Message ID 20200211001536.1027652-7-jhubbard@nvidia.com (mailing list archive)
State New, archived
Series mm/gup: track FOLL_PIN pages

Commit Message

John Hubbard Feb. 11, 2020, 12:15 a.m. UTC
Add tracking of pages that were pinned via FOLL_PIN. This tracking is
implemented via overloading of page->_refcount: pins are added by
adding GUP_PIN_COUNTING_BIAS (1024) to the refcount. This provides a
fuzzy indication of pinning, and it can have false positives (and that's
OK). Please see the pre-existing
Documentation/core-api/pin_user_pages.rst for details.

As mentioned in pin_user_pages.rst, callers who effectively set FOLL_PIN
(typically via pin_user_pages*()) are required to ultimately free such
pages via unpin_user_page().
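
A minimal usage sketch of that pairing (illustrative only: the helper
name, buffer handling and error policy below are placeholders, not part
of this patch):

   /* Hypothetical driver helper: pin a user buffer, use it, release it. */
   static int demo_pin_then_unpin(unsigned long user_addr, int nr_pages,
                                  struct page **pages)
   {
           int pinned;

           pinned = pin_user_pages_fast(user_addr, nr_pages, FOLL_WRITE, pages);
           if (pinned <= 0)
                   return pinned ? pinned : -EFAULT;

           /* ... hand the pages to the DMA engine, wait for completion ... */

           /* Must be unpin_user_page()/unpin_user_pages(), not put_page(). */
           unpin_user_pages(pages, pinned);
           return 0;
   }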

Please also note the limitation, discussed in pin_user_pages.rst under
the "TODO: for 1GB and larger huge pages" section. (That limitation will
be removed in a following patch.)

The effect of a FOLL_PIN flag is similar to that of FOLL_GET, and may be
thought of as "FOLL_GET for DIO and/or RDMA use".

Pages that have been pinned via FOLL_PIN are identifiable via a
new function call:

   bool page_maybe_dma_pinned(struct page *page);
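
For example, a hypothetical caller (not something added by this patch)
that wants to leave possibly-pinned pages alone could check:

   /* Illustrative only: skip pages that may be under DMA. */
   static bool demo_can_touch_page(struct page *page)
   {
           /* False positives are possible here, and must be tolerated. */
           return !page_maybe_dma_pinned(page);
   }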

What to do in response to encountering such a page is left to later
patchsets. There is discussion about this in [1], [2], [3], and [4].

This also changes a BUG_ON() to a WARN_ON() in follow_page_mask().

[1] Some slow progress on get_user_pages() (Apr 2, 2019):
    https://lwn.net/Articles/784574/
[2] DMA and get_user_pages() (LPC: Dec 12, 2018):
    https://lwn.net/Articles/774411/
[3] The trouble with get_user_pages() (Apr 30, 2018):
    https://lwn.net/Articles/753027/
[4] LWN kernel index: get_user_pages():
    https://lwn.net/Kernel/Index/#Memory_management-get_user_pages

Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Suggested-by: Jan Kara <jack@suse.cz>
Suggested-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 Documentation/core-api/pin_user_pages.rst |   6 +-
 include/linux/mm.h                        |  82 +++++--
 mm/gup.c                                  | 254 +++++++++++++++++-----
 mm/huge_memory.c                          |  29 ++-
 mm/hugetlb.c                              |  54 +++--
 5 files changed, 334 insertions(+), 91 deletions(-)

Comments

Alex Williamson April 24, 2020, 6:18 p.m. UTC | #1
On Mon, 10 Feb 2020 16:15:30 -0800
John Hubbard <jhubbard@nvidia.com> wrote:

> Add tracking of pages that were pinned via FOLL_PIN. This tracking is
> implemented via overloading of page->_refcount: pins are added by
> adding GUP_PIN_COUNTING_BIAS (1024) to the refcount. This provides a
> fuzzy indication of pinning, and it can have false positives (and that's
> OK). Please see the pre-existing
> Documentation/core-api/pin_user_pages.rst for details.
> 
> As mentioned in pin_user_pages.rst, callers who effectively set FOLL_PIN
> (typically via pin_user_pages*()) are required to ultimately free such
> pages via unpin_user_page().
> 
> Please also note the limitation, discussed in pin_user_pages.rst under
> the "TODO: for 1GB and larger huge pages" section. (That limitation will
> be removed in a following patch.)
> 
> The effect of a FOLL_PIN flag is similar to that of FOLL_GET, and may be
> thought of as "FOLL_GET for DIO and/or RDMA use".
> 
> Pages that have been pinned via FOLL_PIN are identifiable via a
> new function call:
> 
>    bool page_maybe_dma_pinned(struct page *page);
> 
> What to do in response to encountering such a page, is left to later
> patchsets. There is discussion about this in [1], [2], [3], and [4].
> 
> This also changes a BUG_ON(), to a WARN_ON(), in follow_page_mask().
> 
> [1] Some slow progress on get_user_pages() (Apr 2, 2019):
>     https://lwn.net/Articles/784574/
> [2] DMA and get_user_pages() (LPC: Dec 12, 2018):
>     https://lwn.net/Articles/774411/
> [3] The trouble with get_user_pages() (Apr 30, 2018):
>     https://lwn.net/Articles/753027/
> [4] LWN kernel index: get_user_pages():
>     https://lwn.net/Kernel/Index/#Memory_management-get_user_pages
> 
> Reviewed-by: Jan Kara <jack@suse.cz>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Suggested-by: Jan Kara <jack@suse.cz>
> Suggested-by: Jérôme Glisse <jglisse@redhat.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  Documentation/core-api/pin_user_pages.rst |   6 +-
>  include/linux/mm.h                        |  82 +++++--
>  mm/gup.c                                  | 254 +++++++++++++++++-----
>  mm/huge_memory.c                          |  29 ++-
>  mm/hugetlb.c                              |  54 +++--
>  5 files changed, 334 insertions(+), 91 deletions(-)

Hi John,

I'm seeing a regression bisected back to this commit (3faa52c03f44
mm/gup: track FOLL_PIN pages).  I've attached some vfio-pci test code
that reproduces this by mmap'ing a page of MMIO space of a device and
then tries to map that through the IOMMU, so this should be attempting
a gup/pin of a PFNMAP page.  Previously this failed gracefully (-EFAULT),
but now results in:

BUG: unable to handle page fault for address: ffffae5cbfe5e938
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0 
Oops: 0000 [#1] SMP NOPTI
CPU: 18 PID: 3365 Comm: vfio-pci-dma-ma Tainted: G           OE     5.6.0+ #6
Hardware name: AMD Corporation Diesel/Diesel, BIOS TDL100CB 03/17/2020
RIP: 0010:get_pfnblock_flags_mask+0x22/0x70
Code: c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 8b 05 bc e1 d9 01 48 89 f7 49 89 c8 48 c1 ef 0f 48 85 c0 74 48 48 89 f1 48 c1 e9 17 <48> 8b 04 c8 48 85 c0 74 0b 40 0f b6 ff 48 c1 e7 04 48 01 f8 48 c1
RSP: 0018:ffffb55289b3fcc8 EFLAGS: 00010216
RAX: ffff9e5cbff50000 RBX: 0000000000000001 RCX: 000001fffffe1d27
RDX: 0000000000000002 RSI: ffffff0e93acd633 RDI: 0001fffffe1d2759
RBP: ffffb55289b3fd88 R08: 0000000000000007 R09: ffff9e48a52476a8
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000001 R15: ffff9e48ab358cc0
FS:  00007f4ef7269740(0000) GS:ffff9e48afa80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffae5cbfe5e938 CR3: 0000000c61eda000 CR4: 00000000003406e0
Call Trace:
 __gup_longterm_locked+0x274/0x620
 vaddr_get_pfn+0x74/0x110 [vfio_iommu_type1]
 vfio_pin_pages_remote+0x6e/0x370 [vfio_iommu_type1]
 vfio_iommu_type1_ioctl+0x8e5/0xaac [vfio_iommu_type1]
 ksys_ioctl+0x86/0xc0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1f0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f4ef6d7d307
Code: 44 00 00 48 8b 05 69 1b 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 1b 2d 00 f7 d8 64 89 01 48
RSP: 002b:00007fff76ada738 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4ef6d7d307
RDX: 00007fff76ada760 RSI: 0000000000003b71 RDI: 0000000000000003
RBP: 00007fff76ada930 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000213 R12: 0000000000400950
R13: 00007fff76adaa10 R14: 0000000000000000 R15: 0000000000000000
Modules linked in: vfio_pci(OE) vfio_virqfd(OE) vfio_iommu_type1(OE) vfio(OE) amd64_edac_mod edac_mce_amd kvm_amd kvm rfkill sunrpc ipmi_ssif vfat irqbypass fat ipmi_si crct10dif_pclmul crc32_pclmul sp5100_tco ghash_clmulni_intel ipmi_devintf pcspkr joydev ccp i2c_piix4 k10temp ipmi_msghandler pinctrl_amd acpi_cpufreq ip_tables nouveau ast video mxm_wmi drm_vram_helper wmi drm_ttm_helper i2c_algo_bit drm_kms_helper cec ttm drm i40e e1000e crc32c_intel
CR2: ffffae5cbfe5e938
---[ end trace a384ab7cc8e37d46 ]---
RIP: 0010:get_pfnblock_flags_mask+0x22/0x70
Code: c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 8b 05 bc e1 d9 01 48 89 f7 49 89 c8 48 c1 ef 0f 48 85 c0 74 48 48 89 f1 48 c1 e9 17 <48> 8b 04 c8 48 85 c0 74 0b 40 0f b6 ff 48 c1 e7 04 48 01 f8 48 c1
RSP: 0018:ffffb55289b3fcc8 EFLAGS: 00010216
RAX: ffff9e5cbff50000 RBX: 0000000000000001 RCX: 000001fffffe1d27
RDX: 0000000000000002 RSI: ffffff0e93acd633 RDI: 0001fffffe1d2759
RBP: ffffb55289b3fd88 R08: 0000000000000007 R09: ffff9e48a52476a8
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000001 R15: ffff9e48ab358cc0
FS:  00007f4ef7269740(0000) GS:ffff9e48afa80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffae5cbfe5e938 CR3: 0000000c61eda000 CR4: 00000000003406e0

Thanks,
Alex
John Hubbard April 24, 2020, 7:20 p.m. UTC | #2
On 2020-04-24 11:18, Alex Williamson wrote:
...
> Hi John,
> 
> I'm seeing a regression bisected back to this commit (3faa52c03f44
> mm/gup: track FOLL_PIN pages).  I've attached some vfio-pci test code
> that reproduces this by mmap'ing a page of MMIO space of a device and
> then tries to map that through the IOMMU, so this should be attempting
> a gup/pin of a PFNMAP page.  Previously this failed gracefully (-EFAULT),
> but now results in:


Hi Alex,

Thanks for this report, and especially for source code to test it, 
seeing as how I can't immediately spot the problem just from the crash
data so far.  I'll get set up and attempt a repro.

Actually this looks like it should be relatively easier than the usual 
sort of "oops, we leaked a pin_user_pages() or unpin_user_pages() call,
good luck finding which one" report that I fear the most. :) This one 
looks more like a crash that happens directly, when calling into the 
pin_user_pages_remote() code. Which should be a lot easier to solve...

btw, if you are set up for it, it would be nice to know what source file 
and line number corresponds to the RIP (get_pfnblock_flags_mask+0x22) 
below. But if not, no problem, because I've likely got to do the repro 
in any case.

thanks,
Alex Williamson April 24, 2020, 8:15 p.m. UTC | #3
On Fri, 24 Apr 2020 12:20:03 -0700
John Hubbard <jhubbard@nvidia.com> wrote:

> On 2020-04-24 11:18, Alex Williamson wrote:
> ...
> > Hi John,
> > 
> > I'm seeing a regression bisected back to this commit (3faa52c03f44
> > mm/gup: track FOLL_PIN pages).  I've attached some vfio-pci test code
> > that reproduces this by mmap'ing a page of MMIO space of a device and
> > then tries to map that through the IOMMU, so this should be attempting
> > a gup/pin of a PFNMAP page.  Previously this failed gracefully (-EFAULT),
> > but now results in:  
> 
> 
> Hi Alex,
> 
> Thanks for this report, and especially for source code to test it, 
> seeing as how I can't immediately spot the problem just from the crash
> data so far.  I'll get set up and attempt a repro.
> 
> Actually this looks like it should be relatively easier than the usual 
> sort of "oops, we leaked a pin_user_pages() or unpin_user_pages() call,
> good luck finding which one" report that I fear the most. :) This one 
> looks more like a crash that happens directly, when calling into the 
> pin_user_pages_remote() code. Which should be a lot easier to solve...
> 
> btw, if you are set up for it, it would be nice to know what source file 
> and line number corresponds to the RIP (get_pfnblock_flags_mask+0x22) 
> below. But if not, no problem, because I've likely got to do the repro 
> in any case.

Hey John,

TBH I'm feeling a lot less confident about this bisect.  This was
readily reproducible to me on a clean tree a bit ago, but now it
eludes me.  Let me go back and figure out what's going on before you
spend any more time on it.  Thanks,

Alex
John Hubbard April 24, 2020, 10:58 p.m. UTC | #4
On 2020-04-24 13:15, Alex Williamson wrote:
> On Fri, 24 Apr 2020 12:20:03 -0700
> John Hubbard <jhubbard@nvidia.com> wrote:
> 
>> On 2020-04-24 11:18, Alex Williamson wrote:
>> ...
>>> Hi John,
>>>
>>> I'm seeing a regression bisected back to this commit (3faa52c03f44
>>> mm/gup: track FOLL_PIN pages).  I've attached some vfio-pci test code
>>> that reproduces this by mmap'ing a page of MMIO space of a device and
>>> then tries to map that through the IOMMU, so this should be attempting
>>> a gup/pin of a PFNMAP page.  Previously this failed gracefully (-EFAULT),
>>> but now results in:
>>
>>
>> Hi Alex,
>>
>> Thanks for this report, and especially for source code to test it,
>> seeing as how I can't immediately spot the problem just from the crash
>> data so far.  I'll get set up and attempt a repro.
>>
>> Actually this looks like it should be relatively easier than the usual
>> sort of "oops, we leaked a pin_user_pages() or unpin_user_pages() call,
>> good luck finding which one" report that I fear the most. :) This one
>> looks more like a crash that happens directly, when calling into the
>> pin_user_pages_remote() code. Which should be a lot easier to solve...
>>
>> btw, if you are set up for it, it would be nice to know what source file
>> and line number corresponds to the RIP (get_pfnblock_flags_mask+0x22)
>> below. But if not, no problem, because I've likely got to do the repro
>> in any case.
> 
> Hey John,
> 
> TBH I'm feeling a lot less confident about this bisect.  This was
> readily reproducible to me on a clean tree a bit ago, but now it
> eludes me.  Let me go back and figure out what's going on before you
> spend any more time on it.  Thanks,
> 

OK. But I'm keeping the repro program! :)  It made it quick and easy to
set up a vfio test, so it was worth doing in any case.

Anyway, I wanted to double check this just out of paranoia, and so
now I have a data point for you: your test program runs and passes for
me using today's linux.git kernel, with an NVIDIA GPU as the vfio
device, and the kernel log is clean. No hint of any problems.

I traced it a little bit:

# sudo bpftrace -e 'kprobe:__get_user_pages { @[kstack()] = count(); }'
Attaching 1 probe...
^C
...
@[
     __get_user_pages+1
     __gup_longterm_locked+176
     vaddr_get_pfn+104
     vfio_pin_pages_remote+113
     vfio_dma_do_map+760
     vfio_iommu_type1_ioctl+761
     ksys_ioctl+135
     __x64_sys_ioctl+22
     do_syscall_64+90
     entry_SYSCALL_64_after_hwframe+73
]: 1

...and also verified that it's not actually pinning any pages with that
path:

$ grep foll_pin /proc/vmstat
nr_foll_pin_acquired 0
nr_foll_pin_released 0


Good luck and let me know if it starts pointing to FOLL_PIN or gup, etc.

thanks,
Alex Williamson April 28, 2020, 4:54 p.m. UTC | #5
On Fri, 24 Apr 2020 15:58:29 -0700
John Hubbard <jhubbard@nvidia.com> wrote:

> On 2020-04-24 13:15, Alex Williamson wrote:
> > On Fri, 24 Apr 2020 12:20:03 -0700
> > John Hubbard <jhubbard@nvidia.com> wrote:
> >   
> >> On 2020-04-24 11:18, Alex Williamson wrote:
> >> ...  
> >>> Hi John,
> >>>
> >>> I'm seeing a regression bisected back to this commit (3faa52c03f44
> >>> mm/gup: track FOLL_PIN pages).  I've attached some vfio-pci test code
> >>> that reproduces this by mmap'ing a page of MMIO space of a device and
> >>> then tries to map that through the IOMMU, so this should be attempting
> >>> a gup/pin of a PFNMAP page.  Previously this failed gracefully (-EFAULT),
> >>> but now results in:  
> >>
> >>
> >> Hi Alex,
> >>
> >> Thanks for this report, and especially for source code to test it,
> >> seeing as how I can't immediately spot the problem just from the crash
> >> data so far.  I'll get set up and attempt a repro.
> >>
> >> Actually this looks like it should be relatively easier than the usual
> >> sort of "oops, we leaked a pin_user_pages() or unpin_user_pages() call,
> >> good luck finding which one" report that I fear the most. :) This one
> >> looks more like a crash that happens directly, when calling into the
> >> pin_user_pages_remote() code. Which should be a lot easier to solve...
> >>
> >> btw, if you are set up for it, it would be nice to know what source file
> >> and line number corresponds to the RIP (get_pfnblock_flags_mask+0x22)
> >> below. But if not, no problem, because I've likely got to do the repro
> >> in any case.  
> > 
> > Hey John,
> > 
> > TBH I'm feeling a lot less confident about this bisect.  This was
> > readily reproducible to me on a clean tree a bit ago, but now it
> > eludes me.  Let me go back and figure out what's going on before you
> > spend any more time on it.  Thanks,
> >   
> 
> OK. But I'm keeping the repro program! :)  It made it quick and easy to
> set up a vfio test, so it was worth doing in any case.

Great, because I've traced my steps, re-bisected and came back to the
same upstream commit with the same test program.  The major difference
is that I thought I was seeing this on pure upstream, but some vfio
code that I'm trying to prepare for upstream snuck in, so this isn't a
pure upstream regression, but the changes I was making worked on v5.6
and do not work with this commit.  Maybe still a latent regression,
maybe a bug in my changes.

> Anyway, I wanted to double check this just out of paranoia, and so
> now I have a data point for you: your test program runs and passes for
> me using today's linux.git kernel, with an NVIDIA GPU as the vfio
> device, and the kernel log is clean. No hint of any problems.

Yep, I agree.  The vfio change I'm experimenting with is to move the
remap_pfn_range() from vfio_pci_mmap() to a vm_ops.fault handler.  This
is why I have the test program creating an mmap of the device mmio
space and then immediately mapping it through the iommu without
touching it.  If the vma gets faulted in the dma mapping path via
pin_user_pages_remote(), I see the crash I reported initially.  If the
test program is modified to access the mmap before doing the dma
mapping, everything works normally.  In either case, the fault handler
is called and satisfies the fault with remap_pfn_range() and returns
VM_FAULT_NOPAGE (vfio patch attached).
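
A rough sketch of that fault-handler approach (based on the description
above, not necessarily the attached patch; device lookup and range
checking are elided):

static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;

        if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                            vma->vm_end - vma->vm_start, vma->vm_page_prot))
                return VM_FAULT_SIGBUS;

        return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct vfio_pci_mmap_ops = {
        .fault = vfio_pci_mmap_fault,
};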

Here's the crash I'm seeing with some further debugging:

BUG: unable to handle page fault for address: ffffa5b8bfe14f38
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0 
Oops: 0000 [#1] SMP NOPTI
CPU: 70 PID: 3343 Comm: vfio-pci-dma-ma Not tainted 5.6.0-3faa52c03f44+ #20
Hardware name: AMD Corporation Diesel/Diesel, BIOS TDL100CB 03/17/2020
RIP: 0010:get_pfnblock_flags_mask+0x22/0x70
Code: c3 0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 8b 05 bc e1 d9 01 48 89 f7 49 89 c8 48 c1 ef 0f 48 85 c0 74 48 48 89 f1 48 c1 e9 17 <48> 8b 04 c8 48 85 c0 74 0b 40 0f b6 ff 48 c1 e7 04 48 01 f8 48 c1
RSP: 0018:ffffb2e3c910fcc8 EFLAGS: 00010216
RAX: ffff95b8bff50000 RBX: 0000000000000001 RCX: 000001fffffd89e7
RDX: 0000000000000002 RSI: fffffec4f3a899ba RDI: 0001fffffd89e751
RBP: ffffb2e3c910fd88 R08: 0000000000000007 R09: ffff95a4aa79fce8
R10: 0000000000000000 R11: ffffb2e3c910f840 R12: 0000000100000000
R13: 0000000000000000 R14: 0000000000000001 R15: ffff959caa266e80
FS:  00007f1a95023740(0000) GS:ffff959caf180000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffa5b8bfe14f38 CR3: 0000000462a1e000 CR4: 00000000003406e0
Call Trace:
 __gup_longterm_locked+0x274/0x620
 vaddr_get_pfn+0x74/0x110 [vfio_iommu_type1]
 vfio_pin_pages_remote+0x6e/0x370 [vfio_iommu_type1]
 vfio_iommu_type1_ioctl+0x8e5/0xaac [vfio_iommu_type1]

get_pfnblock_flags_mask+0x22/0x70 is here:

include/linux/mmzone.h:1254
static inline struct mem_section *__nr_to_section(unsigned long nr)
{
#ifdef CONFIG_SPARSEMEM_EXTREME
        if (!mem_section)
                return NULL;
#endif
====>   if (!mem_section[SECTION_NR_TO_ROOT(nr)])
                return NULL;
        return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
}

The call trace is:

__gup_longterm_locked
  check_and_migrate_cma_pages
    is_migrate_cma_page
      get_pageblock_migratetype
        get_pfnblock_flags_mask
          __get_pfnblock_flags_mask
            get_pageblock_bitmap
              __pfn_to_section
                __nr_to_section

Any ideas why we see this difference between the vma being faulted in
outside of the page pinning path versus faulted by it?

FWIW, here's the stack dump for getting to the fault handler in the
latter case:

 vfio_pci_mmap_fault+0x22/0x130 [vfio_pci]
 __do_fault+0x38/0xd0
 __handle_mm_fault+0xd4b/0x1380
 handle_mm_fault+0xe2/0x1f0
 __get_user_pages+0x188/0x820
 __gup_longterm_locked+0xc8/0x620

Thanks!
Alex
Jason Gunthorpe April 28, 2020, 5:49 p.m. UTC | #6
On Tue, Apr 28, 2020 at 10:54:55AM -0600, Alex Williamson wrote:
>  static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
>  {
>  	struct vfio_pci_device *vdev = device_data;
> @@ -1253,8 +1323,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
>  
> +	vma->vm_ops = &vfio_pci_mmap_ops;
> +
> +#if 1
> +	return 0;
> +#else
>  	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
> -			       req_len, vma->vm_page_prot);
> +			       vma->vm_end - vma->vm_start, vma->vm_page_prot);

The remap_pfn_range here is what tells get_user_pages this is a
non-struct page mapping:

	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;

Which has to be set when the VMA is created, they shouldn't be
modified during fault.
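
A sketch of what that implies for the deferred approach (assumed, not
taken from an actual patch; pdev, index and pgoff come from the elided
validation, as in the quoted hunk):

static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
{
        /* ... index/size validation as in the existing code ... */

        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;

        /* Mark the VMA up front, exactly as remap_pfn_range() would. */
        vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
        vma->vm_ops = &vfio_pci_mmap_ops;

        return 0;       /* population deferred to the .fault handler */
}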

Also the vma code above looked a little strange to me, if you do send
something like this cc me and I can look at it. I did some work like
this for rdma a while ago..

Jason
Alex Williamson April 28, 2020, 7:07 p.m. UTC | #7
On Tue, 28 Apr 2020 14:49:57 -0300
Jason Gunthorpe <jgg@ziepe.ca> wrote:

> On Tue, Apr 28, 2020 at 10:54:55AM -0600, Alex Williamson wrote:
> >  static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> >  {
> >  	struct vfio_pci_device *vdev = device_data;
> > @@ -1253,8 +1323,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> >  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> >  	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
> >  
> > +	vma->vm_ops = &vfio_pci_mmap_ops;
> > +
> > +#if 1
> > +	return 0;
> > +#else
> >  	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
> > -			       req_len, vma->vm_page_prot);
> > +			       vma->vm_end - vma->vm_start, vma->vm_page_prot);  
> 
> The remap_pfn_range here is what tells get_user_pages this is a
> non-struct page mapping:
> 
> 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> 
> Which has to be set when the VMA is created, they shouldn't be
> modified during fault.

Aha, thanks Jason!  So fundamentally, pin_user_pages_remote() should
never have been faulting in this vma since the pages are non-struct
page backed.  Maybe I was just getting lucky before this commit.  For a
VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
error and the vma information that we setup in vfio_pci_mmap().  We
only need the fault handler to trigger for user access, which is what I
see with this change.  That should work for me.
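
In other words, roughly this kind of fallback (a simplified sketch of
the scheme just described, with error handling trimmed):

/* pin_user_pages_remote() has already failed; fall back to the VMA
 * that vfio_pci_mmap() set up. */
down_read(&mm->mmap_sem);
vma = find_vma_intersection(mm, vaddr, vaddr + 1);
if (vma && (vma->vm_flags & VM_PFNMAP)) {
        *pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
        ret = 0;
}
up_read(&mm->mmap_sem);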

> Also the vma code above looked a little strange to me, if you do send
> something like this cc me and I can look at it. I did some work like
> this for rdma a while ago..

Cool, I'll do that.  I'd like to be able to zap the vmas from user
access at a later point and I have doubts that I'm holding the
refs/locks that I need to for that.  Thanks,

Alex
Jason Gunthorpe April 28, 2020, 7:22 p.m. UTC | #8
On Tue, Apr 28, 2020 at 01:07:52PM -0600, Alex Williamson wrote:
> On Tue, 28 Apr 2020 14:49:57 -0300
> Jason Gunthorpe <jgg@ziepe.ca> wrote:
> 
> > On Tue, Apr 28, 2020 at 10:54:55AM -0600, Alex Williamson wrote:
> > >  static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> > >  {
> > >  	struct vfio_pci_device *vdev = device_data;
> > > @@ -1253,8 +1323,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> > >  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> > >  	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
> > >  
> > > +	vma->vm_ops = &vfio_pci_mmap_ops;
> > > +
> > > +#if 1
> > > +	return 0;
> > > +#else
> > >  	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
> > > -			       req_len, vma->vm_page_prot);
> > > +			       vma->vm_end - vma->vm_start, vma->vm_page_prot);  
> > 
> > The remap_pfn_range here is what tells get_user_pages this is a
> > non-struct page mapping:
> > 
> > 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> > 
> > Which has to be set when the VMA is created, they shouldn't be
> > modified during fault.
> 
> Aha, thanks Jason!  So fundamentally, pin_user_pages_remote() should
> never have been faulting in this vma since the pages are non-struct
> page backed. 

gup should not try to pin them.. I think the VM will still call fault
though, not sure from memory?

> Maybe I was just getting lucky before this commit.  For a
> VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
> error and the vma information that we setup in vfio_pci_mmap().

I've written on this before, vfio should not be passing pages to the
iommu that it cannot pin eg it should not touch VM_PFNMAP vma's in the
first place.

It is a use-after-free security issue the way it is..

> only need the fault handler to trigger for user access, which is what I
> see with this change.  That should work for me.
> 
> > Also the vma code above looked a little strange to me, if you do send
> > something like this cc me and I can look at it. I did some work like
> > this for rdma a while ago..
> 
> Cool, I'll do that.  I'd like to be able to zap the vmas from user
> access at a later point and I have doubts that I'm holding the
> refs/locks that I need to for that.  Thanks,

Check rdma_umap_ops, it does what you described (actually it replaces
them with 0 page, but along the way it zaps too).

Jason
Alex Williamson April 28, 2020, 8:12 p.m. UTC | #9
On Tue, 28 Apr 2020 16:22:51 -0300
Jason Gunthorpe <jgg@ziepe.ca> wrote:

> On Tue, Apr 28, 2020 at 01:07:52PM -0600, Alex Williamson wrote:
> > On Tue, 28 Apr 2020 14:49:57 -0300
> > Jason Gunthorpe <jgg@ziepe.ca> wrote:
> >   
> > > On Tue, Apr 28, 2020 at 10:54:55AM -0600, Alex Williamson wrote:  
> > > >  static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> > > >  {
> > > >  	struct vfio_pci_device *vdev = device_data;
> > > > @@ -1253,8 +1323,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
> > > >  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> > > >  	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
> > > >  
> > > > +	vma->vm_ops = &vfio_pci_mmap_ops;
> > > > +
> > > > +#if 1
> > > > +	return 0;
> > > > +#else
> > > >  	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
> > > > -			       req_len, vma->vm_page_prot);
> > > > +			       vma->vm_end - vma->vm_start, vma->vm_page_prot);    
> > > 
> > > The remap_pfn_range here is what tells get_user_pages this is a
> > > non-struct page mapping:
> > > 
> > > 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> > > 
> > > Which has to be set when the VMA is created, they shouldn't be
> > > modified during fault.  
> > 
> > Aha, thanks Jason!  So fundamentally, pin_user_pages_remote() should
> > never have been faulting in this vma since the pages are non-struct
> > page backed.   
> 
> gup should not try to pin them.. I think the VM will still call fault
> though, not sure from memory?

Hmm, at commit 3faa52c03f44 the behavior is that I don't see a fault on
pin, maybe that's a bug.  But trying to rebase to current top of tree,
now my DMA mapping gets an -EFAULT, so something is still funky :-\

> > Maybe I was just getting lucky before this commit.  For a
> > VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
> > error and the vma information that we setup in vfio_pci_mmap().  
> 
> I've written on this before, vfio should not be passing pages to the
> iommu that it cannot pin eg it should not touch VM_PFNMAP vma's in the
> first place.
> 
> It is a use-after-free security issue the way it is..

Where is the use-after-free?  Here I'm trying to map device mmio space
through the iommu, which we need to enable p2p when the user owns
multiple devices.  The device is owned by the user, bound to vfio-pci,
and can't be unbound while the user has it open.  The iommu mappings
are torn down on release.  I guess I don't understand the problem.

> > only need the fault handler to trigger for user access, which is what I
> > see with this change.  That should work for me.
> >   
> > > Also the vma code above looked a little strange to me, if you do send
> > > something like this cc me and I can look at it. I did some work like
> > > this for rdma a while ago..  
> > 
> > Cool, I'll do that.  I'd like to be able to zap the vmas from user
> > access at a later point and I have doubts that I'm holding the
> > refs/locks that I need to for that.  Thanks,  
> 
> Check rdma_umap_ops, it does what you described (actually it replaces
> them with 0 page, but along the way it zaps too).

Ok, thanks,

Alex
Jason Gunthorpe April 29, 2020, 12:29 a.m. UTC | #10
On Tue, Apr 28, 2020 at 02:12:23PM -0600, Alex Williamson wrote:

> > > Maybe I was just getting lucky before this commit.  For a
> > > VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
> > > error and the vma information that we setup in vfio_pci_mmap().  
> > 
> > I've written on this before, vfio should not be passing pages to the
> > iommu that it cannot pin eg it should not touch VM_PFNMAP vma's in the
> > first place.
> > 
> > It is a use-after-free security issue the way it is..
> 
> Where is the user after free?  Here I'm trying to map device mmio space
> through the iommu, which we need to enable p2p when the user owns
> multiple devices.

Yes, I gathered what the intent was..

> The device is owned by the user, bound to vfio-pci, and can't be
> unbound while the user has it open.  The iommu mappings are torn
> down on release.  I guess I don't understand the problem.

For PFNMAP VMAs the lifecycle rule is basically that the PFN inside
the VMA can only be used inside the mmap_sem that read it. Ie you
cannot take a PFN outside the mmap_sem and continue to use it.

This is because the owner of the VMA owns the lifetime of that PFN,
and under the write side of the mmap_sem it can zap the PFN, or close
the VMA. Afterwards the VMA owner knows that there are no active
reference to the PFN in the system and can reclaim the PFN

ie the PFNMAP has no per-page pin counter. All lifetime revolves around
the mmap_sem and the vma.

What vfio does is take the PFN out of the mmap_sem and program it into
the iommu.

So when the VMA owner decides the PFN has no references, it actually
doesn't: vfio continues to access it beyond its permitted lifetime.

HW like mlx5 and GPUs have BAR pages which have security
properties. Once the PFN is returned to the driver the security
context of the PFN can be reset and re-assigned to another
process. Using VFIO a hostile user space can retain access to the BAR
page and upon its reassignment access a security context they were not
permitted to access.

This is why GUP does not return PFNMAP pages and vfio should not carry
a reference outside the mmap_sem. It breaks all the lifetime rules.
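
Stated as code, the rule is roughly this (illustrative sketch; mm, vaddr
and the surrounding error handling are assumed):

down_read(&mm->mmap_sem);
vma = find_vma_intersection(mm, vaddr, vaddr + 1);
if (vma && (vma->vm_flags & VM_PFNMAP) && !follow_pfn(vma, vaddr, &pfn)) {
        /* 'pfn' may be used here, while mmap_sem is held ... */
}
up_read(&mm->mmap_sem);
/*
 * ... but not after the unlock: the VMA owner may zap or reclaim it at
 * any point from here on. Programming 'pfn' into an IOMMU now is exactly
 * the lifetime violation described above.
 */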

Jason
Alex Williamson April 29, 2020, 7:56 p.m. UTC | #11
On Tue, 28 Apr 2020 21:29:03 -0300
Jason Gunthorpe <jgg@ziepe.ca> wrote:

> On Tue, Apr 28, 2020 at 02:12:23PM -0600, Alex Williamson wrote:
> 
> > > > Maybe I was just getting lucky before this commit.  For a
> > > > VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
> > > > error and the vma information that we setup in vfio_pci_mmap().    
> > > 
> > > I've written on this before, vfio should not be passing pages to the
> > > iommu that it cannot pin eg it should not touch VM_PFNMAP vma's in the
> > > first place.
> > > 
> > > It is a use-after-free security issue the way it is..  
> > 
> > Where is the user after free?  Here I'm trying to map device mmio space
> > through the iommu, which we need to enable p2p when the user owns
> > multiple devices.  
> 
> Yes, I gathered what the intent was..
> 
> > The device is owned by the user, bound to vfio-pci, and can't be
> > unbound while the user has it open.  The iommu mappings are torn
> > down on release.  I guess I don't understand the problem.  
> 
> For PFNMAP VMAs the lifecycle rule is basically that the PFN inside
> the VMA can only be used inside the mmap_sem that read it. Ie you
> cannot take a PFN outside the mmap_sem and continue to use it.
> 
> This is because the owner of the VMA owns the lifetime of that PFN,
> and under the write side of the mmap_sem it can zap the PFN, or close
> the VMA. Afterwards the VMA owner knows that there are no active
> reference to the PFN in the system and can reclaim the PFN
> 
> ie the PFNMAP has no per-page pin counter. All lifetime revolves around
> the mmap_sem and the vma.
> 
> What vfio does is take the PFN out of the mmap_sem and program it into
> the iommu.
> 
> So when the VMA owner decides the PFN has no references, it actually
> doesn't: vfio continues to access it beyond its permitted lifetime.
> 
> HW like mlx5 and GPUs have BAR pages which have security
> properties. Once the PFN is returned to the driver the security
> context of the PFN can be reset and re-assigned to another
> process. Using VFIO a hostile user space can retain access to the BAR
> page and upon its reassignment access a security context they were not
> permitted to access.
> 
> This is why GUP does not return PFNMAP pages and vfio should not carry
> a reference outside the mmap_sem. It breaks all the lifetime rules.

Thanks for the explanation.  I'm inferring that there is no solution to
this, but why can't we use mmu notifiers to invalidate the iommu on zap
or close?  I know that at least QEMU won't consider these sorts of
mapping fatal, so we could possibly change the default and make support
for such mappings opt-in, but I don't know if I'd break DPDK, or
potentially users within QEMU that make use of p2p between devices.
Thanks,

Alex
Jason Gunthorpe April 29, 2020, 11:03 p.m. UTC | #12
On Wed, Apr 29, 2020 at 01:56:33PM -0600, Alex Williamson wrote:
> On Tue, 28 Apr 2020 21:29:03 -0300
> Jason Gunthorpe <jgg@ziepe.ca> wrote:
> 
> > On Tue, Apr 28, 2020 at 02:12:23PM -0600, Alex Williamson wrote:
> > 
> > > > > Maybe I was just getting lucky before this commit.  For a
> > > > > VM_PFNMAP, vaddr_get_pfn() only needs pin_user_pages_remote() to return
> > > > > error and the vma information that we setup in vfio_pci_mmap().    
> > > > 
> > > > I've written on this before, vfio should not be passing pages to the
> > > > iommu that it cannot pin eg it should not touch VM_PFNMAP vma's in the
> > > > first place.
> > > > 
> > > > It is a use-after-free security issue the way it is..  
> > > 
> > > Where is the user after free?  Here I'm trying to map device mmio space
> > > through the iommu, which we need to enable p2p when the user owns
> > > multiple devices.  
> > 
> > Yes, I gathered what the intent was..
> > 
> > > The device is owned by the user, bound to vfio-pci, and can't be
> > > unbound while the user has it open.  The iommu mappings are torn
> > > down on release.  I guess I don't understand the problem.  
> > 
> > For PFNMAP VMAs the lifecycle rule is basically that the PFN inside
> > the VMA can only be used inside the mmap_sem that read it. Ie you
> > cannot take a PFN outside the mmap_sem and continue to use it.
> > 
> > This is because the owner of the VMA owns the lifetime of that PFN,
> > and under the write side of the mmap_sem it can zap the PFN, or close
> > the VMA. Afterwards the VMA owner knows that there are no active
> > reference to the PFN in the system and can reclaim the PFN
> > 
> > ie the PFNMAP has no per-page pin counter. All lifetime revolves around
> > the mmap_sem and the vma.
> > 
> > What vfio does is take the PFN out of the mmap_sem and program it into
> > the iommu.
> > 
> > So when the VMA owner decides the PFN has no references, it actually
> > doesn't: vfio continues to access it beyond its permitted lifetime.
> > 
> > HW like mlx5 and GPUs have BAR pages which have security
> > properties. Once the PFN is returned to the driver the security
> > context of the PFN can be reset and re-assigned to another
> > process. Using VFIO a hostile user space can retain access to the BAR
> > page and upon its reassignment access a security context they were not
> > permitted to access.
> > 
> > This is why GUP does not return PFNMAP pages and vfio should not carry
> > a reference outside the mmap_sem. It breaks all the lifetime rules.
> 
> Thanks for the explanation.  I'm inferring that there is no solution to
> this, 

Not a particularly good one unfortunately. I've been wanting to use
P2P_DMA pages to solve these kinds of things but they are kind of
expensive.

I have a copy of some draft patches trying to do this

> but why can't we use mmu notifiers to invalidate the iommu on zap or
> close?

Hum.. I think with the new mmu interval notifiers vfio might be able
to manage that without a huge amount of trouble. But the iommu
invalidation needs to be synchronous from a mmu notifier callback - is
that feasible?
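
A very rough sketch of the interval-notifier idea (assumed API usage and
hypothetical vfio names; the 'notifier' field and vfio_dma_unmap_range()
below don't exist today):

static bool vfio_dma_invalidate(struct mmu_interval_notifier *mni,
                                const struct mmu_notifier_range *range,
                                unsigned long cur_seq)
{
        struct vfio_dma *dma = container_of(mni, struct vfio_dma, notifier);

        if (!mmu_notifier_range_blockable(range))
                return false;

        mmu_interval_set_seq(mni, cur_seq);
        /* Synchronously drop the IOMMU mappings covering this range. */
        vfio_dma_unmap_range(dma, range->start, range->end - range->start);
        return true;
}

static const struct mmu_interval_notifier_ops vfio_dma_mni_ops = {
        .invalidate = vfio_dma_invalidate,
};

/* At VFIO_IOMMU_MAP_DMA time: */
mmu_interval_notifier_insert(&dma->notifier, current->mm, dma->vaddr,
                             dma->size, &vfio_dma_mni_ops);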

But even so, we have all this stuff now for authorizing PCI P2P which
this design completely ignores as well. :(

> I know that at least QEMU won't consider these sorts of mapping
> fatal, so we could possibly change the default and make support for
> such mappings opt-in, but I don't know if I'd break DPDK, or
> potentially users within QEMU that make use of p2p between devices.

I'd heard this was mostly for GPU device assignment? I'd be surprised
if DPDK used this..

Jason

Patch

diff --git a/Documentation/core-api/pin_user_pages.rst b/Documentation/core-api/pin_user_pages.rst
index 1d490155ecd7..9829345428f8 100644
--- a/Documentation/core-api/pin_user_pages.rst
+++ b/Documentation/core-api/pin_user_pages.rst
@@ -173,8 +173,8 @@  CASE 4: Pinning for struct page manipulation only
 -------------------------------------------------
 Here, normal GUP calls are sufficient, so neither flag needs to be set.
 
-page_dma_pinned(): the whole point of pinning
-=============================================
+page_maybe_dma_pinned(): the whole point of pinning
+===================================================
 
 The whole point of marking pages as "DMA-pinned" or "gup-pinned" is to be able
 to query, "is this page DMA-pinned?" That allows code such as page_mkclean()
@@ -186,7 +186,7 @@  and debates (see the References at the end of this document). It's a TODO item
 here: fill in the details once that's worked out. Meanwhile, it's safe to say
 that having this available: ::
 
-        static inline bool page_dma_pinned(struct page *page)
+        static inline bool page_maybe_dma_pinned(struct page *page)
 
 ...is a prerequisite to solving the long-running gup+DMA problem.
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 52269e56c514..8d4f9f4094f4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1001,6 +1001,8 @@  static inline void get_page(struct page *page)
 	page_ref_inc(page);
 }
 
+bool __must_check try_grab_page(struct page *page, unsigned int flags);
+
 static inline __must_check bool try_get_page(struct page *page)
 {
 	page = compound_head(page);
@@ -1029,29 +1031,79 @@  static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
-/**
- * unpin_user_page() - release a gup-pinned page
- * @page:            pointer to page to be released
+/*
+ * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
+ * the page's refcount so that two separate items are tracked: the original page
+ * reference count, and also a new count of how many pin_user_pages() calls were
+ * made against the page. ("gup-pinned" is another term for the latter).
+ *
+ * With this scheme, pin_user_pages() becomes special: such pages are marked as
+ * distinct from normal pages. As such, the unpin_user_page() call (and its
+ * variants) must be used in order to release gup-pinned pages.
+ *
+ * Choice of value:
+ *
+ * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
+ * counts with respect to pin_user_pages() and unpin_user_page() becomes
+ * simpler, due to the fact that adding an even power of two to the page
+ * refcount has the effect of using only the upper N bits, for the code that
+ * counts up using the bias value. This means that the lower bits are left for
+ * the exclusive use of the original code that increments and decrements by one
+ * (or at least, by much smaller values than the bias value).
  *
- * Pages that were pinned via pin_user_pages*() must be released via either
- * unpin_user_page(), or one of the unpin_user_pages*() routines. This is so
- * that eventually such pages can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special handling.
+ * Of course, once the lower bits overflow into the upper bits (and this is
+ * OK, because subtraction recovers the original values), then visual inspection
+ * no longer suffices to directly view the separate counts. However, for normal
+ * applications that don't have huge page reference counts, this won't be an
+ * issue.
  *
- * unpin_user_page() and put_page() are not interchangeable, despite this early
- * implementation that makes them look the same. unpin_user_page() calls must
- * be perfectly matched up with pin*() calls.
+ * Locking: the lockless algorithm described in page_cache_get_speculative()
+ * and page_cache_gup_pin_speculative() provides safe operation for
+ * get_user_pages and page_mkclean and other calls that race to set up page
+ * table entries.
  */
-static inline void unpin_user_page(struct page *page)
-{
-	put_page(page);
-}
+#define GUP_PIN_COUNTING_BIAS (1U << 10)
 
+void unpin_user_page(struct page *page);
 void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty);
-
 void unpin_user_pages(struct page **pages, unsigned long npages);
 
+/**
+ * page_maybe_dma_pinned() - report if a page is pinned for DMA.
+ *
+ * This function checks if a page has been pinned via a call to
+ * pin_user_pages*().
+ *
+ * For non-huge pages, the return value is partially fuzzy: false is not fuzzy,
+ * because it means "definitely not pinned for DMA", but true means "probably
+ * pinned for DMA, but possibly a false positive due to having at least
+ * GUP_PIN_COUNTING_BIAS worth of normal page references".
+ *
+ * False positives are OK, because: a) it's unlikely for a page to get that many
+ * refcounts, and b) all the callers of this routine are expected to be able to
+ * deal gracefully with a false positive.
+ *
+ * For more information, please see Documentation/vm/pin_user_pages.rst.
+ *
+ * @page:	pointer to page to be queried.
+ * @Return:	True, if it is likely that the page has been "dma-pinned".
+ *		False, if the page is definitely not dma-pinned.
+ */
+static inline bool page_maybe_dma_pinned(struct page *page)
+{
+	/*
+	 * page_ref_count() is signed. If that refcount overflows, then
+	 * page_ref_count() returns a negative value, and callers will avoid
+	 * further incrementing the refcount.
+	 *
+	 * Here, for that overflow case, use the signed bit to count a little
+	 * bit higher via unsigned math, and thus still get an accurate result.
+	 */
+	return ((unsigned int)page_ref_count(compound_head(page))) >=
+		GUP_PIN_COUNTING_BIAS;
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
diff --git a/mm/gup.c b/mm/gup.c
index c8affbea2019..a2356482e1ea 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -44,6 +44,135 @@  static inline struct page *try_get_compound_head(struct page *page, int refs)
 	return head;
 }
 
+/*
+ * try_grab_compound_head() - attempt to elevate a page's refcount, by a
+ * flags-dependent amount.
+ *
+ * "grab" names in this file mean, "look at flags to decide whether to use
+ * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
+ *
+ * Either FOLL_PIN or FOLL_GET (or neither) must be set, but not both at the
+ * same time. (That's true throughout the get_user_pages*() and
+ * pin_user_pages*() APIs.) Cases:
+ *
+ *    FOLL_GET: page's refcount will be incremented by 1.
+ *    FOLL_PIN: page's refcount will be incremented by GUP_PIN_COUNTING_BIAS.
+ *
+ * Return: head page (with refcount appropriately incremented) for success, or
+ * NULL upon failure. If neither FOLL_GET nor FOLL_PIN was set, that's
+ * considered failure, and furthermore, a likely bug in the caller, so a warning
+ * is also emitted.
+ */
+static __maybe_unused struct page *try_grab_compound_head(struct page *page,
+							  int refs,
+							  unsigned int flags)
+{
+	if (flags & FOLL_GET)
+		return try_get_compound_head(page, refs);
+	else if (flags & FOLL_PIN) {
+		refs *= GUP_PIN_COUNTING_BIAS;
+		return try_get_compound_head(page, refs);
+	}
+
+	WARN_ON_ONCE(1);
+	return NULL;
+}
+
+/**
+ * try_grab_page() - elevate a page's refcount by a flag-dependent amount
+ *
+ * This might not do anything at all, depending on the flags argument.
+ *
+ * "grab" names in this file mean, "look at flags to decide whether to use
+ * FOLL_PIN or FOLL_GET behavior, when incrementing the page's refcount.
+ *
+ * @page:    pointer to page to be grabbed
+ * @flags:   gup flags: these are the FOLL_* flag values.
+ *
+ * Either FOLL_PIN or FOLL_GET (or neither) may be set, but not both at the same
+ * time. Cases:
+ *
+ *    FOLL_GET: page's refcount will be incremented by 1.
+ *    FOLL_PIN: page's refcount will be incremented by GUP_PIN_COUNTING_BIAS.
+ *
+ * Return: true for success, or if no action was required (if neither FOLL_PIN
+ * nor FOLL_GET was set, nothing is done). False for failure: FOLL_GET or
+ * FOLL_PIN was set, but the page could not be grabbed.
+ */
+bool __must_check try_grab_page(struct page *page, unsigned int flags)
+{
+	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));
+
+	if (flags & FOLL_GET)
+		return try_get_page(page);
+	else if (flags & FOLL_PIN) {
+		page = compound_head(page);
+
+		if (WARN_ON_ONCE(page_ref_count(page) <= 0))
+			return false;
+
+		page_ref_add(page, GUP_PIN_COUNTING_BIAS);
+	}
+
+	return true;
+}
+
+#ifdef CONFIG_DEV_PAGEMAP_OPS
+static bool __unpin_devmap_managed_user_page(struct page *page)
+{
+	int count;
+
+	if (!page_is_devmap_managed(page))
+		return false;
+
+	count = page_ref_sub_return(page, GUP_PIN_COUNTING_BIAS);
+
+	/*
+	 * devmap page refcounts are 1-based, rather than 0-based: if
+	 * refcount is 1, then the page is free and the refcount is
+	 * stable because nobody holds a reference on the page.
+	 */
+	if (count == 1)
+		free_devmap_managed_page(page);
+	else if (!count)
+		__put_page(page);
+
+	return true;
+}
+#else
+static bool __unpin_devmap_managed_user_page(struct page *page)
+{
+	return false;
+}
+#endif /* CONFIG_DEV_PAGEMAP_OPS */
+
+/**
+ * unpin_user_page() - release a dma-pinned page
+ * @page:            pointer to page to be released
+ *
+ * Pages that were pinned via pin_user_pages*() must be released via either
+ * unpin_user_page(), or one of the unpin_user_pages*() routines. This is so
+ * that such pages can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special handling.
+ */
+void unpin_user_page(struct page *page)
+{
+	page = compound_head(page);
+
+	/*
+	 * For devmap managed pages we need to catch refcount transition from
+	 * GUP_PIN_COUNTING_BIAS to 1, when refcount reach one it means the
+	 * page is free and we need to inform the device driver through
+	 * callback. See include/linux/memremap.h and HMM for details.
+	 */
+	if (__unpin_devmap_managed_user_page(page))
+		return;
+
+	if (page_ref_sub_and_test(page, GUP_PIN_COUNTING_BIAS))
+		__put_page(page);
+}
+EXPORT_SYMBOL(unpin_user_page);
+
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.
@@ -230,10 +359,11 @@  static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 
 	page = vm_normal_page(vma, address, pte);
-	if (!page && pte_devmap(pte) && (flags & FOLL_GET)) {
+	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
 		/*
-		 * Only return device mapping pages in the FOLL_GET case since
-		 * they are only valid while holding the pgmap reference.
+		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
+		 * case since they are only valid while holding the pgmap
+		 * reference.
 		 */
 		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
 		if (*pgmap)
@@ -271,11 +401,10 @@  static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto retry;
 	}
 
-	if (flags & FOLL_GET) {
-		if (unlikely(!try_get_page(page))) {
-			page = ERR_PTR(-ENOMEM);
-			goto out;
-		}
+	/* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */
+	if (unlikely(!try_grab_page(page, flags))) {
+		page = ERR_PTR(-ENOMEM);
+		goto out;
 	}
 	if (flags & FOLL_TOUCH) {
 		if ((flags & FOLL_WRITE) &&
@@ -537,7 +666,7 @@  static struct page *follow_page_mask(struct vm_area_struct *vma,
 	/* make this handle hugepd */
 	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
 	if (!IS_ERR(page)) {
-		BUG_ON(flags & FOLL_GET);
+		WARN_ON_ONCE(flags & (FOLL_GET | FOLL_PIN));
 		return page;
 	}
 
@@ -1675,6 +1804,15 @@  long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 {
 	return 0;
 }
+
+static long __get_user_pages_remote(struct task_struct *tsk,
+				    struct mm_struct *mm,
+				    unsigned long start, unsigned long nr_pages,
+				    unsigned int gup_flags, struct page **pages,
+				    struct vm_area_struct **vmas, int *locked)
+{
+	return 0;
+}
 #endif /* !CONFIG_MMU */
 
 /*
@@ -1877,7 +2015,10 @@  static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 		struct page *page = pages[--(*nr)];
 
 		ClearPageReferenced(page);
-		put_page(page);
+		if (flags & FOLL_PIN)
+			unpin_user_page(page);
+		else
+			put_page(page);
 	}
 }
 
@@ -1919,7 +2060,7 @@  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
-		head = try_get_compound_head(page, 1);
+		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
 
@@ -1980,7 +2121,10 @@  static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 		}
 		SetPageReferenced(page);
 		pages[*nr] = page;
-		get_page(page);
+		if (unlikely(!try_grab_page(page, flags))) {
+			undo_dev_pagemap(nr, nr_start, flags, pages);
+			return 0;
+		}
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
@@ -2056,6 +2200,9 @@  static int record_subpages(struct page *page, unsigned long addr,
 
 static void put_compound_head(struct page *page, int refs, unsigned int flags)
 {
+	if (flags & FOLL_PIN)
+		refs *= GUP_PIN_COUNTING_BIAS;
+
 	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
 	/*
 	 * Calling put_page() for each ref is unnecessarily slow. Only the last
@@ -2099,7 +2246,7 @@  static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_get_compound_head(head, refs);
+	head = try_grab_compound_head(head, refs, flags);
 	if (!head)
 		return 0;
 
@@ -2159,7 +2306,7 @@  static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_get_compound_head(pmd_page(orig), refs);
+	head = try_grab_compound_head(pmd_page(orig), refs, flags);
 	if (!head)
 		return 0;
 
@@ -2193,7 +2340,7 @@  static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_get_compound_head(pud_page(orig), refs);
+	head = try_grab_compound_head(pud_page(orig), refs, flags);
 	if (!head)
 		return 0;
 
@@ -2222,7 +2369,7 @@  static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	refs = record_subpages(page, addr, end, pages + *nr);
 
-	head = try_get_compound_head(pgd_page(orig), refs);
+	head = try_grab_compound_head(pgd_page(orig), refs, flags);
 	if (!head)
 		return 0;
 
@@ -2505,11 +2652,11 @@  static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 
 /**
  * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
+ * @start:      starting user address
+ * @nr_pages:   number of pages from start to pin
+ * @gup_flags:  flags modifying pin behaviour
+ * @pages:      array that receives pointers to the pages pinned.
+ *              Should be at least nr_pages long.
  *
  * Attempt to pin user pages in memory without taking mm->mmap_sem.
  * If not successful, it will fall back to taking the lock and
@@ -2543,9 +2690,12 @@  EXPORT_SYMBOL_GPL(get_user_pages_fast);
 /**
  * pin_user_pages_fast() - pin user pages in memory without taking locks
  *
- * For now, this is a placeholder function, until various call sites are
- * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
- * this is identical to get_user_pages_fast().
+ * Nearly the same as get_user_pages_fast(), except that FOLL_PIN is set. See
+ * get_user_pages_fast() for documentation on the function arguments, because
+ * the arguments here are identical.
+ *
+ * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
+ * see Documentation/vm/pin_user_pages.rst for further details.
  *
  * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
  * is NOT intended for Case 2 (RDMA: long-term pins).
@@ -2553,21 +2703,24 @@  EXPORT_SYMBOL_GPL(get_user_pages_fast);
 int pin_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
-	/*
-	 * This is a placeholder, until the pin functionality is activated.
-	 * Until then, just behave like the corresponding get_user_pages*()
-	 * routine.
-	 */
-	return get_user_pages_fast(start, nr_pages, gup_flags, pages);
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+		return -EINVAL;
+
+	gup_flags |= FOLL_PIN;
+	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
 /**
  * pin_user_pages_remote() - pin pages of a remote process (task != current)
  *
- * For now, this is a placeholder function, until various call sites are
- * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
- * this is identical to get_user_pages_remote().
+ * Nearly the same as get_user_pages_remote(), except that FOLL_PIN is set. See
+ * get_user_pages_remote() for documentation on the function arguments, because
+ * the arguments here are identical.
+ *
+ * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
+ * see Documentation/vm/pin_user_pages.rst for details.
  *
  * This is intended for Case 1 (DIO) in Documentation/vm/pin_user_pages.rst. It
  * is NOT intended for Case 2 (RDMA: long-term pins).
@@ -2577,22 +2730,24 @@  long pin_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
 {
-	/*
-	 * This is a placeholder, until the pin functionality is activated.
-	 * Until then, just behave like the corresponding get_user_pages*()
-	 * routine.
-	 */
-	return get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags, pages,
-				     vmas, locked);
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+		return -EINVAL;
+
+	gup_flags |= FOLL_PIN;
+	return __get_user_pages_remote(tsk, mm, start, nr_pages, gup_flags,
+				       pages, vmas, locked);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
 
 /**
  * pin_user_pages() - pin user pages in memory for use by other devices
  *
- * For now, this is a placeholder function, until various call sites are
- * converted to use the correct get_user_pages*() or pin_user_pages*() API. So,
- * this is identical to get_user_pages().
+ * Nearly the same as get_user_pages(), except that FOLL_TOUCH is not set, and
+ * FOLL_PIN is set.
+ *
+ * FOLL_PIN means that the pages must be released via unpin_user_page(). Please
+ * see Documentation/core-api/pin_user_pages.rst for details.
  *
  * This is intended for Case 1 (DIO) in Documentation/core-api/pin_user_pages.rst. It
  * is NOT intended for Case 2 (RDMA: long-term pins).
@@ -2601,11 +2756,12 @@  long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas)
 {
-	/*
-	 * This is a placeholder, until the pin functionality is activated.
-	 * Until then, just behave like the corresponding get_user_pages*()
-	 * routine.
-	 */
-	return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+		return -EINVAL;
+
+	gup_flags |= FOLL_PIN;
+	return __gup_longterm_locked(current, current->mm, start, nr_pages,
+				     pages, vmas, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages);
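
(With all three pin_user_pages*() entry points now setting FOLL_PIN, the
refcount bias becomes observable. Below is a simplified, illustrative view of
the detection helper added in this patch's mm.h hunk; the in-tree
page_maybe_dma_pinned() additionally copes with refcount overflow, and false
positives are expected and acceptable.)

static bool demo_maybe_dma_pinned(struct page *page)
{
        /*
         * Each FOLL_PIN adds GUP_PIN_COUNTING_BIAS (1024) to the head page's
         * refcount, so "a lot of references" is read as "might be dma-pinned".
         */
        return page_ref_count(compound_head(page)) >= GUP_PIN_COUNTING_BIAS;
}
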
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199f9a11..580098e115bd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -958,6 +958,11 @@  struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 	 */
 	WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set");
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return NULL;
+
 	if (flags & FOLL_WRITE && !pmd_write(*pmd))
 		return NULL;
 
@@ -973,7 +978,7 @@  struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 	 * device mapped pages can only be returned if the
 	 * caller will manage the page reference count.
 	 */
-	if (!(flags & FOLL_GET))
+	if (!(flags & (FOLL_GET | FOLL_PIN)))
 		return ERR_PTR(-EEXIST);
 
 	pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
@@ -981,7 +986,8 @@  struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 	if (!*pgmap)
 		return ERR_PTR(-EFAULT);
 	page = pfn_to_page(pfn);
-	get_page(page);
+	if (!try_grab_page(page, flags))
+		page = ERR_PTR(-ENOMEM);
 
 	return page;
 }
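
(The devmap paths above, and the other follow_*() changes below, take their
reference via try_grab_page(), added earlier in this patch. Roughly, it behaves
as sketched here; demo_grab_page() is an illustrative stand-in, and the real
helper also warns if FOLL_GET and FOLL_PIN are somehow both set.)

static bool demo_grab_page(struct page *page, unsigned int flags)
{
        if (flags & FOLL_GET)
                return try_get_page(page);              /* refcount += 1 */

        if (flags & FOLL_PIN) {
                page = compound_head(page);
                /* A saturated/overflowed refcount means the grab must fail. */
                if (WARN_ON_ONCE(page_ref_count(page) <= 0))
                        return false;
                page_ref_add(page, GUP_PIN_COUNTING_BIAS);  /* refcount += 1024 */
        }

        /* Neither flag set: nothing to grab, which counts as success. */
        return true;
}
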
@@ -1101,6 +1107,11 @@  struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 	if (flags & FOLL_WRITE && !pud_write(*pud))
 		return NULL;
 
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return NULL;
+
 	if (pud_present(*pud) && pud_devmap(*pud))
 		/* pass */;
 	else
@@ -1112,8 +1123,10 @@  struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 	/*
 	 * device mapped pages can only be returned if the
 	 * caller will manage the page reference count.
+	 *
+	 * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
 	 */
-	if (!(flags & FOLL_GET))
+	if (!(flags & (FOLL_GET | FOLL_PIN)))
 		return ERR_PTR(-EEXIST);
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
@@ -1121,7 +1134,8 @@  struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 	if (!*pgmap)
 		return ERR_PTR(-EFAULT);
 	page = pfn_to_page(pfn);
-	get_page(page);
+	if (!try_grab_page(page, flags))
+		page = ERR_PTR(-ENOMEM);
 
 	return page;
 }
@@ -1497,8 +1511,13 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	page = pmd_page(*pmd);
 	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+	if (!try_grab_page(page, flags))
+		return ERR_PTR(-ENOMEM);
+
 	if (flags & FOLL_TOUCH)
 		touch_pmd(vma, addr, pmd, flags);
+
 	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
 		/*
 		 * We don't mlock() pte-mapped THPs. This way we can avoid
@@ -1535,8 +1554,6 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 skip_mlock:
 	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
 	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-	if (flags & FOLL_GET)
-		get_page(page);
 
 out:
 	return page;
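
(One point worth noting for the transparent-huge-page path above: the reference
or pin is taken on the THP head page, pmd_page(*pmd), while GUP ultimately
returns the subpage for the faulting address. Assuming page_maybe_dma_pinned()
consults the compound head, as the helper added by this patch does, the pin is
then visible from any subpage. A trivial illustrative check, not in the tree:)

static void demo_thp_pin_visibility(struct page *subpage)
{
        struct page *head = compound_head(subpage);

        /* Both calls end up reading the same head-page refcount. */
        WARN_ON(page_maybe_dma_pinned(subpage) != page_maybe_dma_pinned(head));
}
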
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd8737a94bec..ba1de6bc1402 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4375,19 +4375,6 @@  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
 		page = pte_page(huge_ptep_get(pte));
 
-		/*
-		 * Instead of doing 'try_get_page()' below in the same_page
-		 * loop, just check the count once here.
-		 */
-		if (unlikely(page_count(page) <= 0)) {
-			if (pages) {
-				spin_unlock(ptl);
-				remainder = 0;
-				err = -ENOMEM;
-				break;
-			}
-		}
-
 		/*
 		 * If subpage information not requested, update counters
 		 * and skip the same_page loop below.
@@ -4405,7 +4392,22 @@  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 same_page:
 		if (pages) {
 			pages[i] = mem_map_offset(page, pfn_offset);
-			get_page(pages[i]);
+			/*
+			 * try_grab_page() should always succeed here, because:
+			 * a) we hold the ptl lock, and b) we've just checked
+			 * that the huge page is present in the page tables. If
+			 * the huge page is present, then the tail pages must
+			 * also be present. The ptl prevents the head page and
+			 * tail pages from being rearranged in any way. So this
+			 * page must be available at this point, unless the page
+			 * refcount overflowed:
+			 */
+			if (WARN_ON_ONCE(!try_grab_page(pages[i], flags))) {
+				spin_unlock(ptl);
+				remainder = 0;
+				err = -ENOMEM;
+				break;
+			}
 		}
 
 		if (vmas)
@@ -4965,6 +4967,12 @@  follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t pte;
+
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return NULL;
+
 retry:
 	ptl = pmd_lockptr(mm, pmd);
 	spin_lock(ptl);
@@ -4977,8 +4985,18 @@  follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 	pte = huge_ptep_get((pte_t *)pmd);
 	if (pte_present(pte)) {
 		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
-		if (flags & FOLL_GET)
-			get_page(page);
+		/*
+		 * try_grab_page() should always succeed here, because: a) we
+		 * hold the pmd (ptl) lock, and b) we've just checked that the
+		 * huge pmd (head) page is present in the page tables. The ptl
+		 * prevents the head page and tail pages from being rearranged
+		 * in any way. So this page must be available at this point,
+		 * unless the page refcount overflowed:
+		 */
+		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
+			page = NULL;
+			goto out;
+		}
 	} else {
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
@@ -4999,7 +5017,7 @@  struct page * __weak
 follow_huge_pud(struct mm_struct *mm, unsigned long address,
 		pud_t *pud, int flags)
 {
-	if (flags & FOLL_GET)
+	if (flags & (FOLL_GET | FOLL_PIN))
 		return NULL;
 
 	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
@@ -5008,7 +5026,7 @@  follow_huge_pud(struct mm_struct *mm, unsigned long address,
 struct page * __weak
 follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int flags)
 {
-	if (flags & FOLL_GET)
+	if (flags & (FOLL_GET | FOLL_PIN))
 		return NULL;
 
 	return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
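
(Finally, for symmetry: releasing a FOLL_PIN reference amounts to removing the
same bias that try_grab_page() added. The real release path is
unpin_user_page(), introduced earlier in this series; the sketch below shows
only the conceptual core, with the demo_ name and the lack of huge-page
refinements being illustrative simplifications.)

static void demo_unpin(struct page *page)
{
        page = compound_head(page);

        /* Drop the GUP_PIN_COUNTING_BIAS that pin_user_pages*() added. */
        if (page_ref_sub_and_test(page, GUP_PIN_COUNTING_BIAS))
                __put_page(page);       /* that was the last reference */
}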