
[linux-next,RFC] mm/gup.c: Convert to use get_user_pages_fast_only()

Message ID 1590252072-2793-1-git-send-email-jrdr.linux@gmail.com (mailing list archive)
State New, archived
Series [linux-next,RFC] mm/gup.c: Convert to use get_user_pages_fast_only()

Commit Message

Souptick Joarder May 23, 2020, 4:41 p.m. UTC
Renaming the API __get_user_pages_fast() to get_user_pages_
fast_only() to align with pin_user_pages_fast_only().

As part of this we will get rid of the write parameter.
Instead, callers will pass FOLL_WRITE to get_user_pages_fast_only().
This will not change any existing functionality of the API.

All the callers are changed to pass FOLL_WRITE.

Updated the documentation of the API.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/powerpc/perf/callchain_64.c       |  2 +-
 include/linux/mm.h                     |  4 ++--
 kernel/events/core.c                   |  4 ++--
 mm/gup.c                               | 29 ++++++++++++++++-------------
 virt/kvm/kvm_main.c                    |  6 +++---
 7 files changed, 26 insertions(+), 23 deletions(-)

Comments

Matthew Wilcox (Oracle) May 23, 2020, 5:25 p.m. UTC | #1
On Sat, May 23, 2020 at 10:11:12PM +0530, Souptick Joarder wrote:
> Renaming the API __get_user_pages_fast() to get_user_pages_
> fast_only() to align with pin_user_pages_fast_only().

Please don't split a function name across lines.  That messes
up people who are grepping for the function name in the changelog.

> As part of this we will get rid of the write parameter.
> Instead, callers will pass FOLL_WRITE to get_user_pages_fast_only().
> This will not change any existing functionality of the API.
> 
> All the callers are changed to pass FOLL_WRITE.
> 
> Updated the documentation of the API.

Everything you have done here is an improvement, and I'd be happy to
see it go in (after fixing the bug I note below).

But in reading through it, I noticed almost every user ...

> -	if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
> +	if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {

passes '1' as the second parameter.  So do we want to add:

static inline bool get_user_page_fast_only(unsigned long addr,
		unsigned int gup_flags, struct page **pagep)
{
	return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
}
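
(For illustration only, assuming such a wrapper is added: a call site like the
KVM hva check above could then drop the '== 1' comparison and read

	if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
		write_ok = true;
	} else {
		/* slow-path check */
	}

with the boolean return hiding the single-page count.)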

> @@ -2797,10 +2803,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
>  	 * FOLL_FAST_ONLY is required in order to match the API description of
>  	 * this routine: no fall back to regular ("slow") GUP.
>  	 */
> -	unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
> -
> -	if (write)
> -		gup_flags |= FOLL_WRITE;
> +	gup_flags = FOLL_GET | FOLL_FAST_ONLY;

Er ... gup_flags |=, surely?
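
(That is, the fixed flags need to be OR'ed into the caller-supplied gup_flags
so that FOLL_WRITE is not silently dropped:

	/* keep caller flags such as FOLL_WRITE, add the mandatory ones */
	gup_flags |= FOLL_GET | FOLL_FAST_ONLY;

otherwise every caller requesting write access would lose it.)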
Souptick Joarder May 23, 2020, 7:35 p.m. UTC | #2
On Sat, May 23, 2020 at 10:55 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Sat, May 23, 2020 at 10:11:12PM +0530, Souptick Joarder wrote:
> > Renaming the API __get_user_pages_fast() to get_user_pages_
> > fast_only() to align with pin_user_pages_fast_only().
>
> Please don't split a function name across lines.  That messes
> up people who are grepping for the function name in the changelog.

Ok.

>
> > As part of this we will get rid of the write parameter.
> > Instead, callers will pass FOLL_WRITE to get_user_pages_fast_only().
> > This will not change any existing functionality of the API.
> >
> > All the callers are changed to pass FOLL_WRITE.
> >
> > Updated the documentation of the API.
>
> Everything you have done here is an improvement, and I'd be happy to
> see it go in (after fixing the bug I note below).
>
> But in reading through it, I noticed almost every user ...
>
> > -     if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
> > +     if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
>
> passes '1' as the second parameter.  So do we want to add:
>
> static inline bool get_user_page_fast_only(unsigned long addr,
>                 unsigned int gup_flags, struct page **pagep)
> {
>         return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
> }
>
Yes, this can be added. Is the get_user_page_fast_only() naming fine?


> > @@ -2797,10 +2803,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> >        * FOLL_FAST_ONLY is required in order to match the API description of
> >        * this routine: no fall back to regular ("slow") GUP.
> >        */
> > -     unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
> > -
> > -     if (write)
> > -             gup_flags |= FOLL_WRITE;
> > +     gup_flags = FOLL_GET | FOLL_FAST_ONLY;
>
> Er ... gup_flags |=, surely?

My mistake.


@@ -1998,7 +1998,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
        if (entry < nr_pages)
                return 0;

-       return __get_user_pages_fast(addr, nr_pages, 1, pages);
+       return get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);

Also this needs to be corrected.
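
(Presumably by keeping the fast-only call rather than switching this caller to
the regular get_user_pages_fast(), i.e. something like

	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);

in gfn_to_page_many_atomic().)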
John Hubbard May 23, 2020, 10:18 p.m. UTC | #3
On 2020-05-23 12:35, Souptick Joarder wrote:
...
>> Everything you have done here is an improvement, and I'd be happy to
>> see it go in (after fixing the bug I note below).
>>
>> But in reading through it, I noticed almost every user ...
>>
>>> -     if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
>>> +     if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
>>
>> passes '1' as the second parameter.  So do we want to add:
>>
>> static inline bool get_user_page_fast_only(unsigned long addr,
>>                  unsigned int gup_flags, struct page **pagep)
>> {
>>          return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
>> }
>>
> Yes, this can be added. Is the get_user_page_fast_only() naming fine?


It seems like a good name to me. And I think that the new wrapper call is
a good move, too.

I did pause and reflect for a moment about the number of gup/pup API calls we
are building up, but that's merely an indication of the wide usage of this
functionality. So it all feels about right.


thanks,

Patch

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 18aed97..34fc5c8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -581,7 +581,7 @@  int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	 * We always ask for write permission since the common case
 	 * is that the page is writable.
 	 */
-	if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
+	if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
 		write_ok = true;
 	} else {
 		/* Call KVM generic code to do the slow-path check */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 3248f78..3b6e342 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -795,7 +795,7 @@  int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	 * is that the page is writable.
 	 */
 	hva = gfn_to_hva_memslot(memslot, gfn);
-	if (!kvm_ro && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+	if (!kvm_ro && get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
 		upgrade_write = true;
 	} else {
 		unsigned long pfn;
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
index 1bff896d..f719a74 100644
--- a/arch/powerpc/perf/callchain_64.c
+++ b/arch/powerpc/perf/callchain_64.c
@@ -32,7 +32,7 @@  int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	int nrpages;
 	void *kaddr;
 
-	nrpages = __get_user_pages_fast(addr, 1, 1, &page);
+	nrpages = get_user_pages_fast_only(addr, 1, FOLL_WRITE, &page);
 	if (nrpages == 1) {
 		kaddr = page_address(page);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 93d93bd..10a6758 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,8 +1817,8 @@  extern int mprotect_fixup(struct vm_area_struct *vma,
 /*
  * doesn't attempt to fault and will return short.
  */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages);
+int get_user_pages_fast_only(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages);
 int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 			     unsigned int gup_flags, struct page **pages);
 /*
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c94eb27..81d6e73 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6934,12 +6934,12 @@  static u64 perf_virt_to_phys(u64 virt)
 		 * Walking the pages tables for user address.
 		 * Interrupts are disabled, so it prevents any tear down
 		 * of the page tables.
-		 * Try IRQ-safe __get_user_pages_fast first.
+		 * Try IRQ-safe get_user_pages_fast_only first.
 		 * If failed, leave phys_addr as 0.
 		 */
 		if (current->mm != NULL) {
 			pagefault_disable();
-			if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
+			if (get_user_pages_fast_only(virt, 1, 0, &p) == 1)
 				phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
 			pagefault_enable();
 		}
diff --git a/mm/gup.c b/mm/gup.c
index 80f51a36..d8aabc0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2278,7 +2278,7 @@  static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
  * to be special.
  *
  * For a futex to be placed on a THP tail page, get_futex_key requires a
- * __get_user_pages_fast implementation that can pin pages. Thus it's still
+ * get_user_pages_fast_only implementation that can pin pages. Thus it's still
  * useful to have gup_huge_pmd even if we can't operate on ptes.
  */
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
@@ -2683,7 +2683,7 @@  static inline void gup_pgd_range(unsigned long addr, unsigned long end,
 
 #ifndef gup_fast_permitted
 /*
- * Check if it's allowed to use __get_user_pages_fast() for the range, or
+ * Check if it's allowed to use get_user_pages_fast_only() for the range, or
  * we need to fall back to the slow version:
  */
 static bool gup_fast_permitted(unsigned long start, unsigned long end)
@@ -2776,8 +2776,14 @@  static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 
 	return ret;
 }
-
-/*
+/**
+ * get_user_pages_fast_only() - pin user pages in memory
+ * @start:      starting user address
+ * @nr_pages:   number of pages from start to pin
+ * @gup_flags:  flags modifying pin behaviour
+ * @pages:      array that receives pointers to the pages pinned.
+ *              Should be at least nr_pages long.
+ *
  * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
  * the regular GUP.
  * Note a difference with get_user_pages_fast: this always returns the
@@ -2786,8 +2792,8 @@  static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
  * If the architecture does not support this function, simply return with no
  * pages pinned.
  */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
+int get_user_pages_fast_only(unsigned long start, int nr_pages,
+			unsigned int gup_flags, struct page **pages)
 {
 	int nr_pinned;
 	/*
@@ -2797,10 +2803,7 @@  int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	 * FOLL_FAST_ONLY is required in order to match the API description of
 	 * this routine: no fall back to regular ("slow") GUP.
 	 */
-	unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
-
-	if (write)
-		gup_flags |= FOLL_WRITE;
+	gup_flags = FOLL_GET | FOLL_FAST_ONLY;
 
 	nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
 						 pages);
@@ -2815,7 +2818,7 @@  int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 
 	return nr_pinned;
 }
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+EXPORT_SYMBOL_GPL(get_user_pages_fast_only);
 
 /**
  * get_user_pages_fast() - pin user pages in memory
@@ -2886,8 +2889,8 @@  int pin_user_pages_fast(unsigned long start, int nr_pages,
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
 /*
- * This is the FOLL_PIN equivalent of __get_user_pages_fast(). Behavior is the
- * same, except that this one sets FOLL_PIN instead of FOLL_GET.
+ * This is the FOLL_PIN equivalent of get_user_pages_fast_only(). Behavior
+ * is the same, except that this one sets FOLL_PIN instead of FOLL_GET.
  *
  * The API rules are the same, too: no negative values may be returned.
  */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fc38d63..cec7919 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1750,7 +1750,7 @@  static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 	if (!(write_fault || writable))
 		return false;
 
-	npages = __get_user_pages_fast(addr, 1, 1, page);
+	npages = get_user_pages_fast_only(addr, 1, FOLL_WRITE, page);
 	if (npages == 1) {
 		*pfn = page_to_pfn(page[0]);
 
@@ -1791,7 +1791,7 @@  static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	if (unlikely(!write_fault) && writable) {
 		struct page *wpage;
 
-		if (__get_user_pages_fast(addr, 1, 1, &wpage) == 1) {
+		if (get_user_pages_fast_only(addr, 1, FOLL_WRITE, &wpage) == 1) {
 			*writable = true;
 			put_page(page);
 			page = wpage;
@@ -1998,7 +1998,7 @@  int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 	if (entry < nr_pages)
 		return 0;
 
-	return __get_user_pages_fast(addr, nr_pages, 1, pages);
+	return get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
 }
 EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
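
For reference, a minimal sketch of how a caller uses the renamed API after this
change (illustrative only, not part of the patch; the helper name here is
hypothetical). Write intent is expressed through FOLL_WRITE in gup_flags, the
call never falls back to slow GUP, and pages pinned this way are released with
put_page():

	#include <linux/mm.h>

	/* Illustrative helper: pin one user page for writing, fast path only. */
	static bool example_pin_page_for_write(unsigned long addr, struct page **page)
	{
		/* Returns the number of pages pinned; no fallback to slow GUP. */
		if (get_user_pages_fast_only(addr, 1, FOLL_WRITE, page) != 1)
			return false;

		/* ... access the pinned page ... */

		put_page(*page);	/* drop the FOLL_GET reference */
		return true;
	}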