[3/4] mm/gup: Remove enforced COW mechanism

Message ID: 20200821234958.7896-4-peterx@redhat.com
State: New, archived
Series: mm: Simplify cow handling

Commit Message

Peter Xu Aug. 21, 2020, 11:49 p.m. UTC
With the stricter (but greatly simplified) page reuse logic in do_wp_page(),
we can safely go back to the world where COW is not enforced with writes.

This (majorly) reverts commit 17839856fd588f4ab6b789f482ed3ffd7c403e1f.
There are some context differences due to later changes around it:

  2170ecfa7688 ("drm/i915: convert get_user_pages() --> pin_user_pages()", 2020-06-03)
  376a34efa4ee ("mm/gup: refactor and de-duplicate gup_fast() code", 2020-06-03)

Some lines moved back and forth with those, but this revert patch should have
stripped out and covered all the enforced COW bits anyway.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  8 -----
 mm/gup.c                                    | 40 +++------------------
 mm/huge_memory.c                            |  7 ++--
 3 files changed, 9 insertions(+), 46 deletions(-)
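
For context, the "stricter (but greatly simplified) page reuse logic" the
commit message refers to boils down to reusing an anonymous page on a write
fault only when we are provably its sole owner.  A minimal illustrative
paraphrase (not the exact upstream code) of that rule:

	/*
	 * Paraphrase of the simplified do_wp_page() reuse rule: reuse the
	 * page in place only if this mapping is provably the sole user of
	 * the page; otherwise fall back to copying.  With this rule, GUP
	 * no longer needs to break COW up front even for reads.
	 */
	static inline bool wp_can_reuse(struct page *page)
	{
		return page_count(page) == 1;
	}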

Comments

Oleg Nesterov Sept. 14, 2020, 2:27 p.m. UTC | #1
On 08/21, Peter Xu wrote:
>
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -381,22 +381,13 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>  }
>  
>  /*
> - * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
> - * but only after we've gone through a COW cycle and they are dirty.
> + * FOLL_FORCE can write to even unwritable pte's, but only
> + * after we've gone through a COW cycle and they are dirty.
>   */
>  static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
>  {
> -	return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
> -}
> -
> -/*
> - * A (separate) COW fault might break the page the other way and
> - * get_user_pages() would return the page from what is now the wrong
> - * VM. So we need to force a COW break at GUP time even for reads.
> - */
> -static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
> -{
> -	return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
> +	return pte_write(pte) ||
> +		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));

Do we really need to add the FOLL_FORCE check back?

Afaics, FOLL_COW is only possible if FOLL_FORCE was set.

Oleg.
Peter Xu Sept. 14, 2020, 5:59 p.m. UTC | #2
On Mon, Sep 14, 2020 at 04:27:24PM +0200, Oleg Nesterov wrote:
> On 08/21, Peter Xu wrote:
> >
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -381,22 +381,13 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
> >  }
> >  
> >  /*
> > - * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
> > - * but only after we've gone through a COW cycle and they are dirty.
> > + * FOLL_FORCE can write to even unwritable pte's, but only
> > + * after we've gone through a COW cycle and they are dirty.
> >   */
> >  static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
> >  {
> > -	return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
> > -}
> > -
> > -/*
> > - * A (separate) COW fault might break the page the other way and
> > - * get_user_pages() would return the page from what is now the wrong
> > - * VM. So we need to force a COW break at GUP time even for reads.
> > - */
> > -static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
> > -{
> > -	return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
> > +	return pte_write(pte) ||
> > +		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
> 
> Do we really need to add the FOLL_FORCE check back?
> 
> Afaics, FOLL_COW is only possible if FOLL_FORCE was set.

When I proposed the patch I wanted to add FOLL_FORCE back because its earlier
removal was tied to the enforced COW mechanism, under which FOLL_COW can
definitely happen without FOLL_FORCE.  So when we revert the enforced COW it
seemed we should restore this check too, as it was.  I didn't think any deeper
than that.

However, now I'm a bit confused about why FOLL_COW must come with FOLL_FORCE
even without the enforced COW...  Shouldn't FOLL_COW be possible even without
FOLL_FORCE (as long as a page is shared and the gup is done with WRITE
permission)?  Not sure what I've missed, though.
Linus Torvalds Sept. 14, 2020, 7:03 p.m. UTC | #3
On Mon, Sep 14, 2020 at 10:59 AM Peter Xu <peterx@redhat.com> wrote:
>
> However, now I'm a bit confused about why FOLL_COW must come with FOLL_FORCE
> even without the enforced COW...  Shouldn't FOLL_COW be possible even without
> FOLL_FORCE (as long as a page is shared and the gup is done with WRITE
> permission)?  Not sure what I've missed, though.

Afaik, the FOLL_FORCE test was (and is) unnecessary.

If FOLL_COW is set, we're going through this the second time, and we
either had that pte_write() or we had FOLL_FORCE originally.

So can_follow_write_pte() doesn't need the FOLL_FORCE test - it's
redundant - but it isn't technically wrong either.

                 Linus
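
The implication Linus describes can be seen in faultin_page(): FOLL_COW is
only set after a write fault has been forced through on a VMA that is not
writable, which can only happen with FOLL_FORCE.  A lightly paraphrased
sketch of that logic (illustrative, based on the gup code of this era):

	/*
	 * In faultin_page(), after handle_mm_fault(): VM_FAULT_WRITE tells
	 * us do_wp_page() has broken COW.  FOLL_COW is set only when the
	 * write fault succeeded on a VMA without VM_WRITE, i.e. only when
	 * FOLL_FORCE made that write fault possible in the first place.
	 */
	if ((ret & VM_FAULT_WRITE) && !(vma->vm_flags & VM_WRITE))
		*flags |= FOLL_COW;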

Patch

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 2c2bf24140c9..12b30075134a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -596,14 +596,6 @@  static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 				      GFP_KERNEL |
 				      __GFP_NORETRY |
 				      __GFP_NOWARN);
-		/*
-		 * Using __get_user_pages_fast() with a read-only
-		 * access is questionable. A read-only page may be
-		 * COW-broken, and then this might end up giving
-		 * the wrong side of the COW..
-		 *
-		 * We may or may not care.
-		 */
 		if (pvec) {
 			/* defer to worker if malloc fails */
 			if (!i915_gem_object_is_readonly(obj))
diff --git a/mm/gup.c b/mm/gup.c
index ae096ea7583f..bb93251194d8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -381,22 +381,13 @@  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 }
 
 /*
- * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
- * but only after we've gone through a COW cycle and they are dirty.
+ * FOLL_FORCE can write to even unwritable pte's, but only
+ * after we've gone through a COW cycle and they are dirty.
  */
 static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
 {
-	return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
-}
-
-/*
- * A (separate) COW fault might break the page the other way and
- * get_user_pages() would return the page from what is now the wrong
- * VM. So we need to force a COW break at GUP time even for reads.
- */
-static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
-{
-	return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
+	return pte_write(pte) ||
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -1067,11 +1058,9 @@  static long __get_user_pages(struct mm_struct *mm,
 				goto out;
 			}
 			if (is_vm_hugetlb_page(vma)) {
-				if (should_force_cow_break(vma, foll_flags))
-					foll_flags |= FOLL_WRITE;
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
-						foll_flags, locked);
+						gup_flags, locked);
 				if (locked && *locked == 0) {
 					/*
 					 * We've got a VM_FAULT_RETRY
@@ -1085,10 +1074,6 @@  static long __get_user_pages(struct mm_struct *mm,
 				continue;
 			}
 		}
-
-		if (should_force_cow_break(vma, foll_flags))
-			foll_flags |= FOLL_WRITE;
-
 retry:
 		/*
 		 * If we have a pending SIGKILL, don't keep faulting pages and
@@ -2689,19 +2674,6 @@  static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 		return -EFAULT;
 
 	/*
-	 * The FAST_GUP case requires FOLL_WRITE even for pure reads,
-	 * because get_user_pages() may need to cause an early COW in
-	 * order to avoid confusing the normal COW routines. So only
-	 * targets that are already writable are safe to do by just
-	 * looking at the page tables.
-	 *
-	 * NOTE! With FOLL_FAST_ONLY we allow read-only gup_fast() here,
-	 * because there is no slow path to fall back on. But you'd
-	 * better be careful about possible COW pages - you'll get _a_
-	 * COW page, but not necessarily the one you intended to get
-	 * depending on what COW event happens after this. COW may break
-	 * the page copy in a random direction.
-	 *
 	 * Disable interrupts. The nested form is used, in order to allow
 	 * full, general purpose use of this routine.
 	 *
@@ -2714,8 +2686,6 @@  static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	 */
 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && gup_fast_permitted(start, end)) {
 		unsigned long fast_flags = gup_flags;
-		if (!(gup_flags & FOLL_FAST_ONLY))
-			fast_flags |= FOLL_WRITE;
 
 		local_irq_save(flags);
 		gup_pgd_range(addr, end, fast_flags, pages, &nr_pinned);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2ccff8472cd4..7ff29cc3d55c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1291,12 +1291,13 @@  vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 }
 
 /*
- * FOLL_FORCE or a forced COW break can write even to unwritable pmd's,
- * but only after we've gone through a COW cycle and they are dirty.
+ * FOLL_FORCE can write to even unwritable pmd's, but only
+ * after we've gone through a COW cycle and they are dirty.
  */
 static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
 {
-	return pmd_write(pmd) || ((flags & FOLL_COW) && pmd_dirty(pmd));
+	return pmd_write(pmd) ||
+	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
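
For reference, a hypothetical caller sketch (names and error handling
simplified, assuming the post-5.9 get_user_pages_remote() signature) showing
the kind of forced write on which can_follow_write_pte() can see both
FOLL_FORCE and FOLL_COW:

	/*
	 * Ptrace-style forced write into another process's read-only
	 * mapping: the first attempt faults in a COW copy and sets
	 * FOLL_COW; the retry then passes can_follow_write_pte().
	 */
	struct page *page;
	long ret;

	mmap_read_lock(mm);
	ret = get_user_pages_remote(mm, addr, 1, FOLL_FORCE | FOLL_WRITE,
				    &page, NULL, NULL);
	mmap_read_unlock(mm);
	if (ret == 1)
		put_page(page);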