[12/20] mm: Factor out common parts of write fault handling

Message ID 1474992504-20133-13-git-send-email-jack@suse.cz (mailing list archive)
State New, archived

Commit Message

Jan Kara Sept. 27, 2016, 4:08 p.m. UTC
Currently we duplicate handling of shared write faults in
wp_page_reuse() and do_shared_fault(). Factor them out into a common
function.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/memory.c | 78 +++++++++++++++++++++++++++++--------------------------------
 1 file changed, 37 insertions(+), 41 deletions(-)

Comments

Ross Zwisler Oct. 17, 2016, 10:08 p.m. UTC | #1
On Tue, Sep 27, 2016 at 06:08:16PM +0200, Jan Kara wrote:
> Currently we duplicate handling of shared write faults in
> wp_page_reuse() and do_shared_fault(). Factor them out into a common
> function.
> 
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/memory.c | 78 +++++++++++++++++++++++++++++--------------------------------
>  1 file changed, 37 insertions(+), 41 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 63d9c1a54caf..0643b3b5a12a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2063,6 +2063,41 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
>  }
>  
>  /*
> + * Handle dirtying of a page in shared file mapping on a write fault.
> + *
> + * The function expects the page to be locked and unlocks it.
> + */
> +static void fault_dirty_shared_page(struct vm_area_struct *vma,
> +				    struct page *page)
> +{
> +	struct address_space *mapping;
> +	bool dirtied;
> +	bool page_mkwrite = vma->vm_ops->page_mkwrite;

I think you may need to pass in a 'page_mkwrite' parameter if you don't want
to change behavior.  Just checking to see if vma->vm_ops->page_mkwrite is
non-NULL works fine for this path:

do_shared_fault()
	fault_dirty_shared_page()

and for

wp_page_shared()
	wp_page_reuse()
		fault_dirty_shared_page()

But for these paths:

wp_pfn_shared()
	wp_page_reuse()
		fault_dirty_shared_page()

and

do_wp_page()
	wp_page_reuse()
		fault_dirty_shared_page()

we unconditionally pass 0 for the 'page_mkwrite' parameter, even though from
the logic in wp_pfn_shared() especially you can see that
vma->vm_ops->pfn_mkwrite() must be defined some of the time.
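
For illustration only, the change suggested above would look roughly like the
sketch below, where the helper takes an explicit page_mkwrite argument from
its callers instead of deriving it from vma->vm_ops. The extra parameter is
hypothetical and is not what the posted patch does.

static void fault_dirty_shared_page(struct vm_area_struct *vma,
				    struct page *page, bool page_mkwrite)
{
	struct address_space *mapping;
	bool dirtied;

	/* page_mkwrite is supplied by the caller, not read from vma->vm_ops */
	dirtied = set_page_dirty(page);
	VM_BUG_ON_PAGE(PageAnon(page), page);
	mapping = page_rmapping(page);
	unlock_page(page);

	if ((dirtied || page_mkwrite) && mapping)
		balance_dirty_pages_ratelimited(mapping);

	if (!page_mkwrite)
		file_update_time(vma->vm_file);
}
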
Jan Kara Oct. 18, 2016, 10:50 a.m. UTC | #2
On Mon 17-10-16 16:08:51, Ross Zwisler wrote:
> On Tue, Sep 27, 2016 at 06:08:16PM +0200, Jan Kara wrote:
> > Currently we duplicate handling of shared write faults in
> > wp_page_reuse() and do_shared_fault(). Factor them out into a common
> > function.
> > 
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  mm/memory.c | 78 +++++++++++++++++++++++++++++--------------------------------
> >  1 file changed, 37 insertions(+), 41 deletions(-)
> > 
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 63d9c1a54caf..0643b3b5a12a 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2063,6 +2063,41 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
> >  }
> >  
> >  /*
> > + * Handle dirtying of a page in shared file mapping on a write fault.
> > + *
> > + * The function expects the page to be locked and unlocks it.
> > + */
> > +static void fault_dirty_shared_page(struct vm_area_struct *vma,
> > +				    struct page *page)
> > +{
> > +	struct address_space *mapping;
> > +	bool dirtied;
> > +	bool page_mkwrite = vma->vm_ops->page_mkwrite;
> 
> I think you may need to pass in a 'page_mkwrite' parameter if you don't want
> to change behavior.  Just checking to see if vma->vm_ops->page_mkwrite is
> non-NULL works fine for this path:
> 
> do_shared_fault()
> 	fault_dirty_shared_page()
> 
> and for
> 
> wp_page_shared()
> 	wp_page_reuse()
> 		fault_dirty_shared_page()
> 
> But for these paths:
> 
> wp_pfn_shared()
> 	wp_page_reuse()
> 		fault_dirty_shared_page()
> 
> and
> 
> do_wp_page()
> 	wp_page_reuse()
> 		fault_dirty_shared_page()
> 
> we unconditionally pass 0 for the 'page_mkwrite' parameter, even though from
> the logic in wp_pfn_shared() especially you can see that
> vma->vm_ops->pfn_mkwrite() must be defined some of the time.

The trick which makes this work is that for fault_dirty_shared_page() to be
called at all, you have to set the 'dirty_shared' argument to wp_page_reuse(),
and that does not happen from the wp_pfn_shared() and do_wp_page() paths. So
things work as they should. If you look somewhat later in the series,
the patch "mm: Move part of wp_page_reuse() into the single call site"
cleans this up to make things more obvious.

								Honza
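
To see why that works, here is a condensed sketch of the caller Jan is
describing (simplified from the patch below; the exact signature is
abridged): only the wp_page_shared() path passes dirty_shared, so the helper,
and its vma->vm_ops->page_mkwrite check, is never reached from
wp_pfn_shared() or do_wp_page().

static inline int wp_page_reuse(struct vm_fault *vmf, struct page *page,
				int page_mkwrite, int dirty_shared)
{
	/* ... make the PTE writable, update the mmu, drop the PTL ... */

	/*
	 * Only wp_page_shared() passes dirty_shared == 1.  The
	 * wp_pfn_shared() and do_wp_page() paths pass 0, so they never
	 * enter this block and fault_dirty_shared_page() is never
	 * called for them.
	 */
	if (dirty_shared) {
		if (!page_mkwrite)
			lock_page(page);

		fault_dirty_shared_page(vmf->vma, page);
		put_page(page);
	}

	return VM_FAULT_WRITE;
}
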
Ross Zwisler Oct. 18, 2016, 5:32 p.m. UTC | #3
On Tue, Oct 18, 2016 at 12:50:00PM +0200, Jan Kara wrote:
> On Mon 17-10-16 16:08:51, Ross Zwisler wrote:
> > On Tue, Sep 27, 2016 at 06:08:16PM +0200, Jan Kara wrote:
> > > Currently we duplicate handling of shared write faults in
> > > wp_page_reuse() and do_shared_fault(). Factor them out into a common
> > > function.
> > > 
> > > Signed-off-by: Jan Kara <jack@suse.cz>
> > > ---
> > >  mm/memory.c | 78 +++++++++++++++++++++++++++++--------------------------------
> > >  1 file changed, 37 insertions(+), 41 deletions(-)
> > > 
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 63d9c1a54caf..0643b3b5a12a 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2063,6 +2063,41 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
> > >  }
> > >  
> > >  /*
> > > + * Handle dirtying of a page in shared file mapping on a write fault.
> > > + *
> > > + * The function expects the page to be locked and unlocks it.
> > > + */
> > > +static void fault_dirty_shared_page(struct vm_area_struct *vma,
> > > +				    struct page *page)
> > > +{
> > > +	struct address_space *mapping;
> > > +	bool dirtied;
> > > +	bool page_mkwrite = vma->vm_ops->page_mkwrite;
> > 
> > I think you may need to pass in a 'page_mkwrite' parameter if you don't want
> > to change behavior.  Just checking to see if vma->vm_ops->page_mkwrite is
> > non-NULL works fine for this path:
> > 
> > do_shared_fault()
> > 	fault_dirty_shared_page()
> > 
> > and for
> > 
> > wp_page_shared()
> > 	wp_page_reuse()
> > 		fault_dirty_shared_page()
> > 
> > But for these paths:
> > 
> > wp_pfn_shared()
> > 	wp_page_reuse()
> > 		fault_dirty_shared_page()
> > 
> > and
> > 
> > do_wp_page()
> > 	wp_page_reuse()
> > 		fault_dirty_shared_page()
> > 
> > we unconditionally pass 0 for the 'page_mkwrite' parameter, even though from
> > the logic in wp_pfn_shared() especially you can see that
> > vma->vm_ops->pfn_mkwrite() must be defined some of the time.
> 
> The trick which makes this work is that for fault_dirty_shared_page() to be
> called at all, you have to set the 'dirty_shared' argument to wp_page_reuse(),
> and that does not happen from the wp_pfn_shared() and do_wp_page() paths. So
> things work as they should. If you look somewhat later in the series,
> the patch "mm: Move part of wp_page_reuse() into the single call site"
> cleans this up to make things more obvious.
> 
> 								Honza

Ah, cool, that makes sense.

You can add:

Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>

Patch

diff --git a/mm/memory.c b/mm/memory.c
index 63d9c1a54caf..0643b3b5a12a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2063,6 +2063,41 @@  static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
 }
 
 /*
+ * Handle dirtying of a page in shared file mapping on a write fault.
+ *
+ * The function expects the page to be locked and unlocks it.
+ */
+static void fault_dirty_shared_page(struct vm_area_struct *vma,
+				    struct page *page)
+{
+	struct address_space *mapping;
+	bool dirtied;
+	bool page_mkwrite = vma->vm_ops->page_mkwrite;
+
+	dirtied = set_page_dirty(page);
+	VM_BUG_ON_PAGE(PageAnon(page), page);
+	/*
+	 * Take a local copy of the address_space - page.mapping may be zeroed
+	 * by truncate after unlock_page().   The address_space itself remains
+	 * pinned by vma->vm_file's reference.  We rely on unlock_page()'s
+	 * release semantics to prevent the compiler from undoing this copying.
+	 */
+	mapping = page_rmapping(page);
+	unlock_page(page);
+
+	if ((dirtied || page_mkwrite) && mapping) {
+		/*
+		 * Some device drivers do not set page.mapping
+		 * but still dirty their pages
+		 */
+		balance_dirty_pages_ratelimited(mapping);
+	}
+
+	if (!page_mkwrite)
+		file_update_time(vma->vm_file);
+}
+
+/*
  * Handle write page faults for pages that can be reused in the current vma
  *
  * This can happen either due to the mapping being with the VM_SHARED flag,
@@ -2092,28 +2127,11 @@  static inline int wp_page_reuse(struct vm_fault *vmf, struct page *page,
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 
 	if (dirty_shared) {
-		struct address_space *mapping;
-		int dirtied;
-
 		if (!page_mkwrite)
 			lock_page(page);
 
-		dirtied = set_page_dirty(page);
-		VM_BUG_ON_PAGE(PageAnon(page), page);
-		mapping = page->mapping;
-		unlock_page(page);
+		fault_dirty_shared_page(vma, page);
 		put_page(page);
-
-		if ((dirtied || page_mkwrite) && mapping) {
-			/*
-			 * Some device drivers do not set page.mapping
-			 * but still dirty their pages
-			 */
-			balance_dirty_pages_ratelimited(mapping);
-		}
-
-		if (!page_mkwrite)
-			file_update_time(vma->vm_file);
 	}
 
 	return VM_FAULT_WRITE;
@@ -3256,8 +3274,6 @@  uncharge_out:
 static int do_shared_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct address_space *mapping;
-	int dirtied = 0;
 	int ret, tmp;
 
 	ret = __do_fault(vmf);
@@ -3286,27 +3302,7 @@  static int do_shared_fault(struct vm_fault *vmf)
 		return ret;
 	}
 
-	if (set_page_dirty(vmf->page))
-		dirtied = 1;
-	/*
-	 * Take a local copy of the address_space - page.mapping may be zeroed
-	 * by truncate after unlock_page().   The address_space itself remains
-	 * pinned by vma->vm_file's reference.  We rely on unlock_page()'s
-	 * release semantics to prevent the compiler from undoing this copying.
-	 */
-	mapping = page_rmapping(vmf->page);
-	unlock_page(vmf->page);
-	if ((dirtied || vma->vm_ops->page_mkwrite) && mapping) {
-		/*
-		 * Some device drivers do not set page.mapping but still
-		 * dirty their pages
-		 */
-		balance_dirty_pages_ratelimited(mapping);
-	}
-
-	if (!vma->vm_ops->page_mkwrite)
-		file_update_time(vma->vm_file);
-
+	fault_dirty_shared_page(vma, vmf->page);
 	return ret;
 }