[v2,11/11] xfs: add support for DAX fsync/msync

Message ID 1447459610-14259-12-git-send-email-ross.zwisler@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Ross Zwisler Nov. 14, 2015, 12:06 a.m. UTC
To properly support the new DAX fsync/msync infrastructure filesystems
need to call dax_pfn_mkwrite() so that DAX can properly track when a user
write faults on a previously cleaned address.  They also need to call
dax_fsync() in the filesystem fsync() path.  This dax_fsync() call uses
addresses retrieved from get_block() so it needs to be ordered with
respect to truncate.  This is accomplished by using the same locking that
was set up for DAX page faults.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/xfs/xfs_file.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

Comments

Dave Chinner Nov. 16, 2015, 11:12 p.m. UTC | #1
On Fri, Nov 13, 2015 at 05:06:50PM -0700, Ross Zwisler wrote:
> To properly support the new DAX fsync/msync infrastructure filesystems
> need to call dax_pfn_mkwrite() so that DAX can properly track when a user
> write faults on a previously cleaned address.  They also need to call
> dax_fsync() in the filesystem fsync() path.  This dax_fsync() call uses
> addresses retrieved from get_block() so it needs to be ordered with
> respect to truncate.  This is accomplished by using the same locking that
> was set up for DAX page faults.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---
>  fs/xfs/xfs_file.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 39743ef..2b490a1 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -209,7 +209,8 @@ xfs_file_fsync(
>  	loff_t			end,
>  	int			datasync)
>  {
> -	struct inode		*inode = file->f_mapping->host;
> +	struct address_space	*mapping = file->f_mapping;
> +	struct inode		*inode = mapping->host;
>  	struct xfs_inode	*ip = XFS_I(inode);
>  	struct xfs_mount	*mp = ip->i_mount;
>  	int			error = 0;
> @@ -218,7 +219,13 @@ xfs_file_fsync(
>  
>  	trace_xfs_file_fsync(ip);
>  
> -	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
> +	if (dax_mapping(mapping)) {
> +		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> +		dax_fsync(mapping, start, end);
> +		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> +	}
> +
> +	error = filemap_write_and_wait_range(mapping, start, end);

Ok, I don't understand a couple of things here.

Firstly, if it's a DAX mapping, why are we still calling
filemap_write_and_wait_range() after the dax_fsync() call that has
already written back all the dirty cachelines?

Secondly, exactly what is the XFS_MMAPLOCK_SHARED lock supposed to
be doing here? I don't see where dax_fsync() has any callouts to
get_block(), so the comment "needs to be ordered with respect to
truncate" doesn't make any obvious sense. If we have a racing
truncate removing entries from the radix tree, then thanks to the
mapping tree lock we'll either find an entry we need to write back,
or we won't find any entry at all, right?

Lastly, this flushing really needs to be inside
filemap_write_and_wait_range(), because we call the writeback code
from many more places than just fsync to ensure ordering of various
operations such that files are in known state before proceeding
(e.g. hole punch).
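
Something like this completely untested sketch (keeping the
dax_fsync() name and signature from this patch series) is what I
mean:

int filemap_write_and_wait_range(struct address_space *mapping,
				 loff_t lstart, loff_t lend)
{
	int err = 0;

	/*
	 * DAX mappings have no dirty pagecache pages to write back,
	 * but they can have dirty cachelines tracked in the mapping
	 * radix tree. Flushing them here means every caller (fsync,
	 * hole punch, truncate, ...) gets the same ordering
	 * guarantees.
	 */
	if (dax_mapping(mapping)) {
		dax_fsync(mapping, lstart, lend);
		return 0;
	}

	if (mapping->nrpages) {
		err = __filemap_fdatawrite_range(mapping, lstart,
						 lend, WB_SYNC_ALL);
		if (err != -EIO) {
			int err2 = filemap_fdatawait_range(mapping,
							lstart, lend);
			if (!err)
				err = err2;
		}
	}
	return err;
}

That way the DAX case isn't something every filesystem has to get
right on its own.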

Cheers,

Dave.
Ross Zwisler Nov. 17, 2015, 7:03 p.m. UTC | #2
On Tue, Nov 17, 2015 at 10:12:22AM +1100, Dave Chinner wrote:
> On Fri, Nov 13, 2015 at 05:06:50PM -0700, Ross Zwisler wrote:
> > To properly support the new DAX fsync/msync infrastructure filesystems
> > need to call dax_pfn_mkwrite() so that DAX can properly track when a user
> > write faults on a previously cleaned address.  They also need to call
> > dax_fsync() in the filesystem fsync() path.  This dax_fsync() call uses
> > addresses retrieved from get_block() so it needs to be ordered with
> > respect to truncate.  This is accomplished by using the same locking that
> > was set up for DAX page faults.
> > 
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > ---
> >  fs/xfs/xfs_file.c | 18 +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > index 39743ef..2b490a1 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -209,7 +209,8 @@ xfs_file_fsync(
> >  	loff_t			end,
> >  	int			datasync)
> >  {
> > -	struct inode		*inode = file->f_mapping->host;
> > +	struct address_space	*mapping = file->f_mapping;
> > +	struct inode		*inode = mapping->host;
> >  	struct xfs_inode	*ip = XFS_I(inode);
> >  	struct xfs_mount	*mp = ip->i_mount;
> >  	int			error = 0;
> > @@ -218,7 +219,13 @@ xfs_file_fsync(
> >  
> >  	trace_xfs_file_fsync(ip);
> >  
> > -	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
> > +	if (dax_mapping(mapping)) {
> > +		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> > +		dax_fsync(mapping, start, end);
> > +		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> > +	}
> > +
> > +	error = filemap_write_and_wait_range(mapping, start, end);
> 
> Ok, I don't understand a couple of things here.
> 
> Firstly, if it's a DAX mapping, why are we still calling
> filemap_write_and_wait_range() after the dax_fsync() call that has
> already written back all the dirty cachelines?
> 
> Secondly, exactly what is the XFS_MMAPLOCK_SHARED lock supposed to
> be doing here? I don't see where dax_fsync() has any callouts to
> get_block(), so the comment "needs to be ordered with respect to
> truncate" doesn't make any obvious sense. If we have a racing
> truncate removing entries from the radix tree, then thanks to the
> mapping tree lock we'll either find an entry we need to write back,
> or we won't find any entry at all, right?

You're right, dax_fsync() doesn't call out to get_block() any more.  It does
save the results of get_block() calls from the page faults, though, and I was
concerned about the following race:

fsync thread				truncate thread
------------				---------------
dax_fsync()
save tagged entries in pvec

					change block mapping for inode so that
					entries saved in pvec are no longer
					owned by this inode

loop through pvec using stale results
from get_block(), flushing and cleaning
entries we no longer own
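
In rough code terms, the loop I'm worried about looks like this
(wb_cache_entry() is a stand-in for the pmem cacheline writeback this
series does per dirty radix tree entry, and the real lookup is the
tag-based variant):

	pgoff_t indices[PAGEVEC_SIZE];
	struct pagevec pvec;
	unsigned i;

	pagevec_init(&pvec, 0);

	/* lockless lookup - entries are neither pinned nor
	 * revalidated before they are used below */
	pagevec_lookup_entries(&pvec, mapping, start_index,
			       PAGEVEC_SIZE, indices);

	for (i = 0; i < pagevec_count(&pvec); i++) {
		void *entry = pvec.pages[i];	/* exceptional entry */

		/*
		 * The sector cached in this entry came from
		 * get_block() at fault time.  If truncate changed the
		 * block mapping after the lookup above, this flushes
		 * and cleans storage this inode no longer owns.
		 */
		wb_cache_entry(mapping, indices[i], entry);
	}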

In looking at the xfs_file_fsync() code, though, it seems like if this race
existed it would also exist for page cache entries that were being put into a
pvec in write_cache_pages(), and that we would similarly be writing back
cached pages that no longer belong to this inode.

Is this race non-existent?

> Lastly, this flushing really needs to be inside
> filemap_write_and_wait_range(), because we call the writeback code
> from many more places than just fsync to ensure ordering of various
> operations such that files are in known state before proceeding
> (e.g. hole punch).

The call to dax_fsync() (soon to be dax_writeback_mapping_range()) first
lived in do_writepages() in the RFC version, but was moved into the
filesystem code so that we had access to get_block() (no longer needed)
and could use the FS-level locking.  If the race described above isn't
an issue, then I agree that moving this call out of the filesystems and
down into the generic page writeback code is probably the right thing
to do.

Thanks for the feedback.
Dave Chinner Nov. 20, 2015, 12:37 a.m. UTC | #3
On Tue, Nov 17, 2015 at 12:03:41PM -0700, Ross Zwisler wrote:
> On Tue, Nov 17, 2015 at 10:12:22AM +1100, Dave Chinner wrote:
> > On Fri, Nov 13, 2015 at 05:06:50PM -0700, Ross Zwisler wrote:
> > > To properly support the new DAX fsync/msync infrastructure filesystems
> > > need to call dax_pfn_mkwrite() so that DAX can properly track when a user
> > > write faults on a previously cleaned address.  They also need to call
> > > dax_fsync() in the filesystem fsync() path.  This dax_fsync() call uses
> > > addresses retrieved from get_block() so it needs to be ordered with
> > > respect to truncate.  This is accomplished by using the same locking that
> > > was set up for DAX page faults.
> > > 
> > > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > > ---
> > >  fs/xfs/xfs_file.c | 18 +++++++++++++-----
> > >  1 file changed, 13 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > > index 39743ef..2b490a1 100644
> > > --- a/fs/xfs/xfs_file.c
> > > +++ b/fs/xfs/xfs_file.c
> > > @@ -209,7 +209,8 @@ xfs_file_fsync(
> > >  	loff_t			end,
> > >  	int			datasync)
> > >  {
> > > -	struct inode		*inode = file->f_mapping->host;
> > > +	struct address_space	*mapping = file->f_mapping;
> > > +	struct inode		*inode = mapping->host;
> > >  	struct xfs_inode	*ip = XFS_I(inode);
> > >  	struct xfs_mount	*mp = ip->i_mount;
> > >  	int			error = 0;
> > > @@ -218,7 +219,13 @@ xfs_file_fsync(
> > >  
> > >  	trace_xfs_file_fsync(ip);
> > >  
> > > -	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
> > > +	if (dax_mapping(mapping)) {
> > > +		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> > > +		dax_fsync(mapping, start, end);
> > > +		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> > > +	}
> > > +
> > > +	error = filemap_write_and_wait_range(mapping, start, end);
> > 
> > Ok, I don't understand a couple of things here.
> > 
> > Firstly, if it's a DAX mapping, why are we still calling
> > filemap_write_and_wait_range() after the dax_fsync() call that has
> > already written back all the dirty cachelines?
> > 
> > Secondly, exactly what is the XFS_MMAPLOCK_SHARED lock supposed to
> > be doing here? I don't see where dax_fsync() has any callouts to
> > get_block(), so the comment "needs to be ordered with respect to
> > truncate" doesn't make any obvious sense. If we have a racing
> > truncate removing entries from the radix tree, then thanks to the
> > mapping tree lock we'll either find an entry we need to write back,
> > or we won't find any entry at all, right?
> 
> You're right, dax_fsync() doesn't call out to get_block() any more.  It does
> save the results of get_block() calls from the page faults, though, and I was
> concerned about the following race:
> 
> fsync thread				truncate thread
> ------------				---------------
> dax_fsync()
> save tagged entries in pvec
> 
> 					change block mapping for inode so that
> 					entries saved in pvec are no longer
> 					owned by this inode
> 
> loop through pvec using stale results
> from get_block(), flushing and cleaning
> entries we no longer own

dax_fsync is trying to do lockless lookups on an object that has no
internal reference count or synchronisation mechanism. That simply
doesn't work. In contrast, the struct page has the page lock, and
then with that held we can do the page->mapping checks to serialise
against and detect races with invalidation.

If you note the code in clear_exceptional_entry() in the
invalidation code:

        spin_lock_irq(&mapping->tree_lock);
        /*
         * Regular page slots are stabilized by the page lock even
         * without the tree itself locked.  These unlocked entries
         * need verification under the tree lock.
         */
        if (!__radix_tree_lookup(&mapping->page_tree, index, &node, &slot))
                goto unlock;
        if (*slot != entry)
                goto unlock;
        radix_tree_replace_slot(slot, NULL);

it basically says exactly this: exceptional entries are only valid
when the lookup is done under the mapping tree lock. IOWs, while you
can find exceptional entries via lockless radix tree lookups, you
*can't use them* safely.

Hence dax_fsync() needs to validate the exceptional entries it finds
via the pvec lookup under the mapping tree lock, and then flush the
cache while still holding the mapping tree lock. At that point, it
is safe against invalidation races....
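
In sketch form (dax_flush_entry() being a stand-in for whatever does
the actual cacheline writeback), that means something like:

static void dax_writeback_one(struct address_space *mapping,
			      pgoff_t index, void *entry)
{
	void **slot;

	spin_lock_irq(&mapping->tree_lock);
	/*
	 * The entry found by the lockless pvec lookup is only usable
	 * if it is still present and unchanged now that we hold the
	 * tree lock.
	 */
	if (!__radix_tree_lookup(&mapping->page_tree, index,
				 NULL, &slot))
		goto unlock;
	if (*slot != entry)
		goto unlock;

	/* still under the lock, so invalidation can't race with us */
	dax_flush_entry(mapping, index, entry);
	radix_tree_tag_clear(&mapping->page_tree, index,
			     PAGECACHE_TAG_DIRTY);
unlock:
	spin_unlock_irq(&mapping->tree_lock);
}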

> In looking at the xfs_file_fsync() code, though, it seems like if this race
> existed it would also exist for page cache entries that were being put into a
> pvec in write_cache_pages(), and that we would similarly be writing back
> cached pages that no longer belong to this inode.

That's what the page->mapping checks in write_cache_pages() protect
against. Everywhere you see a "lock_page(); if (page->mapping !=
mapping)" style of operation, it is checking against a racing
page invalidation.
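
i.e. the canonical form inside the writeback loop:

	lock_page(page);
	if (unlikely(page->mapping != mapping)) {
		/* raced with invalidation; the page belongs to
		 * another mapping now (or to nobody), so skip it */
		unlock_page(page);
		continue;
	}
	/* page->mapping is now stable, safe to write the page back */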

Cheers,

Dave.

Patch

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 39743ef..2b490a1 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -209,7 +209,8 @@ xfs_file_fsync(
 	loff_t			end,
 	int			datasync)
 {
-	struct inode		*inode = file->f_mapping->host;
+	struct address_space	*mapping = file->f_mapping;
+	struct inode		*inode = mapping->host;
 	struct xfs_inode	*ip = XFS_I(inode);
 	struct xfs_mount	*mp = ip->i_mount;
 	int			error = 0;
@@ -218,7 +219,13 @@ xfs_file_fsync(
 
 	trace_xfs_file_fsync(ip);
 
-	error = filemap_write_and_wait_range(inode->i_mapping, start, end);
+	if (dax_mapping(mapping)) {
+		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
+		dax_fsync(mapping, start, end);
+		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
+	}
+
+	error = filemap_write_and_wait_range(mapping, start, end);
 	if (error)
 		return error;
 
@@ -1603,9 +1610,8 @@ xfs_filemap_pmd_fault(
 /*
  * pfn_mkwrite was originally intended to ensure we capture time stamp
  * updates on write faults. In reality, it's needed to serialise against
- * truncate similar to page_mkwrite. Hence we open-code dax_pfn_mkwrite()
- * here and cycle the XFS_MMAPLOCK_SHARED to ensure we serialise the fault
- * barrier in place.
+ * truncate similar to page_mkwrite. Hence we cycle the XFS_MMAPLOCK_SHARED
+ * to ensure we serialise the fault barrier in place.
  */
 static int
 xfs_filemap_pfn_mkwrite(
@@ -1628,6 +1634,8 @@ xfs_filemap_pfn_mkwrite(
 	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	if (vmf->pgoff >= size)
 		ret = VM_FAULT_SIGBUS;
+	else if (IS_DAX(inode))
+		ret = dax_pfn_mkwrite(vma, vmf);
 	xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
 	sb_end_pagefault(inode->i_sb);
 	return ret;