
ovl: use copy_file_range for copy up if possible

Message ID 1473348594-31425-1-git-send-email-amir73il@gmail.com (mailing list archive)
State New, archived

Commit Message

Amir Goldstein Sept. 8, 2016, 3:29 p.m. UTC
When copying up within the same fs, try to use f_op->copy_file_range().
This becomes very efficient when lower and upper are on the same fs
with file reflink support.

Tested correct behavior when lower and upper are on:
1. same ext4 (copy)
2. same xfs + reflink patches + mkfs.xfs (copy)
3. same xfs + reflink patches + mkfs.xfs -m reflink=1 (clone)
4. different xfs + reflink patches + mkfs.xfs -m reflink=1 (copy)

Verified that all the overlay xfstests pass in the 'same xfs+reflink'
setup.

For comparison, on my laptop, xfstest overlay/001 (copy up of large
sparse files) takes less than 1 second in the xfs reflink setup vs.
25 seconds on the rest of the setups.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
---
 fs/overlayfs/copy_up.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

Comments

Dave Chinner Sept. 8, 2016, 8:25 p.m. UTC | #1
On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
> When copying up within the same fs, try to use f_op->copy_file_range().
> This becomes very efficient when lower and upper are on the same fs
> with file reflink support.
> 
> Tested correct behavior when lower and upper are on:
> 1. same ext4 (copy)
> 2. same xfs + reflink patches + mkfs.xfs (copy)
> 3. same xfs + reflink patches + mkfs.xfs -m reflink=1 (clone)
> 4. different xfs + reflink patches + mkfs.xfs -m reflink=1 (copy)
> 
> Verified that all the overlay xfstests pass in the 'same xfs+reflink'
> setup.
> 
> For comparison, on my laptop, xfstest overlay/001 (copy up of large
> sparse files) takes less than 1 second in the xfs reflink setup vs.
> 25 seconds on the rest of the setups.
> 
> Signed-off-by: Amir Goldstein <amir73il@gmail.com>
> ---
>  fs/overlayfs/copy_up.c | 28 +++++++++++++++++++++++++---
>  1 file changed, 25 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
> index 43fdc27..400567b 100644
> --- a/fs/overlayfs/copy_up.c
> +++ b/fs/overlayfs/copy_up.c
> @@ -121,6 +121,7 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
>  	struct file *new_file;
>  	loff_t old_pos = 0;
>  	loff_t new_pos = 0;
> +	int try_copy_file = 0;
>  	int error = 0;
>  
>  	if (len == 0)
> @@ -136,6 +137,13 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
>  		goto out_fput;
>  	}
>  
> +	/*
> +	 * When copying up within the same fs, try to use fs's copy_file_range
> +	 */
> +	if (file_inode(old_file)->i_sb == file_inode(new_file)->i_sb) {
> +		try_copy_file = (new_file->f_op->copy_file_range != NULL);
> +	}

You don't need this. .copy_file_range() should return -EXDEV when
you try to use it to copy files across different mount points or
superblocks.

i.e. you should probably be calling vfs_copy_file_range() here to do
the copy up, and if that fails (for whatever reason) then fall back
to the existing data copying code.

Cheers,

Dave.
Amir Goldstein Sept. 9, 2016, 7:31 a.m. UTC | #2
On Thu, Sep 8, 2016 at 11:25 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
>> [...]
>> @@ -136,6 +137,13 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
>>               goto out_fput;
>>       }
>>
>> +     /*
>> +      * When copying up within the same fs, try to use fs's copy_file_range
>> +      */
>> +     if (file_inode(old_file)->i_sb == file_inode(new_file)->i_sb) {
>> +             try_copy_file = (new_file->f_op->copy_file_range != NULL);
>> +     }
>
> You don't need this. .copy_file_range() should return -EXDEV when
> you try to use it to copy files across different mount points or
> superblocks.
>

Right.

> i.e. you should probably be calling vfs_copy_file_range() here to do
> the copy up, and if that fails (for whatever reason) then fall back
> to the existing data copying code.
>

Yes, I considered that. With this V0 patch, copy_file_range() is
called inside the copy-data 'killable loop', but, unlike the slower
splice, it tries to copy the entire remaining len on every cycle and
will most likely get all or nothing without causing any major stalls.
So my options for V1 are:
1. use the existing loop and fall back to splice only on a
copy_file_range() failure.
2. add another (non-killable?) loop before the splice killable loop to
try to copy up as much data as possible with copy_file_range()
3. implement ovl_copy_up_file_range() and do the fallback near the
call site of ovl_copy_up_data()

Miklos, do you have any preference?

Cheers,
Amir.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dave Chinner Sept. 9, 2016, 7:54 a.m. UTC | #3
On Fri, Sep 09, 2016 at 10:31:02AM +0300, Amir Goldstein wrote:
> On Thu, Sep 8, 2016 at 11:25 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
> >> [...]
> >> @@ -136,6 +137,13 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
> >>               goto out_fput;
> >>       }
> >>
> >> +     /*
> >> +      * When copying up within the same fs, try to use fs's copy_file_range
> >> +      */
> >> +     if (file_inode(old_file)->i_sb == file_inode(new_file)->i_sb) {
> >> +             try_copy_file = (new_file->f_op->copy_file_range != NULL);
> >> +     }
> >
> > You don't need this. .copy_file_range() should return -EXDEV when
> > you try to use it to copy files across different mount points or
> > superblocks.
> >
> 
> Right.
> 
> > i.e. you should probably be calling vfs_copy_file_range() here to do
> > the copy up, and if that fails (for whatever reason) then fall back
> > to the existing data copying code.
> >
> 
> Yes, I considered that. With this V0 patch, copy_file_range() is
> called inside the copy data 'killable loop'
> but, unlike the slower splice, it tries to copy the entire remaining
> len on every cycle and will most likely get all or nothing without
> causing any major stalls.
> So my options for V1 are:
> 1. use the existing loop only fallback to splice on any
> copy_file_range() failure.
> 2. add another (non killable?) loop before the splice killable loop to
> try and copy up as much data with copy_file_range()
> 3. implement ovl_copy_up_file_range() and do the fallback near the
> call site of ovl_copy_up_data()

vfs_copy_file_range() already has a fallback to call
do_splice_direct() itself if ->copy_file_range() is not supported.
i.e. it will behave identically to the existing code if
copy_file_range is not supported by the underlying fs.

If copy_file_range() fails, then it's for a reason that will cause
do_splice_direct() to fail as well.

vfs_copy_file_range() should really be a direct replacement for any
code that calls do_splice_direct(). If it's not, we should make it
so (e.g. call do_splice_direct() for cross-fs copies automatically
rather than returning EXDEV) and then replace all the calls in the
kernel to do_splice_direct() with vfs_copy_file_range()....

Cheers,

Dave.
Amir Goldstein Sept. 9, 2016, 8:27 a.m. UTC | #4
On Fri, Sep 9, 2016 at 10:54 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Fri, Sep 09, 2016 at 10:31:02AM +0300, Amir Goldstein wrote:
>> On Thu, Sep 8, 2016 at 11:25 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
>> >> [...]
>> >> @@ -136,6 +137,13 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
>> >>               goto out_fput;
>> >>       }
>> >>
>> >> +     /*
>> >> +      * When copying up within the same fs, try to use fs's copy_file_range
>> >> +      */
>> >> +     if (file_inode(old_file)->i_sb == file_inode(new_file)->i_sb) {
>> >> +             try_copy_file = (new_file->f_op->copy_file_range != NULL);
>> >> +     }
>> >
>> > You don't need this. .copy_file_range() should return -EXDEV when
>> > you try to use it to copy files across different mount points or
>> > superblocks.
>> >
>>
>> Right.
>>
>> > i.e. you should probably be calling vfs_copy_file_range() here to do
>> > the copy up, and if that fails (for whatever reason) then fall back
>> > to the existing data copying code.
>> >
>>
>> Yes, I considered that. With this V0 patch, copy_file_range() is
>> called inside the copy data 'killable loop'
>> but, unlike the slower splice, it tries to copy the entire remaining
>> len on every cycle and will most likely get all or nothing without
>> causing any major stalls.
>> So my options for V1 are:
>> 1. use the existing loop only fallback to splice on any
>> copy_file_range() failure.
>> 2. add another (non killable?) loop before the splice killable loop to
>> try and copy up as much data with copy_file_range()
>> 3. implement ovl_copy_up_file_range() and do the fallback near the
>> call site of ovl_copy_up_data()
>
> vfs_copy_file_range() already has a fallback to call
> do_splice_direct() itself if ->copy_file_range() is not supported.
> i.e. it will behave identically to the existing code if
> copy_file_range is not supported by the underlying fs.
>

I thought so initially, but the existing code is not identical to the
vfs_copy_file_range() implementation, because ovl_copy_up_data()
splices in small chunks, allowing the user to kill the copying process.
This makes sense because the poor process only called open(),
so the app writer may not have been expecting the stall of copying
a large file...

> If copy_file_range() fails, then it's for a reason that will cause
> do_splice_direct() to fail as well.
>
> vfs_copy_file_range() should really be a direct replacement for any
> code that calls do_splice_direct(). If it's not, we should make it
> so (e.g call do_splice direct for cross-fs copies automatically
> rather than returning EXDEV)

But the man page states that EXDEV will be returned if
     "The files referred to by file_in and file_out are not on the
      same mounted filesystem"

I guess that when the API is updated to allow for non-zero flags,
vfs_copy_file_range() should do_splice() instead of returning
EXDEV, but only if (flags == COPY_FR_COPY).

> and then replace all the calls in the
> kernel to do_splice_direct() with vfs_copy_file_range()....

So in this case, I could not have replaced do_splice_direct() with
vfs_copy_file_range(), because I would either break the killable loop
behavior, or call copy_file_range() in small chunks which is not
desirable - is it?

Cheers,
Amir.
Dave Chinner Sept. 9, 2016, 11:52 p.m. UTC | #5
On Fri, Sep 09, 2016 at 11:27:34AM +0300, Amir Goldstein wrote:
> On Fri, Sep 9, 2016 at 10:54 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Fri, Sep 09, 2016 at 10:31:02AM +0300, Amir Goldstein wrote:
> >> On Thu, Sep 8, 2016 at 11:25 PM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
> >> Yes, I considered that. With this V0 patch, copy_file_range() is
> >> called inside the copy data 'killable loop'
> >> but, unlike the slower splice, it tries to copy the entire remaining
> >> len on every cycle and will most likely get all or nothing without
> >> causing any major stalls.
> >> So my options for V1 are:
> >> 1. use the existing loop only fallback to splice on any
> >> copy_file_range() failure.
> >> 2. add another (non killable?) loop before the splice killable loop to
> >> try and copy up as much data with copy_file_range()
> >> 3. implement ovl_copy_up_file_range() and do the fallback near the
> >> call site of ovl_copy_up_data()
> >
> > vfs_copy_file_range() already has a fallback to call
> > do_splice_direct() itself if ->copy_file_range() is not supported.
> > i.e. it will behave identically to the existing code if
> > copy_file_range is not supported by the underlying fs.
> >
> 
> I though so initially, but existing code is not identical to the
> vfs_copy_file_range() implementation because ovl_copy_up_data()
> splices in small chunks allowing the user to kill the copying process.
> This makes sense because the poor process only called open(),
> so the app writer may not have been expecting a stall of copying
> a large file...

So call vfs_copy_file_range() iteratively, just like is being done
right now for do_splice_direct() to limit latency on kill.

> > If copy_file_range() fails, then it's for a reason that will cause
> > do_splice_direct() to fail as well.
> >
> > vfs_copy_file_range() should really be a direct replacement for any
> > code that calls do_splice_direct(). If it's not, we should make it
> > so (e.g call do_splice direct for cross-fs copies automatically
> > rather than returning EXDEV)
> 
> But man page states that EXDEV will be returned if
>      "The files referred to by file_in and file_out are not on the
>       same mounted filesystem"

That's the /syscall/ man page, not how we must implement the
internal helper. Did you even /look/ at vfs_copy_file_range()?
hint:

        /* this could be relaxed once a method supports cross-fs copies */
        if (inode_in->i_sb != inode_out->i_sb)
                return -EXDEV;


> 
> I guess that when API is updated to allow for non zero flags,
> then vfs_copy_file_range() should do_splice() instead or returning
> EXDEV, only if (flags == COPY_FR_COPY).

Not necessary - just hoist the EXDEV check to the syscall layer.
Then, as I've already said, make vfs_copy_file_range "call do_splice
direct for cross-fs copies automatically".

i.e. vfs_copy_file_range() should just copy the data in the most
efficient way possible for the given src/dst inode pair.  In future,
if we add capability for offload of cross-fs copies, we can add the
infrastructure to do that within vfs_copy_file_range() and not have
to change a single caller to take advantage of it....

> > and then replace all the calls in the
> > kernel to do_splice_direct() with vfs_copy_file_range()....
> 
> So in this case, I could not have replaced do_splice_direct() with
> vfs_copy_file_range(), because I would either break the killable loop
> behavior, or call copy_file_range() in small chunks which is not
> desirable - is it?

Of course you can call vfs_copy_file_range() in small chunks. It's
just not going to be as efficient as a single large copy offload.
Worst case, it ends up being identical to what ovl is doing now.

But the question here is this: why are you even trying to /copy/ the
data?  That's not guaranteed to do a fast, atomic,
zero-data-movement operation. i.e. what we really want here first is
an attempt to /clone/ the data:

	1. try a fast, atomic, metadata clone operation like reflink
	2. try a fast, optimised data copy
	3. if all else fails, use do_splice_direct() to copy data.

i.e first try vfs_clone_file_range() because:

http://oss.sgi.com/archives/xfs/2015-12/msg00356.html

	[...] Note that clones are different from
	file copies in several ways:

	 - they are atomic vs other writers
	 - they support whole file clones
	 - they support 64-bit length clones
	 - they do not allow partial success (aka short writes)
	 - clones are expected to be a fast metadata operation

i.e. if you want to use reflink type methods to optimise copy-up
latency, then you need to be /cloning/ the file, not copying it.
You can test whether this is supported at mount time, so you do a
simple flag test at copyup to determine whether a clone should be
attempted or not.

If cloning fails or is not supported, then try vfs_copy_file_range()
to do an optimised iterative partial range file copy.  Finally, fall
back to slow, iterative partial range file copies using
do_splice_direct(). This part can be wholly handled by
vfs_copy_file_range() - this 'not supported' fallback doesn't need
to be implemented every time someone wants to copy data between two
files...

Cheers,

Dave.
Christoph Hellwig Sept. 10, 2016, 7:40 a.m. UTC | #6
On Sat, Sep 10, 2016 at 09:52:21AM +1000, Dave Chinner wrote:
> > vfs_copy_file_range() implementation because ovl_copy_up_data()
> > splices in small chunks allowing the user to kill the copying process.
> > This makes sense because the poor process only called open(),
> > so the app writer may not have been expecting a stall of copying
> > a large file...
> 
> So call vfs_copy_file_range() iteratively, just like is being done
> right now for do_splice_direct() to limit latency on kill.

I wish vfs_copy_file_range would do useful chunking itself.

But either way it might be a good idea to call vfs_clone_file_range
first, because that gives your a very efficient copy without the need
to copy anything if supported.
Amir Goldstein Sept. 10, 2016, 6:15 p.m. UTC | #7
On Sat, Sep 10, 2016 at 10:40 AM, Christoph Hellwig <hch@infradead.org> wrote:
> On Sat, Sep 10, 2016 at 09:52:21AM +1000, Dave Chinner wrote:
>> > vfs_copy_file_range() implementation because ovl_copy_up_data()
>> > splices in small chunks allowing the user to kill the copying process.
>> > This makes sense because the poor process only called open(),
>> > so the app writer may not have been expecting a stall of copying
>> > a large file...
>>
>> So call vfs_copy_file_range() iteratively, just like is being done
>> right now for do_splice_direct() to limit latency on kill.
>
> I wish vfs_copy_file_range would do useful chunking itself.
>

I guess that is more changes than I set out to do here, but
if there is consensus about this idea I don't mind drafting the patch.

> But either way it might be a good idea to call vfs_clone_file_range
> first, because that gives your a very efficient copy without the need
> to copy anything if supported.

Definitely. I'll do that.
Thanks.
Amir Goldstein Sept. 10, 2016, 6:54 p.m. UTC | #8
On Sat, Sep 10, 2016 at 2:52 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Fri, Sep 09, 2016 at 11:27:34AM +0300, Amir Goldstein wrote:
>> On Fri, Sep 9, 2016 at 10:54 AM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Fri, Sep 09, 2016 at 10:31:02AM +0300, Amir Goldstein wrote:
>> >> On Thu, Sep 8, 2016 at 11:25 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Thu, Sep 08, 2016 at 06:29:54PM +0300, Amir Goldstein wrote:
>> >> Yes, I considered that. With this V0 patch, copy_file_range() is
>> >> called inside the copy data 'killable loop'
>> >> but, unlike the slower splice, it tries to copy the entire remaining
>> >> len on every cycle and will most likely get all or nothing without
>> >> causing any major stalls.
>> >> So my options for V1 are:
>> >> 1. use the existing loop only fallback to splice on any
>> >> copy_file_range() failure.
>> >> 2. add another (non killable?) loop before the splice killable loop to
>> >> try and copy up as much data with copy_file_range()
>> >> 3. implement ovl_copy_up_file_range() and do the fallback near the
>> >> call site of ovl_copy_up_data()
>> >
>> > vfs_copy_file_range() already has a fallback to call
>> > do_splice_direct() itself if ->copy_file_range() is not supported.
>> > i.e. it will behave identically to the existing code if
>> > copy_file_range is not supported by the underlying fs.
>> >
>>
>> I though so initially, but existing code is not identical to the
>> vfs_copy_file_range() implementation because ovl_copy_up_data()
>> splices in small chunks allowing the user to kill the copying process.
>> This makes sense because the poor process only called open(),
>> so the app writer may not have been expecting a stall of copying
>> a large file...
>
> So call vfs_copy_file_range() iteratively, just like is being done
> right now for do_splice_direct() to limit latency on kill.
>
>> > If copy_file_range() fails, then it's for a reason that will cause
>> > do_splice_direct() to fail as well.
>> >
>> > vfs_copy_file_range() should really be a direct replacement for any
>> > code that calls do_splice_direct(). If it's not, we should make it
>> > so (e.g call do_splice direct for cross-fs copies automatically
>> > rather than returning EXDEV)
>>
>> But man page states that EXDEV will be returned if
>>      "The files referred to by file_in and file_out are not on the
>>       same mounted filesystem"
>
> That's the /syscall/ man page, not how we must implement the
> internal helper. Did you even /look/ at vfs_copy_file_range()?
> hint:
>
>         /* this could be relaxed once a method supports cross-fs copies */
>         if (inode_in->i_sb != inode_out->i_sb)
>                 return -EXDEV;
>
>
>>
>> I guess that when API is updated to allow for non zero flags,
>> then vfs_copy_file_range() should do_splice() instead or returning
>> EXDEV, only if (flags == COPY_FR_COPY).
>
> Not necessary - just hoist the EXDEV check to the syscall layer.
> Then, as I've already said, make vfs_copy_file_range "call do_splice
> direct for cross-fs copies automatically".
>
> i.e. vfs_copy_file_range() should just copy the data in the most
> efficient way possible for the given src/dst inode pair.  In future,
> if we add capability for offload of cross-fs copies, we can add the
> infrastructure to do that within vfs_copy_file_range() and not have
> to change a single caller to take advantage of it....
>
>> > and then replace all the calls in the
>> > kernel to do_splice_direct() with vfs_copy_file_range()....
>>
>> So in this case, I could not have replaced do_splice_direct() with
>> vfs_copy_file_range(), because I would either break the killable loop
>> behavior, or call copy_file_range() in small chunks which is not
>> desirable - is it?
>
> Of course you can call vfs_copy_file_range() in small chunks. It's
> just not going to be as efficient as a single large copy offload.
> Worst case, it ends up being identical to what ovl is doing now.
>
> But the question here is this: why are you even trying to /copy/ the
> data?  That's not guaranteed to do a fast, atomic,
> zero-data-movement operation. i.e. what we really want here first is
> an attempt to /clone/ the data:
>
>         1. try a fast, atomic, metadata clone operation like reflink
>         2. try a fast, optimised data copy
>         3. if all else fails, use do_splice_direct() to copy data.
>
> i.e first try vfs_clone_file_range() because:
>
> http://oss.sgi.com/archives/xfs/2015-12/msg00356.html
>
>         [...] Note that clones are different from
>         file copies in several ways:
>
>          - they are atomic vs other writers
>          - they support whole file clones
>          - they support 64-bit length clones
>          - they do not allow partial success (aka short writes)
>          - clones are expected to be a fast metadata operation
>

I admit I missed this note. Perhaps it would be good to keep it in a
comment next to the copy/clone file range helpers.

> i.e. if you want to use reflink type methods to optimise copy-up
> latency, then you need to be /cloning/ the file, not copying it.

That's good advice and I will definitely follow it.
I shall call clone_file_range once above the splice loop.

> You can test whether this is supported at mount time, so you do a
> simply flag test at copyup to determine if a clone should be
> attempted or not.
>

I am not sure that would not be over-optimization.
I am already checking for the obvious reason for clone to fail in copyup -
different i_sb.
After all, if I just call clone_file_range() once and it fails, then we are back
to copying and that is going to make the cost of calling clone insignificant.

> If cloning fails or is not supported, then try vfs_copy_file_range()
> to do an optimised iterative partial range file copy.  Finally, try
> a slow, iterative partial range file copies using
> do_splice_direct(). This part can be wholly handled by
> vfs_copy_file_range() - this 'not supported' fallback doesn't need
> to be implemented every time someone wants to copy data between two
> files...

You do realize that vfs_copy_file_range() itself does the 'not
supported' fallback, and if I do call it iteratively, then there will
be a lot more 'not supported' attempts than there are in my current
patch.
But regardless, as I wrote to Christoph, changing the
vfs_copy_file_range() helper and changing users of do_splice_direct()
to use it like you suggested sounds like it may be the right thing to
do, but without consensus I am a bit hesitant to make those changes.
I am definitely willing to draft the patch and test it if I get more
ACKs on the idea.

Beyond the question of whether or not to change vfs_copy_file_range(),
there is the pragmatic question of which workload is going to benefit from
this in the copyup case.

My patch sets out to improve a very clear and immediate problem for
overlayfs over xfs setup (lower and upper on the same fs).
It is not a hypothetical case, it is a very common case for docker.
Furthermore, when docker users realize they have this improvement,
it will provide a very good incentive for users (and container distros) to
test and deploy overlayfs over xfs with reflink support.
So it would be a great service to the docker community if xfs reflink support
were out with v4.9 (hint hint).

The copy_file_range()-before-do_splice() step, OTOH, may be useful
for overlayfs over nfs (lower and upper on the same fs), but I don't know
whether that is an interesting use case. Anyone?

Thanks for the good review comments.
Will work on V1 tomorrow.

Cheers,
Amir.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dave Chinner Sept. 11, 2016, 10:11 p.m. UTC | #9
On Sat, Sep 10, 2016 at 09:54:59PM +0300, Amir Goldstein wrote:
> On Sat, Sep 10, 2016 at 2:52 AM, Dave Chinner <david@fromorbit.com> wrote:
> 
> > You can test whether this is supported at mount time, so you do a
> > simply flag test at copyup to determine if a clone should be
> > attempted or not.
> >
> 
> I am not sure that would not be over optimization.
> I am already checking for the obvious reason for clone to fail in copyup -
> different i_sb.

Again, please don't do that.  Call vfs_clone_file_range() as it
checks a whole lot more stuff that can cause a clone to fail. And it
makes sure that the write references to the mnt are taken so that
things like freeze and remount-ro behave correctly while a clone is
in progress.

> After all, if I just call clone_file_range() once and it fails, then we are back
> to copying and that is going to make the cost of calling clone insignificant.

Apart from the fact that the ->clone_file_range() calls assume that
all the validity checks have already been done by the caller, which
you are not doing.

> > If cloning fails or is not supported, then try vfs_copy_file_range()
> > to do an optimised iterative partial range file copy.  Finally, try
> > a slow, iterative partial range file copies using
> > do_splice_direct(). This part can be wholly handled by
> > vfs_copy_file_range() - this 'not supported' fallback doesn't need
> > to be implemented every time someone wants to copy data between two
> > files...
> 
> You do realize that vfs_copy_file_range() itself does the 'not
> supported' fallback
> and if I do call it iteratively, then there will be a lot more 'not
> supported' attempts
> than there are in my current patch.

No shit, Sherlock. But you're concentrating on the wrong thing -
the overhead of checking if .clone_file_range/.copy_file_range is
implemented and can be executed is effectively zero compared to
copying any amount of data.

IOWs, Amir, you're trying to *optimise the wrong thing*. It's the
data copy that is costly and needs to be optimised, not the
iteration or the checks done to determine what type of clone/copy
can be executed. Shortcuts around generic helpers like you are
proposing are more costly in the long run because code like this is
much more likely to contain/mask bugs that only appear months or
years later when something else is changed. Case in point: the mnt
write references that need to be taken before calling
clone/copy_file_range()....

Please, just use vfs_clone_file_range() and vfs_copy_file_range()
and only fall back to a slower method if the error returned is
-EOPNOTSUPP. For any other error, the copy should fail, not be
ignored.

> But regardless, as I wrote to Christoph, changing the
> vfs_copy_file_range() helper
> and changing users of do_splice to use it like you suggested sounds
> like it may be the right thing to do, but without consensus, I am a bit hesitant
> to make those changes. I am definitely willing to draft the patch and test it
> if I get more ACKs on the idea.

Send a patch - that's the only way you'll get anyone to comment
on it.

Cheers,

Dave.
Amir Goldstein Sept. 12, 2016, 6:52 a.m. UTC | #10
On Mon, Sep 12, 2016 at 1:11 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Sat, Sep 10, 2016 at 09:54:59PM +0300, Amir Goldstein wrote:
>> On Sat, Sep 10, 2016 at 2:52 AM, Dave Chinner <david@fromorbit.com> wrote:
>>
>> > You can test whether this is supported at mount time, so you do a
>> > simple flag test at copyup to determine if a clone should be
>> > attempted or not.
>> >
>>
>> I am not sure that would not be over-optimization.
>> I am already checking for the obvious reason for clone to fail in copyup -
>> different i_sb.
>
> Again, please don't do that.  Call vfs_clone_file_range() as it
> checks a whole lot more stuff that can cause a clone to fail. And it
> makes sure that the write references to the mnt are taken so that
> things like freeze and remount-ro behave correctly while a clone is
> in progress.
>

OK

>> After all, if I just call clone_file_range() once and it fails, then we are back
>> to copying and that is going to make the cost of calling clone insignificant.
>
> Apart from the fact that the ->clone_file_range() calls assume that
> all the validity checks have already been done by the caller, which
> you are not doing.
>
>> > If cloning fails or is not supported, then try vfs_copy_file_range()
>> > to do an optimised iterative partial range file copy.  Finally, try
>> > slow, iterative partial range file copies using
>> > do_splice_direct(). This part can be wholly handled by
>> > vfs_copy_file_range() - this 'not supported' fallback doesn't need
>> > to be implemented every time someone wants to copy data between two
>> > files...
>>
>> You do realize that vfs_copy_file_range() itself does the 'not
>> supported' fallback
>> and if I do call it iteratively, then there will be a lot more 'not
>> supported' attempts
>> than there are in my current patch.
>
> No shit, Sherlock. But you're concentrating on the wrong thing -
> the overhead of checking if .clone_file_range/.copy_file_range is
> implemented and can be executed is effectively zero compared to
> copying any amount of data.
>
> IOWs, Amir, you're trying to *optimise the wrong thing*. It's the
> data copy that is costly and needs to be optimised, not the
> iteration or the checks done to determine what type of clone/copy
> can be executed. Shortcuts around generic helpers like you are
> proposing are more costly in the long run because code like this is
> much more likely to contain/mask bugs that only appear months or
> years later when something else is changed. Case in point: the mnt
> write references that need to be taken before calling
> clone/copy_file_range()....
>
> Please, just use vfs_clone_file_range() and vfs_copy_file_range()
> and only fall back to a slower method if the error returned is
> -EOPNOTSUPP. For any other error, the copy should fail, not be
> ignored.
>

Obviously, you meant to check for -EOPNOTSUPP or -EXDEV

>> But regardless, as I wrote to Christoph, changing the
>> vfs_copy_file_range() helper
>> and changing users of do_splice to use it like you suggested sounds
>> like it may be the right thing to do, but without consensus, I am a bit hesitant
>> to make those changes. I am definitely willing to draft the patch and test it
>> if I get more ACKs on the idea.
>
> Send a patch - that's the only way you'll get anyone to comment
> on it.
>

Will do.

Thanks,
Amir.
Amir Goldstein Sept. 12, 2016, 3:06 p.m. UTC | #11
Btrfs has file reflink support and XFS is about to gain
file reflink support soon. It is very useful to use reflink
to implement copy up of regular file data when possible.

For example, on my laptop, xfstest overlay/001 (copy up of 4G
sparse files) takes less than 1 second with copy up by reflink
vs. 25 seconds with regular copy up.

This series includes two pairs of patches:
- patches 1,2 utilize the clone_file_range() API
- patches 3,4 utilize the copy_file_range() API

The two pairs of patches are independent of each other.
They were each tested separately and both tested together.
All combinations passed the unionmount-testsuite (over tmpfs).
All combinations passed the overlay/??? xfstests over the
following underlying fs:
1. ext4 (copy up)
2. xfs + reflink patches + mkfs.xfs (copy up)
3. xfs + reflink patches + mkfs.xfs -m reflink=1 (reflink up)

Dave Chinner suggested the following implementation for copy up,
which I implemented in this series:
1. try to clone_file_range() entire length
2. fallback to trying copy_file_range() in small chunks
3. fallback to do_splice_direct() in small chunks

This is a good general implementation to cover the future use cases of
file systems that can do either clone_file_range() or copy_file_range().
However, currently, the only in-tree file systems that support
clone/copy_file_range are btrfs, xfs (soon), cifs and nfs.
btrfs and xfs use the same implementation for clone and copy range,
so the copy_file_range() step is never needed.
cifs supports only clone_file_range(), so the copy_file_range() step is moot.
nfs does have a different implementation for clone_file_range() and
copy_file_range(), but nfs is not supported as an upper layer for overlayfs
at the moment.

Please pick patches 1,2 for clear and immediate benefit to copy up
performance on filesystems with reflink support.

Please consider picking patches 3,4 additionally for future generations
and for code consolidation into vfs helpers.

Cheers,
Amir.

V2:
- Re-factor vfs helpers so they can be called from copy up
- Single call to vfs_clone_file_range() and fallback to
  vfs_copy_file_range() loop

V1:
- Replace iterative call to copy_file_range() with
  a single call to clone_file_range()

V0:
- Call clone_file_range() and fallback to do_splice_direct()

Amir Goldstein (4):
  vfs: allow vfs_clone_file_range() across mount points
  ovl: use vfs_clone_file_range() for copy up if possible
  vfs: allow vfs_copy_file_range() across file systems
  ovl: use vfs_copy_file_range() to copy up file data

 fs/ioctl.c             |  2 ++
 fs/overlayfs/copy_up.c | 19 +++++++++++++++----
 fs/read_write.c        | 23 ++++++++++++++++-------
 3 files changed, 33 insertions(+), 11 deletions(-)
Amir Goldstein Sept. 12, 2016, 3:37 p.m. UTC | #12
On Mon, Sep 12, 2016 at 9:52 AM, Amir Goldstein <amir73il@gmail.com> wrote:
> On Mon, Sep 12, 2016 at 1:11 AM, Dave Chinner <david@fromorbit.com> wrote:
>> On Sat, Sep 10, 2016 at 09:54:59PM +0300, Amir Goldstein wrote:
>>> On Sat, Sep 10, 2016 at 2:52 AM, Dave Chinner <david@fromorbit.com> wrote:
>>>
>>> > You can test whether this is supported at mount time, so you do a
>>> > simple flag test at copyup to determine if a clone should be
>>> > attempted or not.
>>> >
>>>
>>> I am not sure that would not be over-optimization.
>>> I am already checking for the obvious reason for clone to fail in copyup -
>>> different i_sb.
>>
>> Again, please don't do that.  Call vfs_clone_file_range() as it
>> checks a whole lot more stuff that can cause a clone to fail. And it
>> makes sure that the write references to the mnt are taken so that
>> things like freeze and remount-ro behave correctly while a clone is
>> in progress.
>>
>
> OK
>

Dave,

I just sent out v2 patch series that follows your suggestions (I hope).
Please note that inside vfs_copy_file_range() I *did* add a pre-condition
of different i_sb *before* calling into ->copy_file_range().
The reason is that not all filesystems check for different i_sb inside the
implementation (e.g. nfs), and since I removed the same-i_sb constraint from
vfs_copy_file_range() I wanted to make sure that the different-i_sb case
always ends up in do_splice_direct() and never propagates into the fs
implementation, where the consequences are unknown.

Please reply on the new patch set if you disagree.
Thanks,

Amir.

>>> After all, if I just call clone_file_range() once and it fails, then we are back
>>> to copying and that is going to make the cost of calling clone insignificant.
>>
>> Apart from the fact that the ->clone_file_range() calls assume that
>> all the validity checks have already been done by the caller, which
>> you are not doing.
>>
>>> > If cloning fails or is not supported, then try vfs_copy_file_range()
>>> > to do an optimised iterative partial range file copy.  Finally, try
>>> > slow, iterative partial range file copies using
>>> > do_splice_direct(). This part can be wholly handled by
>>> > vfs_copy_file_range() - this 'not supported' fallback doesn't need
>>> > to be implemented every time someone wants to copy data between two
>>> > files...
>>>
>>> You do realize that vfs_copy_file_range() itself does the 'not
>>> supported' fallback
>>> and if I do call it iteratively, then there will be a lot more 'not
>>> supported' attempts
>>> than there are in my current patch.
>>
>> No shit, Sherlock. But you're concentrating on the wrong thing -
>> the overhead of checking if .clone_file_range/.copy_file_range is
>> implemented and can be executed is effectively zero compared to
>> copying any amount of data.
>>
>> IOWs, Amir, you're trying to *optimise the wrong thing*. It's the
>> data copy that is costly and needs to be optimised, not the
>> iteration or the checks done to determine what type of clone/copy
>> can be executed. Shortcuts around generic helpers like you are
>> proposing are more costly in the long run because code like this is
>> much more likely to contain/mask bugs that only appear months or
>> years later when something else is changed. Case in point: the mnt
>> write references that need to be taken before calling
>> clone/copy_file_range()....
>>
>> Please, just use vfs_clone_file_range() and vfs_copy_file_range()
>> and only fall back to a slower method if the error returned is
>> -EOPNOTSUPP. For any other error, the copy should fail, not be
>> ignored.
>>
>
> Obviously, you meant to check for -EOPNOTSUPP or -EXDEV
>
>>> But regardless, as I wrote to Christoph, changing the
>>> vfs_copy_file_range() helper
>>> and changing users of do_splice to use it like you suggested sounds
>>> like it may be the right thing to do, but without consensus, I am a bit hesitant
>>> to make those changes. I am definitely willing to draft the patch and test it
>>> if I get more ACKs on the idea.
>>
>> Send a patch - that's the only way you'll get anyone to comment
>> on it.
>>
>
> Will do.
>
> Thanks,
> Amir.

Patch

diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
index 43fdc27..400567b 100644
--- a/fs/overlayfs/copy_up.c
+++ b/fs/overlayfs/copy_up.c
@@ -121,6 +121,7 @@  static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
 	struct file *new_file;
 	loff_t old_pos = 0;
 	loff_t new_pos = 0;
+	int try_copy_file = 0;
 	int error = 0;
 
 	if (len == 0)
@@ -136,6 +137,13 @@  static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
 		goto out_fput;
 	}
 
+	/*
+	 * When copying up within the same fs, try to use fs's copy_file_range
+	 */
+	if (file_inode(old_file)->i_sb == file_inode(new_file)->i_sb) {
+		try_copy_file = (new_file->f_op->copy_file_range != NULL);
+	}
+
 	/* FIXME: copy up sparse files efficiently */
 	while (len) {
 		size_t this_len = OVL_COPY_UP_CHUNK_SIZE;
@@ -149,9 +157,23 @@  static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
 			break;
 		}
 
-		bytes = do_splice_direct(old_file, &old_pos,
-					 new_file, &new_pos,
-					 this_len, SPLICE_F_MOVE);
+		if (try_copy_file) {
+			bytes = new_file->f_op->copy_file_range(
+					old_file, old_pos,
+					new_file, new_pos,
+					len, 0);
+			if (bytes == -EOPNOTSUPP) {
+				try_copy_file = 0;
+				continue;
+			} else if (bytes > 0) {
+				old_pos += bytes;
+				new_pos += bytes;
+			}
+		} else {
+			bytes = do_splice_direct(old_file, &old_pos,
+					new_file, &new_pos,
+					this_len, SPLICE_F_MOVE);
+		}
 		if (bytes <= 0) {
 			error = bytes;
 			break;