
[16/20] xfs: pass a 64-bit count argument to xfs_iomap_write_unwritten

Message ID 1421925006-24231-17-git-send-email-hch@lst.de (mailing list archive)
State New, archived
Headers show

Commit Message

Christoph Hellwig Jan. 22, 2015, 11:10 a.m. UTC
The code is already prepared for it, and the pnfs layout commit code expects
to be able to pass an argument larger than 32 bits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_iomap.c | 2 +-
 fs/xfs/xfs_iomap.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

Comments

J. Bruce Fields Jan. 29, 2015, 8:52 p.m. UTC | #1
Who can give us ACKs on these last five fs/xfs patches?  (And is it
going to cause trouble if they go in through the nfsd tree?)

--b.

On Thu, Jan 22, 2015 at 12:10:02PM +0100, Christoph Hellwig wrote:
> The code is already prepared for it, and the pnfs layout commit code expects
> to be able to pass an argument larger than 32 bits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/xfs/xfs_iomap.c | 2 +-
>  fs/xfs/xfs_iomap.h | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index c980e2a..ccb1dd0 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -802,7 +802,7 @@ int
>  xfs_iomap_write_unwritten(
>  	xfs_inode_t	*ip,
>  	xfs_off_t	offset,
> -	size_t		count)
> +	xfs_off_t	count)
>  {
>  	xfs_mount_t	*mp = ip->i_mount;
>  	xfs_fileoff_t	offset_fsb;
> diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
> index 411fbb8..8688e66 100644
> --- a/fs/xfs/xfs_iomap.h
> +++ b/fs/xfs/xfs_iomap.h
> @@ -27,6 +27,6 @@ int xfs_iomap_write_delay(struct xfs_inode *, xfs_off_t, size_t,
>  			struct xfs_bmbt_irec *);
>  int xfs_iomap_write_allocate(struct xfs_inode *, xfs_off_t,
>  			struct xfs_bmbt_irec *);
> -int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, size_t);
> +int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t);
>  
>  #endif /* __XFS_IOMAP_H__*/
> -- 
> 1.9.1
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Christoph Hellwig Feb. 2, 2015, 7:30 a.m. UTC | #2
On Thu, Jan 29, 2015 at 03:52:32PM -0500, J. Bruce Fields wrote:
> Who can give us ACKs on these last five fs/xfs patches?  (And is it
> going to cause trouble if they go in through the nfsd tree?)


We'd need ACKs from Dave.  He has already pulled in two patches, so
we might run into some conflicts.  Maybe the best idea is to add the
exportfs patch to both the XFS and nfsd trees, so each of them can
pull in the rest?  Or we could commit the two XFS preparation patches
to both trees and get something that compiles and works in the nfsd
tree.
Dave Chinner Feb. 2, 2015, 7:24 p.m. UTC | #3
On Mon, Feb 02, 2015 at 08:30:24AM +0100, Christoph Hellwig wrote:
> On Thu, Jan 29, 2015 at 03:52:32PM -0500, J. Bruce Fields wrote:
> > Who can give us ACKs on these last five fs/xfs patches?  (And is it
> > going to cause trouble if they go in through the nfsd tree?)
> 
> 
> We'd need ACKs from Dave.  He already has pulled in two patches so
> we might run into some conflicts.  Maybe the best idea is to add the
> exportfs patch to both the XFS and nfsd trees, so each of them can
> pull in the rest?  Or we could commit the two XFS preparation patches
> to both trees and get something that compiles and works in the nfsd
> tree.

This patch has already been committed to the XFS repo.

Cheers,

Dave.
Dave Chinner Feb. 2, 2015, 7:43 p.m. UTC | #4
On Tue, Feb 03, 2015 at 06:24:04AM +1100, Dave Chinner wrote:
> On Mon, Feb 02, 2015 at 08:30:24AM +0100, Christoph Hellwig wrote:
> > On Thu, Jan 29, 2015 at 03:52:32PM -0500, J. Bruce Fields wrote:
> > > Who can give us ACKs on these last five fs/xfs patches?  (And is it
> > > going to cause trouble if they go in through the nfsd tree?)
> > 
> > 
> > We'd need ACKs from Dave.  He already has pulled in two patches so
> > we might run into some conflicts.  Maybe the best idea is to add the
> > exportfs patch to both the XFS and nfsd trees, so each of them can
> > pull in the rest?  Or we could commit the two XFS preparation patches
> > to both trees and get something that compiles and works in the nfsd
> > tree.
> 
> This patch has already been committed to the XFS repo.

And it looks like I missed the sync transaction on growfs patch,
too, so I'll commit that one later today.

As to the pNFSD specific changes, I haven't really looked them over
in any great detail yet.  My main concern is that there are no
specific regression tests for this yet; I'm not sure how we go about
verifying that it actually works properly and that we don't inadvertently
break it in the future.  Christoph?

Cheers,

Dave.
J. Bruce Fields Feb. 2, 2015, 7:48 p.m. UTC | #5
On Tue, Feb 03, 2015 at 06:43:00AM +1100, Dave Chinner wrote:
> On Tue, Feb 03, 2015 at 06:24:04AM +1100, Dave Chinner wrote:
> > On Mon, Feb 02, 2015 at 08:30:24AM +0100, Christoph Hellwig wrote:
> > > On Thu, Jan 29, 2015 at 03:52:32PM -0500, J. Bruce Fields wrote:
> > > > Who can give us ACKs on these last five fs/xfs patches?  (And is it
> > > > going to cause trouble if they go in through the nfsd tree?)
> > > 
> > > 
> > > We'd need ACKs from Dave.  He already has pulled in two patches so
> > > we might run into some conflicts.  Maybe the best idea is to add the
> > > exportfs patch to both the XFS and nfsd trees, so each of them can
> > > pull in the rest?  Or we could commit the two XFS preparation patches
> > > to both trees and get something that compiles and works in the nfsd
> > > tree.
> > 
> > This patch has already been committed to the XFS repo.
> 
> And it looks like I missed the sync transaction on growfs patch,
> too, so I'll commit that one later today.
> 
> As to the pNFSD specific changes, I haven't really looked them over
> in any great detail yet. My main concern is that there are no
> specific regression tests for this yet, I'm not sure how we go about
> verifying it actually works properly and we don't inadvertently
> break it in the future. Christoph?

Previously: http://lkml.kernel.org/r/20150106175611.GA16413@lst.de

	>       - any advice on testing?  Is there some simple
	>       virtual setup that would allow any loser with no special
	>       hardware (e.g., me) to check whether they've broken the
	>       block server?

	Run two kvm VMs that share the same disk.  Create an XFS
	filesystem on the MDS, and export it.  If the client has blkmapd
	running (on Debian it needs to be started manually) it will use
	pNFS for accessing the filesystem.  Verify that using the
	per-operation counters in /proc/self/mountstats.  Repeat with
	additional clients as necessary.

	Alternatively set up a simple iSCSI target using tgt or lio and
	connect to it from multiple clients.

Which sounds reasonable to me, but I haven't tried to incorporate this
into my regression testing yet.

--b.
Christoph Hellwig Feb. 3, 2015, 6:35 p.m. UTC | #6
On Mon, Feb 02, 2015 at 02:48:26PM -0500, J. Bruce Fields wrote:
> Previously: http://lkml.kernel.org/r/20150106175611.GA16413@lst.de
> 
> 	>       - any advice on testing?  Is there some simple
> 	>       virtual setup that would allow any loser with no special
> 	>       hardware (e.g., me) to check whether they've broken the
> 	>       block server?
> 
> 	Run two kvm VMs that share the same disk.  Create an XFS
> 	filesystem on the MDS, and export it.  If the client has blkmapd
> 	running (on Debian it needs to be started manually) it will use
> 	pNFS for accessing the filesystem.  Verify that using the
> 	per-operation counters in /proc/self/mountstats.  Repeat with
> 	additional clients as necessary.
> 
> 	Alternatively set up a simple iSCSI target using tgt or lio and
> 	connect to it from multiple clients.
> 
> Which sounds reasonable to me, but I haven't tried to incorporate this
> into my regression testing yet.

Additionally, I can offer the following script to generate recalls,
which don't really happen during normal operation.  I don't
really know how to write a proper test case that coordinates access
to the exported filesystem and NFS unless it runs locally on the same
node, though.  It would need some higher-level, network-aware test harness:

----- snip -----
#!/bin/sh

set +x

# wait for grace period
touch /mnt/nfs1/foo

dd if=/dev/zero of=/mnt/nfs1/foo bs=128M count=32 conv=fdatasync oflag=direct &

sleep 2

echo "" > /mnt/test/foo && echo "recall done"
Christoph Hellwig Feb. 4, 2015, 7:57 a.m. UTC | #7
On Tue, Feb 03, 2015 at 06:43:00AM +1100, Dave Chinner wrote:
> As to the pNFSD specific changes, I haven't really looked them over
> in any great detail yet. My main concern is that there are no
> specific regression tests for this yet, I'm not sure how we go about
> > verifying it actually works properly and we don't inadvertently
> break it in the future. Christoph?

Any chance you could review them this week so we can get them
merged in time for 3.20?  In the worst case Bruce will have to pull
the xfs tree into the nfsd tree so that we have all the dependencies
available.
Dave Chinner Feb. 4, 2015, 8:02 p.m. UTC | #8
On Wed, Feb 04, 2015 at 08:57:56AM +0100, Christoph Hellwig wrote:
> On Tue, Feb 03, 2015 at 06:43:00AM +1100, Dave Chinner wrote:
> > As to the pNFSD specific changes, I haven't really looked them over
> > in any great detail yet. My main concern is that there are no
> > specific regression tests for this yet, I'm not sure how we go about
> > > verifying it actually works properly and we don't inadvertently
> > break it in the future. Christoph?
> 
> Any chance you could review them this week so we can get them
> merged in time for 3.20?  In the worst case Bruce will have to pull
> the xfs tree into the nfsd tree so that we have all the dependencies
> available.

I'm working my way through them.  I'm just about to pull in the
growfs transaction patch (missed it last time around), and I'll try to
have a decent look at the other two patches later today.

I'm not sure I have any bandwidth to test them yet, but perhaps if I
add a one-time message "Experimental feature in use" when the code
is first executed, then it will be OK to merge (i.e. a process similar
to the delayed logging and CRC introductions).  Once we've got more
confidence that it's all working properly, then we can remove the
experimental tag from it.  Does that sound like a reasonable
approach to take?

Cheers,

Dave.
J. Bruce Fields Feb. 11, 2015, 10:35 p.m. UTC | #9
On Tue, Feb 03, 2015 at 07:35:33PM +0100, Christoph Hellwig wrote:
> On Mon, Feb 02, 2015 at 02:48:26PM -0500, J. Bruce Fields wrote:
> > Previously: http://lkml.kernel.org/r/20150106175611.GA16413@lst.de
> > 
> > 	>       - any advice on testing?  Is there some simple
> > 	>       virtual setup that would allow any loser with no special
> > 	>       hardware (e.g., me) to check whether they've broken the
> > 	>       block server?
> > 
> > 	Run two kvm VMs that share the same disk.  Create an XFS
> > 	filesystem on the MDS, and export it.  If the client has blkmapd
> > 	running (on Debian it needs to be started manually) it will use
> > 	pNFS for accessing the filesystem.  Verify that using the
> > 	per-operation counters in /proc/self/mountstats.  Repeat with
> > 	additional clients as necessary.
> > 
> > 	Alternatively set up a simple iSCSI target using tgt or lio and
> > 	connect to it from multiple clients.
> > 
> > Which sounds reasonable to me, but I haven't tried to incorporate this
> > into my regression testing yet.
> 
> Additionally I can offer the following script to generate recalls,
> which don't really happen during normal operation.  I don't
> really know how to write a proper testcase that coordinates access
> to the exported filesystem and nfs unless it runs locally on the same node,
> though.  It would need some higher level, network aware test harness:

Thanks.  I got as far as doing a quick manual test with VMs sharing a
"disk":

	[root@f21-2]# mount -overs=4.1 f21-1:/exports/xfs-pnfs /mnt/
	[root@f21-2]# echo "hello world" >/mnt/testfile
	[root@f21-2]# grep LAYOUTGET /proc/self/mountstats 
		   LAYOUTGET: 1 1 0 236 196 0 4 4

I haven't tried to set up automated testing with recalls, but that
shouldn't be hard.

--b.

> 
> ----- snip -----
> #!/bin/sh
> 
> set +x
> 
> # wait for grace period
> touch /mnt/nfs1/foo
> 
> dd if=/dev/zero of=/mnt/nfs1/foo bs=128M count=32 conv=fdatasync oflag=direct &
> 
> sleep 2
> 
> echo "" > /mnt/test/foo && echo "recall done"
J. Bruce Fields Feb. 11, 2015, 10:54 p.m. UTC | #10
On Wed, Feb 11, 2015 at 05:35:22PM -0500, J. Bruce Fields wrote:
> On Tue, Feb 03, 2015 at 07:35:33PM +0100, Christoph Hellwig wrote:
> > On Mon, Feb 02, 2015 at 02:48:26PM -0500, J. Bruce Fields wrote:
> > > Previously: http://lkml.kernel.org/r/20150106175611.GA16413@lst.de
> > > 
> > > 	>       - any advice on testing?  Is there some simple
> > > 	>       virtual setup that would allow any loser with no special
> > > 	>       hardware (e.g., me) to check whether they've broken the
> > > 	>       block server?
> > > 
> > > 	Run two kvm VMs that share the same disk.  Create an XFS
> > > 	filesystem on the MDS, and export it.  If the client has blkmapd
> > > 	running (on Debian it needs to be started manually) it will use
> > > 	pNFS for accessing the filesystem.  Verify that using the
> > > 	per-operation counters in /proc/self/mountstats.  Repeat with
> > > 	additional clients as necessary.
> > > 
> > > 	Alternatively set up a simple iSCSI target using tgt or lio and
> > > 	connect to it from multiple clients.
> > > 
> > > Which sounds reasonable to me, but I haven't tried to incorporate this
> > > into my regression testing yet.
> > 
> > Additionally I can offer the following script to generate recalls,
> > which don't really happen during normal operation.  I don't
> > really know how to write a proper testcase that coordinates access
> > to the exported filesystem and nfs unless it runs locally on the same node,
> > though.  It would need some higher level, network aware test harness:
> 
> Thanks.  I got as far as doing a quick manual test with VMs sharing a
> "disk":

Oh, also I forgot, on fedora:

	[root@f21-2]# systemctl enable nfs-blkmap.target

> 	[root@f21-2]# mount -overs=4.1 f21-1:/exports/xfs-pnfs /mnt/
> 	[root@f21-2]# echo "hello world" >/mnt/testfile
> 	[root@f21-2]# grep LAYOUTGET /proc/self/mountstats 
> 		   LAYOUTGET: 1 1 0 236 196 0 4 4
> 
> I haven't tried to set up automated testing with recalls, but that
> shouldn't be hard.
> 
> --b.
> 
> > 
> > ----- snip -----
> > #!/bin/sh
> > 
> > set +x
> > 
> > # wait for grace period
> > touch /mnt/nfs1/foo
> > 
> > dd if=/dev/zero of=/mnt/nfs1/foo bs=128M count=32 conv=fdatasync oflag=direct &
> > 
> > sleep 2
> > 
> > echo "" > /mnt/test/foo && echo "recall done"

Patch

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index c980e2a..ccb1dd0 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -802,7 +802,7 @@ int
 xfs_iomap_write_unwritten(
 	xfs_inode_t	*ip,
 	xfs_off_t	offset,
-	size_t		count)
+	xfs_off_t	count)
 {
 	xfs_mount_t	*mp = ip->i_mount;
 	xfs_fileoff_t	offset_fsb;
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index 411fbb8..8688e66 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -27,6 +27,6 @@ int xfs_iomap_write_delay(struct xfs_inode *, xfs_off_t, size_t,
 			struct xfs_bmbt_irec *);
 int xfs_iomap_write_allocate(struct xfs_inode *, xfs_off_t,
 			struct xfs_bmbt_irec *);
-int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, size_t);
+int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t);
 
 #endif /* __XFS_IOMAP_H__*/