
[4/3] xfs: attach dquots to inode before reading data/cow fork mappings

Message ID: Y4OuLTwPVdiHMBGi@magnolia
State: Superseded
Series: xfs: fixes for 6.2

Commit Message

Darrick J. Wong Nov. 27, 2022, 6:36 p.m. UTC
From: Darrick J. Wong <djwong@kernel.org>

I've been running near-continuous integration testing of online fsck,
and I've noticed that once a day, one of the ARM VMs will fail the test
with out of order records in the data fork.

xfs/804 races fsstress with online scrub (aka scan but do not change
anything), so I think this might be a bug in the core xfs code.  This
also only seems to trigger if one runs the test for more than ~6 minutes
via TIME_FACTOR=13 or something.
https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/tree/tests/xfs/804?h=djwong-wtf

I added a debugging patch to the kernel to check the data fork extents
after taking the ILOCK, before dropping ILOCK, and before and after each
bmapping operation.  So far I've narrowed it down to the delalloc code
inserting a record in the wrong place in the iext tree:

xfs_bmap_add_extent_hole_delay, near line 2691:

	case 0:
		/*
		 * New allocation is not contiguous with another
		 * delayed allocation.
		 * Insert a new entry.
		 */
		oldlen = newlen = 0;
		xfs_iunlock_check_datafork(ip);		<-- ok here
		xfs_iext_insert(ip, icur, new, state);
		xfs_iunlock_check_datafork(ip);		<-- bad here
		break;
	}
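
For reference, the check itself is nothing fancy -- roughly the
following, reconstructed from memory of my debug patch (the helper name
is real, the body is a sketch): walk the incore extent tree and assert
that the records are sorted and do not overlap.

static void
xfs_iunlock_check_datafork(
	struct xfs_inode	*ip)
{
	struct xfs_ifork	*ifp = &ip->i_df;
	struct xfs_iext_cursor	icur;
	struct xfs_bmbt_irec	got;
	xfs_fileoff_t		next_off = 0;

	for_each_xfs_iext(ifp, &icur, &got) {
		/* mappings must be sorted and must not overlap */
		ASSERT(got.br_startoff >= next_off);
		next_off = got.br_startoff + got.br_blockcount;
	}
}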

I recorded the state of the data fork mappings and iext cursor state
when a corrupt data fork is detected immediately after the
xfs_bmap_add_extent_hole_delay call in xfs_bmapi_reserve_delalloc:

ino 0x140bb3 func xfs_bmapi_reserve_delalloc line 4164 data fork:
    ino 0x140bb3 nr 0x0 nr_real 0x0 offset 0xb9 blockcount 0x1f startblock 0x935de2 state 1
    ino 0x140bb3 nr 0x1 nr_real 0x1 offset 0xe6 blockcount 0xa startblock 0xffffffffe0007 state 0
    ino 0x140bb3 nr 0x2 nr_real 0x1 offset 0xd8 blockcount 0xe startblock 0x935e01 state 0

Here we see that a delalloc extent was inserted into the wrong position
in the iext leaf, same as all the other times.  The extra trace data I
collected are as follows:

ino 0x140bb3 fork 0 oldoff 0xe6 oldlen 0x4 oldprealloc 0x6 isize 0xe6000
    ino 0x140bb3 oldgotoff 0xea oldgotstart 0xfffffffffffffffe oldgotcount 0x0 oldgotstate 0
    ino 0x140bb3 crapgotoff 0x0 crapgotstart 0x0 crapgotcount 0x0 crapgotstate 0
    ino 0x140bb3 freshgotoff 0xd8 freshgotstart 0x935e01 freshgotcount 0xe freshgotstate 0
    ino 0x140bb3 nowgotoff 0xe6 nowgotstart 0xffffffffe0007 nowgotcount 0xa nowgotstate 0
    ino 0x140bb3 oldicurpos 1 oldleafnr 2 oldleaf 0xfffffc00f0609a00
    ino 0x140bb3 crapicurpos 2 crapleafnr 2 crapleaf 0xfffffc00f0609a00
    ino 0x140bb3 freshicurpos 1 freshleafnr 2 freshleaf 0xfffffc00f0609a00
    ino 0x140bb3 newicurpos 1 newleafnr 3 newleaf 0xfffffc00f0609a00

The first line shows that xfs_bmapi_reserve_delalloc was called with
whichfork=XFS_DATA_FORK, off=0xe6, len=0x4, prealloc=6.

The second line ("oldgot") shows the contents of @got at the beginning
of the call, which are the results of the first iext lookup in
xfs_buffered_write_iomap_begin.

Line 3 ("crapgot") is the result of duplicating the cursor at the start
of the body of xfs_bmapi_reserve_delalloc and performing a fresh lookup
at @off.

Line 4 ("freshgot") is the result of a new xfs_iext_get_extent right
before the call to xfs_bmap_add_extent_hole_delay.  Totally garbage.

Line 5 ("nowgot") is contents of @got after the
xfs_bmap_add_extent_hole_delay call.

Line 6 is the contents of @icur at the beginning of the call.  Lines 7-9
are the contents of the iext cursors at the point where the block
mappings were sampled.

I think @oldgot is a HOLESTARTBLOCK extent because the first lookup
didn't find anything, so we filled in imap with "fake hole until the
end".  At the time of the first lookup, I suspect that there's only one
32-block unwritten extent in the mapping (hence oldicurpos==1) but by
the time we get to recording crapgot, crapicurpos==2.
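
(For reference, the "fake hole" is this bit near the top of
xfs_buffered_write_iomap_begin(), quoted from memory:)

	eof = !xfs_iext_lookup_extent(ip, &ip->i_df, offset_fsb, &icur, &imap);
	if (eof)
		imap.br_startoff = end_fsb; /* fake hole until the end */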

Dave then added:

Ok, that's much simpler to reason about, and implies the smoke is
coming from xfs_buffered_write_iomap_begin() or
xfs_bmapi_reserve_delalloc(). I suspect the former - it does a lot
of stuff with the ILOCK_EXCL held.....

.... including calling xfs_qm_dqattach_locked().

xfs_buffered_write_iomap_begin
  ILOCK_EXCL
  look up icur
  xfs_qm_dqattach_locked
    xfs_qm_dqattach_one
      xfs_qm_dqget_inode
        dquot cache miss
        xfs_iunlock(ip, XFS_ILOCK_EXCL);
        error = xfs_qm_dqread(mp, id, type, can_alloc, &dqp);
        xfs_ilock(ip, XFS_ILOCK_EXCL);
  ....
  xfs_bmapi_reserve_delalloc(icur)

Yup, that's what is letting the magic smoke out -
xfs_qm_dqattach_locked() can cycle the ILOCK. If that happens, we
can pass a stale icur to xfs_bmapi_reserve_delalloc() and it all
goes downhill from there.

So.  Fix this by moving the dqattach_locked call up, and add a comment
about how we must attach the dquots *before* sampling the data/cow fork
contents.

Fixes: a526c85c2236 ("xfs: move xfs_file_iomap_begin_delay around") # goes further back than this
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/xfs_iomap.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Dave Chinner Nov. 29, 2022, 6:31 a.m. UTC | #1
On Sun, Nov 27, 2022 at 10:36:29AM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> I've been running near-continuous integration testing of online fsck,
> and I've noticed that once a day, one of the ARM VMs will fail the test
> with out of order records in the data fork.
> 
> xfs/804 races fsstress with online scrub (aka scan but do not change
> anything), so I think this might be a bug in the core xfs code.  This
> also only seems to trigger if one runs the test for more than ~6 minutes
> via TIME_FACTOR=13 or something.
> https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/tree/tests/xfs/804?h=djwong-wtf
.....
> So.  Fix this by moving the dqattach_locked call up, and add a comment
> about how we must attach the dquots *before* sampling the data/cow fork
> contents.
> 
> Fixes: a526c85c2236 ("xfs: move xfs_file_iomap_begin_delay around") # goes further back than this
> Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> ---
>  fs/xfs/xfs_iomap.c |   12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 1bdd7afc1010..d903f0586490 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -984,6 +984,14 @@ xfs_buffered_write_iomap_begin(
>  	if (error)
>  		goto out_unlock;
>  
> +	/*
> +	 * Attach dquots before we access the data/cow fork mappings, because
> +	 * this function can cycle the ILOCK.
> +	 */
> +	error = xfs_qm_dqattach_locked(ip, false);
> +	if (error)
> +		goto out_unlock;
> +
>  	/*
>  	 * Search the data fork first to look up our source mapping.  We
>  	 * always need the data fork map, as we have to return it to the
> @@ -1071,10 +1079,6 @@ xfs_buffered_write_iomap_begin(
>  			allocfork = XFS_COW_FORK;
>  	}
>  
> -	error = xfs_qm_dqattach_locked(ip, false);
> -	if (error)
> -		goto out_unlock;
> -
>  	if (eof && offset + count > XFS_ISIZE(ip)) {
>  		/*
>  		 * Determine the initial size of the preallocation.
> 

Why not attach the dquots before we call xfs_ilock_for_iomap()?
That way we can just call xfs_qm_dqattach(ip) and just return
on failure immediately. That's exactly what we do in the
xfs_iomap_write_direct() path, and it avoids the need to mention
anything about lock cycling because we just don't care
about cycling the ILOCK to read in or allocate dquots before we
start the real work that needs to be done...
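
i.e. something like this at the very top of
xfs_buffered_write_iomap_begin(), before we take any locks (a sketch,
untested):

	error = xfs_qm_dqattach(ip);
	if (error)
		return error;

	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
	if (error)
		return error;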

Hmmmmm - this means there's a potential problem with IOCB_NOWAIT
here - if the dquots are not in memory, we're going to drop and then
retake the ILOCK_EXCL without trylocks, potentially blocking a task
that should not get blocked. That's a separate problem, though, and
we probably need to plumb NOWAIT through to the dquot lookup cache
miss case to solve that.
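
i.e. hypothetically, at the cache miss site in the path above (no such
nowait plumbing exists there today, and the flag name is invented):

	/* fail fast rather than cycling the ILOCK without trylocks */
	if (nowait)
		return -EAGAIN;

	xfs_iunlock(ip, XFS_ILOCK_EXCL);
	error = xfs_qm_dqread(mp, id, type, can_alloc, &dqp);
	xfs_ilock(ip, XFS_ILOCK_EXCL);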

Cheers,

Dave.
Darrick J. Wong Nov. 29, 2022, 6:50 a.m. UTC | #2
On Tue, Nov 29, 2022 at 05:31:04PM +1100, Dave Chinner wrote:
> On Sun, Nov 27, 2022 at 10:36:29AM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> > 
> > I've been running near-continuous integration testing of online fsck,
> > and I've noticed that once a day, one of the ARM VMs will fail the test
> > with out of order records in the data fork.
> > 
> > xfs/804 races fsstress with online scrub (aka scan but do not change
> > anything), so I think this might be a bug in the core xfs code.  This
> > also only seems to trigger if one runs the test for more than ~6 minutes
> > via TIME_FACTOR=13 or something.
> > https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/tree/tests/xfs/804?h=djwong-wtf
> .....
> > So.  Fix this by moving the dqattach_locked call up, and add a comment
> > about how we must attach the dquots *before* sampling the data/cow fork
> > contents.
.....
> 
> Why not attach the dquots before we call xfs_ilock_for_iomap()?

I wanted to minimize the number of xfs_ilock calls -- under the scheme
you outline, xfs_qm_dqattach will lock it once; a dquot cache miss
will drop and retake it; and then xfs_ilock_for_iomap would take it yet
again.  That's one more ilock song-and-dance than this patch does...

> That way we can just call xfs_qm_dqattach(ip) and just return
> on failure immediately. That's exactly what we do in the
> xfs_iomap_write_direct() path, and it avoids the need to mention
> anything about lock cycling because we just don't care
> about cycling the ILOCK to read in or allocate dquots before we
> start the real work that needs to be done...

...but I guess it's cleaner once you start assuming that dqattach has
grown its own NOWAIT flag.  I'd sorta prefer to commit this corruption
fix as it is and rearrange dqget with NOWAIT as a separate series since
Linus has already warned us[1] to get things done sooner than later.

[1] https://lore.kernel.org/lkml/CAHk-=wgUZwX8Sbb8Zvm7FxWVfX6CGuE7x+E16VKoqL7Ok9vv7g@mail.gmail.com/

(OTOH it's already 6pm your time so I may very well be done with all
the quota nowait changes before you wake up :P)

> Hmmmmm - this means there's a potential problem with IOCB_NOWAIT
> here - if the dquots are not in memory, we're going to drop and then
> retake the ILOCK_EXCL without trylocks, potentially blocking a task
> that should not get blocked. That's a separate problem, though, and
> we probably need to plumb NOWAIT through to the dquot lookup cache
> miss case to solve that.

It wouldn't be that hard to turn that second parameter into the usual
uint flags argument, but I agree that's a separate patch.
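
Something like this, say (names invented on the spot, nothing like this
exists yet):

	/* hypothetical flags replacing the doalloc bool */
	#define XFS_DQATTACH_ALLOC	(1u << 0)	/* allocate dquots if missing */
	#define XFS_DQATTACH_NOWAIT	(1u << 1)	/* -EAGAIN instead of cycling ILOCK */

	int xfs_qm_dqattach_locked(struct xfs_inode *ip, unsigned int flags);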

How much you wanna bet the FB people have never turned on quota and
hence have not yet played whackanowait with that subsystem?

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner Nov. 29, 2022, 8:04 a.m. UTC | #3
On Mon, Nov 28, 2022 at 10:50:40PM -0800, Darrick J. Wong wrote:
> On Tue, Nov 29, 2022 at 05:31:04PM +1100, Dave Chinner wrote:
> > On Sun, Nov 27, 2022 at 10:36:29AM -0800, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@kernel.org>
> > > 
> > > I've been running near-continuous integration testing of online fsck,
> > > and I've noticed that once a day, one of the ARM VMs will fail the test
> > > with out of order records in the data fork.
> > > 
> > > xfs/804 races fsstress with online scrub (aka scan but do not change
> > > anything), so I think this might be a bug in the core xfs code.  This
> > > also only seems to trigger if one runs the test for more than ~6 minutes
> > > via TIME_FACTOR=13 or something.
> > > https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/tree/tests/xfs/804?h=djwong-wtf
> > .....
> > > So.  Fix this by moving the dqattach_locked call up, and add a comment
> > > about how we must attach the dquots *before* sampling the data/cow fork
> > > contents.
.....
> > 
> > Why not attach the dquots before we call xfs_ilock_for_iomap()?
> 
> I wanted to minimize the number of xfs_ilock calls -- under the scheme
> you outline, xfs_qm_dqattach will lock it once; a dquot cache miss
> will drop and retake it; and then xfs_ilock_for_iomap would take it yet
> again.  That's one more ilock song-and-dance than this patch does...

True, but we don't have an extra lock cycle if the dquots are
already attached to the inode - xfs_qm_dqattach() checks for
attached inodes before it takes the ILOCK to attach them. Hence if
we are doing lots of small writes to a file, we only take this extra
lock cycle for the first delalloc reservation that we make, not
every single one....
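
From memory, the whole function is just that early out wrapped around
the lock cycle (paraphrased from xfs_qm.c; check the tree for the exact
predicate):

int
xfs_qm_dqattach(
	struct xfs_inode	*ip)
{
	int			error;

	/* no ILOCK is taken at all if the dquots are already attached */
	if (!xfs_qm_need_dqattach(ip))
		return 0;

	xfs_ilock(ip, XFS_ILOCK_EXCL);
	error = xfs_qm_dqattach_locked(ip, false);
	xfs_iunlock(ip, XFS_ILOCK_EXCL);
	return error;
}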

We have to do it this way for anything that runs an actual
transaction (like the direct IO write path we take if an extent size
hint is set) as we can't cycle the ILOCK within a transaction
context, so the code is already optimised for the "dquots already
attached" case....

> > That way we can just call xfs_qm_dqattach(ip) and just return
> > on failure immediately. That's exactly what we do in the
> > xfs_iomap_write_direct() path, and it avoids the need to mention
> > anything about lock cycling because we just don't care
> > about cycling the ILOCK to read in or allocate dquots before we
> > start the real work that needs to be done...
> 
> ...but I guess it's cleaner once you start assuming that dqattach has
> grown its own NOWAIT flag.  I'd sorta prefer to commit this corruption
> fix as it is and rearrange dqget with NOWAIT as a separate series since
> Linus has already warned us[1] to get things done sooner than later.
> 
> [1] https://lore.kernel.org/lkml/CAHk-=wgUZwX8Sbb8Zvm7FxWVfX6CGuE7x+E16VKoqL7Ok9vv7g@mail.gmail.com/

<shrug>

If that's your concern, then

Reviewed-by: Dave Chinner <dchinner@redhat.com>

However, as maintainer I was never concerned about being "too late
in the cycle". I'd just push it into the for next tree with a stable
tag and when it gets merged in a couple of weeks the stable
maintainers should notice it and backport it appropriately
automatically....

For distro backports, merging into the XFS tree is good enough to be
considered upstream as it's pretty much guaranteed to end up in the
mainline tree once it's been merged by the maintainer....

> (OTOH it's already 6pm your time so I may very well be done with all
> the quota nowait changes before you wake up :P)

NOWAIT changes are definitely next cycle stuff :)

> > Hmmmmm - this means there's a potential problem with IOCB_NOWAIT
> > here - if the dquots are not in memory, we're going to drop and then
> > retake the ILOCK_EXCL without trylocks, potentially blocking a task
> > that should not get blocked. That's a separate problem, though, and
> > we probably need to plumb NOWAIT through to the dquot lookup cache
> > miss case to solve that.
> 
> It wouldn't be that hard to turn that second parameter into the usual
> uint flags argument, but I agree that's a separate patch.

*nod*

> How much you wanna bet the FB people have never turned on quota and
> hence have not yet played whackanowait with that subsystem?

No bet, we both know the odds. :/

Indeed, set an extent size hint on a file and then run io_uring
async buffered writes and watch all the massive long tail latencies
that occur on the transaction reservations and btree block IO and
locking in the allocation path....

Cheers,

Dave.
Darrick J. Wong Nov. 29, 2022, 9:03 p.m. UTC | #4
On Tue, Nov 29, 2022 at 07:04:50PM +1100, Dave Chinner wrote:
> On Mon, Nov 28, 2022 at 10:50:40PM -0800, Darrick J. Wong wrote:
> > On Tue, Nov 29, 2022 at 05:31:04PM +1100, Dave Chinner wrote:
> > > On Sun, Nov 27, 2022 at 10:36:29AM -0800, Darrick J. Wong wrote:
> > > > From: Darrick J. Wong <djwong@kernel.org>
> > > > 
> > > > I've been running near-continuous integration testing of online fsck,
> > > > and I've noticed that once a day, one of the ARM VMs will fail the test
> > > > with out of order records in the data fork.
> > > > 
> > > > xfs/804 races fsstress with online scrub (aka scan but do not change
> > > > anything), so I think this might be a bug in the core xfs code.  This
> > > > also only seems to trigger if one runs the test for more than ~6 minutes
> > > > via TIME_FACTOR=13 or something.
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfstests-dev.git/tree/tests/xfs/804?h=djwong-wtf
> > > .....
> > > > So.  Fix this by moving the dqattach_locked call up, and add a comment
> > > > about how we must attach the dquots *before* sampling the data/cow fork
> > > > contents.
.....
> > > 
> > > Why not attach the dquots before we call xfs_ilock_for_iomap()?
> > 
> > I wanted to minimize the number of xfs_ilock calls -- under the scheme
> > you outline, xfs_qm_dqattach will lock it once; a dquot cache miss
> > will drop and retake it; and then xfs_ilock_for_iomap would take it yet
> > again.  That's one more ilock song-and-dance than this patch does...
> 
> True, but we don't have an extra lock cycle if the dquots are
> already attached to the inode - xfs_qm_dqattach() checks for
> attached inodes before it takes the ILOCK to attach them. Hence if
> we are doing lots of small writes to a file, we only take this extra
> lock cycle for the first delalloc reservation that we make, not
> every single one....
> 
> We have to do it this way for anything that runs an actual
> transaction (like the direct IO write path we take if an extent size
> hint is set) as we can't cycle the ILOCK within a transaction
> context, so the code is already optimised for the "dquots already
> attached" case....

<nod> In the end, I decided to rewrite the patch to call xfs_qm_dqattach at
the start of xfs_buffered_write_iomap_begin.  I'll send that shortly.

> > > That way we can just call xfs_qm_dqattach(ip) and just return
> > > on failure immediately. That's exactly what we do in the
> > > xfs_iomap_write_direct() path, and it avoids the need to mention
> > > anything about lock cycling because we just don't care
> > > about cycling the ILOCK to read in or allocate dquots before we
> > > start the real work that needs to be done...
> > 
> > ...but I guess it's cleaner once you start assuming that dqattach has
> > grown its own NOWAIT flag.  I'd sorta prefer to commit this corruption
> > fix as it is and rearrange dqget with NOWAIT as a separate series since
> > Linus has already warned us[1] to get things done sooner than later.
> > 
> > [1] https://lore.kernel.org/lkml/CAHk-=wgUZwX8Sbb8Zvm7FxWVfX6CGuE7x+E16VKoqL7Ok9vv7g@mail.gmail.com/
> 
> <shrug>
> 
> If that's your concern, then
> 
> Reviewed-by: Dave Chinner <dchinner@redhat.com>

Thanks! ;)

> However, as maintainer I was never concerned about being "too late
> in the cycle". I'd just push it into the for next tree with a stable
> tag and when it gets merged in a couple of weeks the stable
> maintainers should notice it and backport it appropriately
> automatically....

<nod> Normally I wouldn't care about timing since it's a bugfix, but I
kinda want to get all these sharp ends wrapped up, to minimize the
number of fixes that we still have to work on for -rc1+ in January.

> For distro backports, merging into the XFS tree is good enough to be
> considered upstream as it's pretty much guaranteed to end up in the
> mainline tree once it's been merged by the maintainer....
> 
> > (OTOH it's already 6pm your time so I may very well be done with all
> > the quota nowait changes before you wake up :P)
> 
> NOWAIT changes are definitely next cycle stuff :)
> 
> > > Hmmmmm - this means there's a potential problem with IOCB_NOWAIT
> > > here - if the dquots are not in memory, we're going to drop and then
> > > retake the ILOCK_EXCL without trylocks, potentially blocking a task
> > > that should not get blocked. That's a separate problem, though, and
> > > we probably need to plumb NOWAIT through to the dquot lookup cache
> > > miss case to solve that.
> > 
> > It wouldn't be that hard to turn that second parameter into the usual
> > uint flags argument, but I agree that's a separate patch.
> 
> *nod*
> 
> > How much you wanna bet the FB people have never turned on quota and
> > hence have not yet played whackanowait with that subsystem?
> 
> No bet, we both know the odds. :/
> 
> Indeed, set an extent size hint on a file and then run io_uring
> async buffered writes and watch all the massive long tail latencies
> that occur on the transaction reservations and btree block IO and
> locking in the allocation path....

Granted, I wonder what would

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

Patch

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 1bdd7afc1010..d903f0586490 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -984,6 +984,14 @@ xfs_buffered_write_iomap_begin(
 	if (error)
 		goto out_unlock;
 
+	/*
+	 * Attach dquots before we access the data/cow fork mappings, because
+	 * this function can cycle the ILOCK.
+	 */
+	error = xfs_qm_dqattach_locked(ip, false);
+	if (error)
+		goto out_unlock;
+
 	/*
 	 * Search the data fork first to look up our source mapping.  We
 	 * always need the data fork map, as we have to return it to the
@@ -1071,10 +1079,6 @@ xfs_buffered_write_iomap_begin(
 			allocfork = XFS_COW_FORK;
 	}
 
-	error = xfs_qm_dqattach_locked(ip, false);
-	if (error)
-		goto out_unlock;
-
 	if (eof && offset + count > XFS_ISIZE(ip)) {
 		/*
 		 * Determine the initial size of the preallocation.