
[4/5] xfs: drop inactive dquots before inactivating inodes

Message ID: 162250087317.490412.346108244268292896.stgit@locust (mailing list archive)
State: New, archived
Series: xfs: clean up quotaoff inode walks

Commit Message

Darrick J. Wong May 31, 2021, 10:41 p.m. UTC
From: Darrick J. Wong <djwong@kernel.org>

During quotaoff, the incore inode scan to detach dquots from inodes
won't touch inodes that have lost their VFS state but haven't yet been
queued for reclaim.  This isn't strictly a problem because we drop the
dquots at the end of inactivation, but if we detect this situation
before starting inactivation, we can drop the inactive dquots early to
avoid delaying quotaoff further.

Signed-off-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/xfs_super.c |   32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)
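
For illustration, here is the early-release logic the commit message describes, pulled out of the hunk below into a standalone helper.  The helper name xfs_dqrele_inactive_types() is hypothetical; the posted patch open-codes these checks directly in xfs_fs_destroy_inode().

/*
 * Sketch only: drop the dquot reference for any quota type that has
 * been switched off.  xfs_qm_dqrele() is a no-op for a NULL dquot
 * pointer, so each branch simply releases and clears the field.
 */
static void
xfs_dqrele_inactive_types(
	struct xfs_inode	*ip)
{
	struct xfs_mount	*mp = ip->i_mount;

	if (!XFS_IS_UQUOTA_ON(mp)) {
		xfs_qm_dqrele(ip->i_udquot);
		ip->i_udquot = NULL;
	}
	if (!XFS_IS_GQUOTA_ON(mp)) {
		xfs_qm_dqrele(ip->i_gdquot);
		ip->i_gdquot = NULL;
	}
	if (!XFS_IS_PQUOTA_ON(mp)) {
		xfs_qm_dqrele(ip->i_pdquot);
		ip->i_pdquot = NULL;
	}
}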

Comments

Dave Chinner June 1, 2021, 12:35 a.m. UTC | #1
On Mon, May 31, 2021 at 03:41:13PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> During quotaoff, the incore inode scan to detach dquots from inodes
> won't touch inodes that have lost their VFS state but haven't yet been
> queued for reclaim.  This isn't strictly a problem because we drop the
> dquots at the end of inactivation, but if we detect this situation
> before starting inactivation, we can drop the inactive dquots early to
> avoid delaying quotaoff further.
> 
> Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> ---
>  fs/xfs/xfs_super.c |   32 ++++++++++++++++++++++++++++----
>  1 file changed, 28 insertions(+), 4 deletions(-)
> 
> 
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index a2dab05332ac..79f1cd1a0221 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -637,22 +637,46 @@ xfs_fs_destroy_inode(
>  	struct inode		*inode)
>  {
>  	struct xfs_inode	*ip = XFS_I(inode);
> +	struct xfs_mount	*mp = ip->i_mount;
>  
>  	trace_xfs_destroy_inode(ip);
>  
>  	ASSERT(!rwsem_is_locked(&inode->i_rwsem));
> -	XFS_STATS_INC(ip->i_mount, vn_rele);
> -	XFS_STATS_INC(ip->i_mount, vn_remove);
> +	XFS_STATS_INC(mp, vn_rele);
> +	XFS_STATS_INC(mp, vn_remove);
> +
> +	/*
> +	 * If a quota type is turned off but we still have a dquot attached to
> +	 * the inode, detach it before processing this inode to avoid delaying
> +	 * quotaoff for longer than is necessary.
> +	 *
> +	 * The inode has no VFS state and hasn't been tagged for any kind of
> +	 * reclamation, which means that iget, quotaoff, blockgc, and reclaim
> +	 * will not touch it.  It is therefore safe to do this locklessly
> +	 * because we have the only reference here.
> +	 */
> +	if (!XFS_IS_UQUOTA_ON(mp)) {
> +		xfs_qm_dqrele(ip->i_udquot);
> +		ip->i_udquot = NULL;
> +	}
> +	if (!XFS_IS_GQUOTA_ON(mp)) {
> +		xfs_qm_dqrele(ip->i_gdquot);
> +		ip->i_gdquot = NULL;
> +	}
> +	if (!XFS_IS_PQUOTA_ON(mp)) {
> +		xfs_qm_dqrele(ip->i_pdquot);
> +		ip->i_pdquot = NULL;
> +	}
>  
>  	xfs_inactive(ip);

Shouldn't we just make xfs_inactive() unconditionally detach dquots,
rather than only in the case it handles now, where it has attached
dquots because it had to make modifications? For inodes that don't
require any inactivation work, we get the same thing, and for those
that do, running a few extra transactions before dropping the dquots
isn't going to make a huge difference to the quotaoff latency....

Cheers,

Dave.
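
For illustration, a rough sketch of the alternative Dave suggests: have xfs_inactive() release dquot references unconditionally on its way out, rather than only on the paths that attached dquots to make modifications.  This assumes xfs_qm_dqdetach() is still the helper that drops whatever dquots are attached (it returns immediately if none are); it is a sketch of the idea, not the earlier patch in this series that Darrick points to below.

void
xfs_inactive(
	struct xfs_inode	*ip)
{
	/* ... existing truncate/symlink/ifree work, unchanged ... */

	/*
	 * Drop any attached dquots before returning so that a pending
	 * quotaoff never has to wait for this inode to finish
	 * inactivation.
	 */
	xfs_qm_dqdetach(ip);
}
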
Darrick J. Wong June 1, 2021, 7:53 p.m. UTC | #2
On Tue, Jun 01, 2021 at 10:35:06AM +1000, Dave Chinner wrote:
> Shouldn't we just make xfs_inactive() unconditionally detach dquots,
> rather than only in the case it handles now, where it has attached
> dquots because it had to make modifications? For inodes that don't
> require any inactivation work, we get the same thing, and for those
> that do, running a few extra transactions before dropping the dquots
> isn't going to make a huge difference to the quotaoff latency....

Actually... the previous patch does exactly that.  I'll drop this patch.

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

Patch

diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index a2dab05332ac..79f1cd1a0221 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -637,22 +637,46 @@ xfs_fs_destroy_inode(
 	struct inode		*inode)
 {
 	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_mount	*mp = ip->i_mount;
 
 	trace_xfs_destroy_inode(ip);
 
 	ASSERT(!rwsem_is_locked(&inode->i_rwsem));
-	XFS_STATS_INC(ip->i_mount, vn_rele);
-	XFS_STATS_INC(ip->i_mount, vn_remove);
+	XFS_STATS_INC(mp, vn_rele);
+	XFS_STATS_INC(mp, vn_remove);
+
+	/*
+	 * If a quota type is turned off but we still have a dquot attached to
+	 * the inode, detach it before processing this inode to avoid delaying
+	 * quotaoff for longer than is necessary.
+	 *
+	 * The inode has no VFS state and hasn't been tagged for any kind of
+	 * reclamation, which means that iget, quotaoff, blockgc, and reclaim
+	 * will not touch it.  It is therefore safe to do this locklessly
+	 * because we have the only reference here.
+	 */
+	if (!XFS_IS_UQUOTA_ON(mp)) {
+		xfs_qm_dqrele(ip->i_udquot);
+		ip->i_udquot = NULL;
+	}
+	if (!XFS_IS_GQUOTA_ON(mp)) {
+		xfs_qm_dqrele(ip->i_gdquot);
+		ip->i_gdquot = NULL;
+	}
+	if (!XFS_IS_PQUOTA_ON(mp)) {
+		xfs_qm_dqrele(ip->i_pdquot);
+		ip->i_pdquot = NULL;
+	}
 
 	xfs_inactive(ip);
 
-	if (!XFS_FORCED_SHUTDOWN(ip->i_mount) && ip->i_delayed_blks) {
+	if (!XFS_FORCED_SHUTDOWN(mp) && ip->i_delayed_blks) {
 		xfs_check_delalloc(ip, XFS_DATA_FORK);
 		xfs_check_delalloc(ip, XFS_COW_FORK);
 		ASSERT(0);
 	}
 
-	XFS_STATS_INC(ip->i_mount, vn_reclaim);
+	XFS_STATS_INC(mp, vn_reclaim);
 
 	/*
 	 * We should never get here with one of the reclaim flags already set.