[18/24] xfs: reduce kswapd blocking on inode locking.

Message ID 20190801021752.4986-19-david@fromorbit.com (mailing list archive)
State Superseded
Series mm, xfs: non-blocking inode reclaim

Commit Message

Dave Chinner Aug. 1, 2019, 2:17 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

When doing async inode reclaiming, we grab a batch of inodes that we
are likely able to reclaim and ignore those that are already
flushing. However, when we actually go to reclaim them, the first
thing we do is lock the inode. If we are racing with something
else reclaiming the inode or flushing it because it is dirty,
we block on the inode lock. Hence we can still block kswapd here.

Further, if we flush an inode, we also cluster all the other dirty
inodes in that cluster into the same IO, flush locking them all.
However, if the workload is operating on sequential inodes (e.g.
created by a tarball extraction), most of these inodes will be
sequential in the cache, and so likely to be in the same batch
we've already grabbed for reclaim scanning.

As a result, it is common for all the inodes in the batch to be
dirty, and for the first inode flushed to also flush all the
inodes in the reclaim batch, in which case they are all now
flush locked and we do not want to block on them.

Hence, for async reclaim (SYNC_TRYLOCK) make sure we always use
trylock semantics and abort reclaim of an inode as quickly as we can
without blocking kswapd.
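
Condensed from the diff below, the new async path is simply:

	if (sync_mode & SYNC_TRYLOCK) {
		/* Someone else is reclaiming or flushing it - skip. */
		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
			goto out;
		/* Already under flush (e.g. by cluster writeback) - skip. */
		if (!xfs_iflock_nowait(ip))
			goto out_unlock;
	}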

Found via tracing, which showed big batches of repeated lock/unlock
runs on inodes that we had just flushed by write clustering during
reclaim.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_icache.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

Comments

Brian Foster Aug. 6, 2019, 6:22 p.m. UTC | #1
On Thu, Aug 01, 2019 at 12:17:46PM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> When doing async inode reclaiming, we grab a batch of inodes that we
> are likely able to reclaim and ignore those that are already
> flushing. However, when we actually go to reclaim them, the first
> thing we do is lock the inode. If we are racing with something
> else reclaiming the inode or flushing it because it is dirty,
> we block on the inode lock. Hence we can still block kswapd here.
> 
> Further, if we flush an inode, we also cluster all the other dirty
> inodes in that cluster into the same IO, flush locking them all.
> However, if the workload is operating on sequential inodes (e.g.
> created by a tarball extraction), most of these inodes will be
> sequential in the cache, and so likely to be in the same batch
> we've already grabbed for reclaim scanning.
> 
> As a result, it is common for all the inodes in the batch to be
> dirty, and for the first inode flushed to also flush all the
> inodes in the reclaim batch, in which case they are all now
> flush locked and we do not want to block on them.
> 

Hmm... I think I'm missing something with this description. For dirty
inodes that are flushed in a cluster via reclaim as described, aren't we
already blocking on all of the flush locks by virtue of the synchronous
I/O associated with the flush of the first dirty inode in that
particular cluster?

Brian

> Hence, for async reclaim (SYNC_TRYLOCK) make sure we always use
> trylock semantics and abort reclaim of an inode as quickly as we can
> without blocking kswapd.
> 
> Found via tracing, which showed big batches of repeated lock/unlock
> runs on inodes that we had just flushed by write clustering during
> reclaim.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 2fa2f8dcf86b..e6b9030875b9 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1104,11 +1104,23 @@ xfs_reclaim_inode(
>  
>  restart:
>  	error = 0;
> -	xfs_ilock(ip, XFS_ILOCK_EXCL);
> -	if (!xfs_iflock_nowait(ip)) {
> -		if (!(sync_mode & SYNC_WAIT))
> +	/*
> +	 * Don't try to flush the inode if another inode in this cluster has
> +	 * already flushed it after we did the initial checks in
> +	 * xfs_reclaim_inode_grab().
> +	 */
> +	if (sync_mode & SYNC_TRYLOCK) {
> +		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
>  			goto out;
> -		xfs_iflock(ip);
> +		if (!xfs_iflock_nowait(ip))
> +			goto out_unlock;
> +	} else {
> +		xfs_ilock(ip, XFS_ILOCK_EXCL);
> +		if (!xfs_iflock_nowait(ip)) {
> +			if (!(sync_mode & SYNC_WAIT))
> +				goto out_unlock;
> +			xfs_iflock(ip);
> +		}
>  	}
>  
>  	if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
> @@ -1215,9 +1227,10 @@ xfs_reclaim_inode(
>  
>  out_ifunlock:
>  	xfs_ifunlock(ip);
> +out_unlock:
> +	xfs_iunlock(ip, XFS_ILOCK_EXCL);
>  out:
>  	xfs_iflags_clear(ip, XFS_IRECLAIM);
> -	xfs_iunlock(ip, XFS_ILOCK_EXCL);
>  	/*
>  	 * We could return -EAGAIN here to make reclaim rescan the inode tree in
>  	 * a short while. However, this just burns CPU time scanning the tree
> -- 
> 2.22.0
>
Dave Chinner Aug. 6, 2019, 9:33 p.m. UTC | #2
On Tue, Aug 06, 2019 at 02:22:13PM -0400, Brian Foster wrote:
> On Thu, Aug 01, 2019 at 12:17:46PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > When doing async inode reclaiming, we grab a batch of inodes that we
> > are likely able to reclaim and ignore those that are already
> > flushing. However, when we actually go to reclaim them, the first
> > thing we do is lock the inode. If we are racing with something
> > else reclaiming the inode or flushing it because it is dirty,
> > we block on the inode lock. Hence we can still block kswapd here.
> > 
> > Further, if we flush an inode, we also cluster all the other dirty
> > inodes in that cluster into the same IO, flush locking them all.
> > However, if the workload is operating on sequential inodes (e.g.
> > created by a tarball extraction), most of these inodes will be
> > sequential in the cache, and so likely to be in the same batch
> > we've already grabbed for reclaim scanning.
> > 
> > As a result, it is common for all the inodes in the batch to be
> > dirty, and for the first inode flushed to also flush all the
> > inodes in the reclaim batch, in which case they are all now
> > flush locked and we do not want to block on them.
> > 
> 
> Hmm... I think I'm missing something with this description. For dirty
> inodes that are flushed in a cluster via reclaim as described, aren't we
> already blocking on all of the flush locks by virtue of the synchronous
> I/O associated with the flush of the first dirty inode in that
> particular cluster?

Currently we end up issuing IO and waiting for it, so by the time we
get to the next inode in the cluster, it's already been cleaned and
unlocked.

However, as we go to non-blocking scanning, if we hit one
flush-locked inode in a batch, it's entirely likely that the rest of
the inodes in the batch are also flush locked, and so we should
always try to skip over them in non-blocking reclaim.

This is really just a stepping stone towards the way the LRU
isolation function works - it's entirely non-blocking and full
of lock order inversions, so everything has to run under try-lock
semantics. This essentially starts that restructuring, based on
the observation that sequential inodes are flushed in batches...
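
As a rough sketch of where this is heading - illustrative only, not
code from this patch, with batch/nr_found standing in for the state
built up by the grab stage - a fully non-blocking batch pass looks
something like:

	for (i = 0; i < nr_found; i++) {
		struct xfs_inode *ip = batch[i];

		if (!ip)
			continue;
		/* Racing reclaim or flush holds the ilock - skip. */
		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
			continue;
		/* Likely flushed by an earlier inode's cluster write. */
		if (!xfs_iflock_nowait(ip)) {
			xfs_iunlock(ip, XFS_ILOCK_EXCL);
			continue;
		}
		/* ... reclaim the now flush-locked inode ... */
	}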

Cheers,

Dave.
Brian Foster Aug. 7, 2019, 11:30 a.m. UTC | #3
On Wed, Aug 07, 2019 at 07:33:53AM +1000, Dave Chinner wrote:
> On Tue, Aug 06, 2019 at 02:22:13PM -0400, Brian Foster wrote:
> > On Thu, Aug 01, 2019 at 12:17:46PM +1000, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > When doing async inode reclaiming, we grab a batch of inodes that we
> > > are likely able to reclaim and ignore those that are already
> > > flushing. However, when we actually go to reclaim them, the first
> > > thing we do is lock the inode. If we are racing with something
> > > else reclaiming the inode or flushing it because it is dirty,
> > > we block on the inode lock. Hence we can still block kswapd here.
> > > 
> > > Further, if we flush an inode, we also cluster all the other dirty
> > > inodes in that cluster into the same IO, flush locking them all.
> > > However, if the workload is operating on sequential inodes (e.g.
> > > created by a tarball extraction), most of these inodes will be
> > > sequential in the cache, and so likely to be in the same batch
> > > we've already grabbed for reclaim scanning.
> > > 
> > > As a result, it is common for all the inodes in the batch to be
> > > dirty, and for the first inode flushed to also flush all the
> > > inodes in the reclaim batch, in which case they are all now
> > > flush locked and we do not want to block on them.
> > > 
> > 
> > Hmm... I think I'm missing something with this description. For dirty
> > inodes that are flushed in a cluster via reclaim as described, aren't we
> > already blocking on all of the flush locks by virtue of the synchronous
> > I/O associated with the flush of the first dirty inode in that
> > particular cluster?
> 
> Currently we end up issuing IO and waiting for it, so by the time we
> get to the next inode in the cluster, it's already been cleaned and
> unlocked.
> 

Right..

> However, as we go to non-blocking scanning, if we hit one
> flush-locked inode in a batch, it's entirely likely that the rest of
> the inodes in the batch are also flush locked, and so we should
> always try to skip over them in non-blocking reclaim.
> 

This makes more sense. Note that the description is confusing because
it assumes context that doesn't yet exist in the code (i.e., there is
no mention of the nonblocking mode) and so isn't clear to the reader.
If the purpose is preparation for a future change, please note that
clearly in the commit log.

Second (and not necessarily caused by this patch), the ireclaim flag
semantics are kind of a mess. As you've already noted, we currently
block on some locks even with SYNC_TRYLOCK, yet the cluster flushing
code has no concept of these flags (so we always trylock, never wait on
unpin, for some reason use the shared ilock vs. the exclusive ilock,
etc.). Further, with this patch TRYLOCK|WAIT basically means that if we
happen to get the lock, we flush and wait on I/O so we can free the
inode(s), but if somebody else has flushed the inode (we don't get the
flush lock) we decide not to wait on the I/O that might (or might not)
already be in progress. I find that a bit inconsistent. It also makes me
slightly concerned that we're (ab)using flag semantics for a bug fix
(waiting on inodes we've just flushed from the same task), but it looks
like this is all going to change quite a bit still so I'm not going to
worry too much about this mostly existing mess until I grok the bigger
picture changes... :P

Brian

> This is really just a stepping stone towards the way the LRU
> isolation function works - it's entirely non-blocking and full
> of lock order inversions, so everything has to run under try-lock
> semantics. This essentially starts that restructuring, based on
> the observation that sequential inodes are flushed in batches...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner Aug. 7, 2019, 11:16 p.m. UTC | #4
On Wed, Aug 07, 2019 at 07:30:09AM -0400, Brian Foster wrote:
> Second (and not necessarily caused by this patch), the ireclaim flag
> semantics are kind of a mess. As you've already noted, we currently
> block on some locks even with SYNC_TRYLOCK, yet the cluster flushing
> code has no concept of these flags (so we always trylock, never wait on
> unpin, for some reason use the shared ilock vs. the exclusive ilock,
> etc.). Further, with this patch TRYLOCK|WAIT basically means that if we
> happen to get the lock, we flush and wait on I/O so we can free the
> inode(s), but if somebody else has flushed the inode (we don't get the
> flush lock) we decide not to wait on the I/O that might (or might not)
> already be in progress. I find that a bit inconsistent. It also makes me
> slightly concerned that we're (ab)using flag semantics for a bug fix
> (waiting on inodes we've just flushed from the same task), but it looks
> like this is all going to change quite a bit still so I'm not going to
> worry too much about this mostly existing mess until I grok the bigger
> picture changes... :P

Yes, SYNC_TRYLOCK/SYNC_WAIT semantics are a mess, but they all go
away later in the patchset.  Non-blocking reclaim makes SYNC_TRYLOCK
go away because everything becomes try-lock based, and SYNC_WAIT goes
away because only the xfs_reclaim_inodes() function needs to wait
for reclaim completion and so that gets its own LRU walker
implementation and the mode parameter is removed.
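
Concretely - an illustrative sketch against the generic list_lru
walker API, not the actual isolation function from later in the
series, and i_lru is a hypothetical field here - the try-lock-only
callback has this shape:

	static enum lru_status
	xfs_reclaim_inode_isolate(
		struct list_head	*item,
		struct list_lru_one	*lru,
		spinlock_t		*lru_lock,
		void			*arg)
	{
		struct xfs_inode	*ip = container_of(item,
						struct xfs_inode, i_lru);

		/* Never block the walker - skip anything contended. */
		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
			return LRU_SKIP;
		if (!xfs_iflock_nowait(ip)) {
			xfs_iunlock(ip, XFS_ILOCK_EXCL);
			return LRU_SKIP;
		}

		/* Pull the inode off the LRU for reclaim. */
		list_lru_isolate(lru, item);
		return LRU_REMOVED;
	}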

Cheers,

Dave.

Patch

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 2fa2f8dcf86b..e6b9030875b9 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1104,11 +1104,23 @@ xfs_reclaim_inode(
 
 restart:
 	error = 0;
-	xfs_ilock(ip, XFS_ILOCK_EXCL);
-	if (!xfs_iflock_nowait(ip)) {
-		if (!(sync_mode & SYNC_WAIT))
+	/*
+	 * Don't try to flush the inode if another inode in this cluster has
+	 * already flushed it after we did the initial checks in
+	 * xfs_reclaim_inode_grab().
+	 */
+	if (sync_mode & SYNC_TRYLOCK) {
+		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
 			goto out;
-		xfs_iflock(ip);
+		if (!xfs_iflock_nowait(ip))
+			goto out_unlock;
+	} else {
+		xfs_ilock(ip, XFS_ILOCK_EXCL);
+		if (!xfs_iflock_nowait(ip)) {
+			if (!(sync_mode & SYNC_WAIT))
+				goto out_unlock;
+			xfs_iflock(ip);
+		}
 	}
 
 	if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
@@ -1215,9 +1227,10 @@ xfs_reclaim_inode(
 
 out_ifunlock:
 	xfs_ifunlock(ip);
+out_unlock:
+	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 out:
 	xfs_iflags_clear(ip, XFS_IRECLAIM);
-	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 	/*
 	 * We could return -EAGAIN here to make reclaim rescan the inode tree in
 	 * a short while. However, this just burns CPU time scanning the tree