[3/4] xfs: allow lazy removal of inodes from the inodegc queues

Message ID 20240319001707.3430251-4-david@fromorbit.com (mailing list archive)
State New
Series xfs: recycle inactive inodes immediately

Commit Message

Dave Chinner March 19, 2024, 12:15 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

To allow us to recycle inodes that are awaiting inactivation, we
need to enable lazy removal of inodes from the list. The list is a
lockless singly-linked variant, so we can't just remove inodes from
the list at will.

Instead, we can remove them lazily whenever inodegc runs, by having
the inodegc processing determine whether inactivation needs to be
done at processing time rather than at queuing time.

We've already modified the queuing code to only queue the inode if
it isn't already queued, so here all we need to do is modify the
queue processing to determine if inactivation needs to be done.
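
For reference, the queuing side check mentioned above boils down to
the pattern sketched below. This is just an illustration of the idea
behind xfs_inodegc_queue(), not the exact code: the NEED_INACTIVE
flag is always set, but the list addition is skipped when the inode
is already linked on a gc list.

	spin_lock(&ip->i_flags_lock);
	ip->i_flags |= XFS_NEED_INACTIVE;
	if (llist_on_list(&ip->i_gclist)) {
		/* Already queued - setting the flag is enough. */
		spin_unlock(&ip->i_flags_lock);
		return;
	}
	spin_unlock(&ip->i_flags_lock);
	/* ... add ip->i_gclist to the per-cpu inodegc list, kick worker ... */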

Hence we introduce the behaviour that we can cancel inactivation
processing simply by clearing the XFS_NEED_INACTIVE flag on the
inode. Processing will check this flag and skip inactivation
processing if it is not set. The flag is always set at queuing time,
regardless of whether the inode is already on the queues or not.
Hence if it is not set at processing time, it means that something
has cancelled the inactivation and we should just remove it from the
list and then leave it alone.
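
Concretely, the recycling code this series is building towards only
needs something like the minimal sketch below to cancel pending
inactivation; the gclist linkage is left for the worker to tear down:

	spin_lock(&ip->i_flags_lock);
	ip->i_flags &= ~XFS_NEED_INACTIVE;
	spin_unlock(&ip->i_flags_lock);
	/*
	 * The inode stays on the gc list; the inodegc worker will see
	 * the clear flag, unlink the inode and skip inactivation.
	 */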

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/xfs_icache.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

Patch

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 559b8f71dc91..7359753b892b 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1882,13 +1882,21 @@  xfs_inodegc_worker(
 		int	error;
 
 		/*
-		 * Switch state to inactivating and remove the inode from the
-		 * gclist. This allows the use of llist_on_list() in the queuing
-		 * code to determine if the inode is already on an inodegc
-		 * queue.
+		 * Remove the inode from the gclist and determine if it needs to
+		 * be processed. The XFS_NEED_INACTIVE flag gets cleared if the
+		 * inode is reactivated after queuing, but the list removal is
+		 * lazy and left up to us.
+		 *
+		 * We always remove the inode from the list to allow the use of
+		 * llist_on_list() in the queuing code to determine if the inode
+		 * is already on an inodegc queue.
 		 */
 		spin_lock(&ip->i_flags_lock);
 		init_llist_node(&ip->i_gclist);
+		if (!(ip->i_flags & XFS_NEED_INACTIVE)) {
+			spin_unlock(&ip->i_flags_lock);
+			continue;
+		}
 		ip->i_flags |= XFS_INACTIVATING;
 		ip->i_flags &= ~XFS_NEED_INACTIVE;
 		spin_unlock(&ip->i_flags_lock);
@@ -2160,7 +2168,6 @@  xfs_inode_mark_reclaimable(
 	struct xfs_inode	*ip)
 {
 	struct xfs_mount	*mp = ip->i_mount;
-	bool			need_inactive;
 
 	XFS_STATS_INC(mp, vn_reclaim);
 
@@ -2169,8 +2176,23 @@  xfs_inode_mark_reclaimable(
 	 */
 	ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_ALL_IRECLAIM_FLAGS));
 
-	need_inactive = xfs_inode_needs_inactive(ip);
-	if (need_inactive) {
+	/*
+	 * If the inode is already queued for inactivation because it was
+	 * re-activated and is now being reclaimed again (e.g. fs has been
+	 * frozen for a while), we must ensure that inodegc runs and removes
+	 * the inode from the inodegc queue before the inode moves to the
+	 * reclaimable state and gets freed.
+	 *
+	 * We don't care about races here. We can't race with a list addition
+	 * because only one thread can be evicting the inode from the VFS cache,
+	 * hence false negatives can't occur and we only need to worry about
+	 * list removal races.  If we get a false positive from a list removal
+	 * race, then the inode goes through the inactive list whether it needs
+	 * to or not. This will slow down reclaim of this inode slightly but
+	 * should have no other side effects.
+	 */
+	if (llist_on_list(&ip->i_gclist) ||
+	    xfs_inode_needs_inactive(ip)) {
 		xfs_inodegc_queue(ip);
 		return;
 	}