[v2,4/6] fs/dcache: Avoid the try_lock loops in dentry_kill()

Message ID 20180222235025.28662-5-john.ogness@linutronix.de (mailing list archive)
State New, archived

Commit Message

John Ogness Feb. 22, 2018, 11:50 p.m. UTC
dentry_kill() holds dentry->d_lock and needs to acquire both
dentry->d_inode->i_lock and dentry->d_parent->d_lock. This cannot be
done with spin_lock() operations because it's the reverse of the
regular lock order. To avoid ABBA deadlocks it is done with two
trylock loops.
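
Roughly, the pattern in question looks like the following sketch. It is
simplified from the code this patch removes (see the diff below); the
retry "loop" closes in dput(), which calls dentry_kill() again:

	/*
	 * dentry->d_lock is already held; i_lock and the parent's
	 * d_lock rank above it in the lock order, so they can only
	 * be acquired with trylock here.
	 */
	if (inode && unlikely(!spin_trylock(&inode->i_lock)))
		goto failed;

	if (!IS_ROOT(dentry)) {
		parent = dentry->d_parent;
		if (unlikely(!spin_trylock(&parent->d_lock))) {
			if (inode)
				spin_unlock(&inode->i_lock);
			goto failed;
		}
	}

	__dentry_kill(dentry);
	return parent;

failed:
	spin_unlock(&dentry->d_lock);
	return dentry;	/* caller retries with the same dentry */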

Trylock loops are problematic in two scenarios:

  1) PREEMPT_RT converts spinlocks to 'sleeping' spinlocks, which are
     preemptible. As a consequence the i_lock holder can be preempted
     by a higher priority task. If that task executes the trylock loop
     it will do so forever and live lock.

  2) In virtual machines trylock loops are problematic as well. The
     VCPU on which the i_lock holder runs can be scheduled out and a
     task on a different VCPU can loop for a whole time slice. In the
     worst case this can lead to starvation. Commits 47be61845c77
     ("fs/dcache.c: avoid soft-lockup in dput()") and 046b961b45f9
     ("shrink_dentry_list(): take parent's d_lock earlier") address
     exactly those symptoms.

Avoid the trylock loops by using dentry_lock_inode() and lock_parent(),
which take the locks in the appropriate order. As both functions might
drop dentry->d_lock briefly, this requires rechecking the dentry
content, as it might have changed while the lock was dropped.
dentry_lock_inode() performs the checks internally, but lock_parent()
relies on the caller to perform them.
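
For illustration, the drop-and-retake ordering trick has roughly the
following shape. This is a simplified rendering of the lock_parent()
idea, not the exact fs/dcache.c code; the RCU protection of the parent
pointer and the IS_ROOT/negative-refcount checks are omitted:

	/*
	 * dentry->d_lock is held. Try to take the parent's lock out of
	 * order; if that fails, drop dentry->d_lock, take the parent's
	 * lock first (the correct order), retake dentry->d_lock nested,
	 * and recheck that the parent did not change in between.
	 */
	parent = dentry->d_parent;
	if (likely(spin_trylock(&parent->d_lock)))
		goto locked;
	spin_unlock(&dentry->d_lock);
again:
	parent = READ_ONCE(dentry->d_parent);
	spin_lock(&parent->d_lock);
	if (unlikely(parent != dentry->d_parent)) {
		/* parent changed while no lock was held; retry */
		spin_unlock(&parent->d_lock);
		goto again;
	}
	spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
locked:
	/* both dentry->d_lock and parent->d_lock are now held */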

Signed-off-by: John Ogness <john.ogness@linutronix.de>
---
 fs/dcache.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 54 insertions(+), 16 deletions(-)

Comments

Al Viro Feb. 23, 2018, 2:22 a.m. UTC | #1
On Fri, Feb 23, 2018 at 12:50:23AM +0100, John Ogness wrote:
>  static struct dentry *dentry_kill(struct dentry *dentry)
>  	__releases(dentry->d_lock)
>  {
> -	struct inode *inode = dentry->d_inode;
> -	struct dentry *parent = NULL;
> +	int saved_count = dentry->d_lockref.count;

	Umm...  How can that be not 1?  After all, fast_dput() should
never return false without ->d_lock being held *and* ->d_count being
equal to 1.

> +	/*
> +	 * d_inode might have changed if d_lock was temporarily
> +	 * dropped. If it changed it is necessary to start over
> +	 * because a wrong inode (or no inode) lock is held.
> +	 */

If it might have changed, we are fucked.

> +out_ref_changed:
> +	/*
> +	 * The refcount was incremented while dentry->d_lock was dropped.
> +	 * Just decrement the refcount, unlock, and tell the caller to
> +	 * stop the directory walk.
> +	 */
> +	if (!WARN_ON(dentry->d_lockref.count < 1))
> +		dentry->d_lockref.count--;
> +
>  	spin_unlock(&dentry->d_lock);
> -	return dentry; /* try again with same dentry */
> +
> +	return NULL;

No.  This is completely wrong.  If somebody else has found the sucker
while we dropped the lock and even got around to playing with refcount,
they might have done more than that.

In particular, they might have *dropped* their reference, after e.g.
picking it as our inode's alias and rehashed the fucker.  Making
our decision not to retain it no longer valid.  And your code will
not notice that.
Al Viro Feb. 23, 2018, 3:12 a.m. UTC | #2
On Fri, Feb 23, 2018 at 02:22:43AM +0000, Al Viro wrote:

> No.  This is completely wrong.  If somebody else has found the sucker
> while we dropped the lock and even got around to playing with refcount,
> they might have done more than that.
> 
> In particular, they might have *dropped* their reference, after e.g.
> picking it as our inode's alias and rehashed the fucker.  Making
> our decision not to retain it no longer valid.  And your code will
> not notice that.

PS: I really wonder if we should treat the failure to trylock ->i_lock
and parent's ->d_lock at that point (we are already off the fast path
here) as
	* drop all spinlocks we'd got
	* grab ->i_lock
	* grab ->d_lock
	* lock_parent()
	* act as if fast_dput() has returned 0, only remember to drop ->i_lock
and unlock parent before dropping ->d_lock if we decide to keep it.

IOW, add

static inline bool retain_dentry(struct dentry *dentry)
{
        WARN_ON(d_in_lookup(dentry));

        /* Unreachable? Get rid of it */
        if (unlikely(d_unhashed(dentry)))
                return false;

        if (unlikely(dentry->d_flags & DCACHE_DISCONNECTED))
                return false;

        if (unlikely(dentry->d_flags & DCACHE_OP_DELETE)) {
                if (dentry->d_op->d_delete(dentry))
                        return false;
        }

	dentry_lru_add(dentry);
	dentry->d_lockref.count--;
	return true;
}

then have dput() do
{
        if (unlikely(!dentry))
                return;
repeat:
        might_sleep();

        rcu_read_lock();
        if (likely(fast_dput(dentry))) {
                rcu_read_unlock();
                return;
        }

        /* Slow case: now with the dentry lock held */
        rcu_read_unlock();
	if (likely(retain_dentry(dentry))) {
		spin_unlock(&dentry->d_lock);
		return;
	}
	dentry = dentry_kill(dentry);
	if (dentry)
		goto repeat;
}

with dentry_kill() being pretty much as it is now, except that
it would be ending on

failed:
	spin_unlock(&dentry->d_lock);
	spin_lock(&inode->i_lock);
	spin_lock(&dentry->d_lock);
	parent = lock_parent(dentry);
	/* retain_dentry() needs ->count == 1 already checked)
	if (dentry->d_lockref.count == 1 && !retain_dentry(dentry)) {
		__dentry_kill(dentry);
		return parent;
	}
	/* we are keeping it, after all */
	if (inode)
		spin_unlock(&inode->i_lock);
	spin_unlock(&dentry->d_lock);
	if (parent)
		spin_unlock(&parent->d_lock);
	return NULL;
}
Al Viro Feb. 23, 2018, 3:16 a.m. UTC | #3
On Fri, Feb 23, 2018 at 03:12:14AM +0000, Al Viro wrote:

> 	/* retain_dentry() needs ->count == 1 already checked)

... obviously not even compile-tested ;-)
Al Viro Feb. 23, 2018, 5:46 a.m. UTC | #4
On Fri, Feb 23, 2018 at 03:12:14AM +0000, Al Viro wrote:

> failed:
> 	spin_unlock(&dentry->d_lock);
> 	spin_lock(&inode->i_lock);
> 	spin_lock(&dentry->d_lock);
> 	parent = lock_parent(dentry);

Hmm...  Negative dentry case obviously is trickier - not to mention oopsen,
it might have become positive under us.  Bugger...  OTOH, it's not much
trickier - with negative dentry we can only fail on trying to lock the
parent, in which case we should just check that it's still negative before
killing it off.  If it has gone positive on us, we'll just unlock the
parent and we are back to the normal "positive dentry, only ->d_lock held"
case.  At most one retry there - once it's positive, it stays positive.
So,

static struct dentry *dentry_kill(struct dentry *dentry)
        __releases(dentry->d_lock)
{
	struct inode *inode = dentry->d_inode;
	struct dentry *parent = NULL;

	if (inode && unlikely(!spin_trylock(&inode->i_lock)))
		goto no_locks;

	if (!IS_ROOT(dentry)) {
		parent = dentry->d_parent;
		if (unlikely(!spin_trylock(&parent->d_lock))) {
			if (inode) {
				spin_unlock(&inode->i_lock);
				goto no_locks;
			}
			goto need_parent;
		}
	}
kill_it:
	__dentry_kill(dentry);
	return parent;

no_locks:	/* positive, only ->d_lock held */
	spin_unlock(&dentry->d_lock);
	spin_lock(&inode->i_lock);
	spin_lock(&dentry->d_lock);
need_parent:
	parent = lock_parent(dentry);
	if (unlikely(dentry->d_lockref.count != 1 || retain_dentry(dentry))) {
		/* we are keeping it, after all */
		if (inode)
			spin_unlock(&inode->i_lock);
		spin_unlock(&dentry->d_lock);
		if (parent)
			spin_unlock(&parent->d_lock);
		return NULL;
	}
	/* it should die */
	if (inode)	/* was positive, ->d_inode unchanged, locks held */
		goto kill_it;
	inode = dentry->d_inode;	// READ_ONCE?
	if (!inode)	/* still negative, locks held */
		goto kill_it;
	/* negative became positive; it can't become negative again */
	if (parent)
		spin_unlock(&parent->d_lock);
	goto no_locks;	/* once */
}

Patch

diff --git a/fs/dcache.c b/fs/dcache.c
index bfdf1ff237f2..082361939b84 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -696,27 +696,67 @@  static bool dentry_lock_inode(struct dentry *dentry)
 static struct dentry *dentry_kill(struct dentry *dentry)
 	__releases(dentry->d_lock)
 {
-	struct inode *inode = dentry->d_inode;
-	struct dentry *parent = NULL;
+	int saved_count = dentry->d_lockref.count;
+	struct dentry *parent;
+	struct inode *inode;
 
-	if (inode && unlikely(!spin_trylock(&inode->i_lock)))
-		goto failed;
+again:
+	inode = dentry->d_inode;
 
-	if (!IS_ROOT(dentry)) {
-		parent = dentry->d_parent;
-		if (unlikely(!spin_trylock(&parent->d_lock))) {
-			if (inode)
-				spin_unlock(&inode->i_lock);
-			goto failed;
-		}
+	/*
+	 * Lock the inode. It will fail if the refcount
+	 * changed while trying to lock the inode.
+	 */
+	if (inode && !dentry_lock_inode(dentry))
+		goto out_ref_changed;
+
+	parent = lock_parent(dentry);
+
+	/*
+	 * Check refcount because it might have changed
+	 * if d_lock was temporarily dropped.
+	 */
+	if (unlikely(dentry->d_lockref.count != saved_count)) {
+		if (parent)
+			spin_unlock(&parent->d_lock);
+		if (inode)
+			spin_unlock(&inode->i_lock);
+		goto out_ref_changed;
+	}
+
+	/*
+	 * d_inode might have changed if d_lock was temporarily
+	 * dropped. If it changed it is necessary to start over
+	 * because a wrong inode (or no inode) lock is held.
+	 */
+	if (unlikely(inode != dentry->d_inode)) {
+		if (parent)
+			spin_unlock(&parent->d_lock);
+		if (inode)
+			spin_unlock(&inode->i_lock);
+		goto again;
 	}
 
 	__dentry_kill(dentry);
 	return parent;
 
-failed:
+out_ref_changed:
+	/*
+	 * The refcount was incremented while dentry->d_lock was dropped.
+	 * Just decrement the refcount, unlock, and tell the caller to
+	 * stop the directory walk.
+	 *
+	 * For paranoia reasons check whether the refcount is < 1. If so,
+	 * report the detection and avoid the decrement which would just
+	 * cause a problem in some other place. The warning might be
+	 * helpful to decode the root cause of the refcounting bug.
+	 */
+	if (!WARN_ON(dentry->d_lockref.count < 1))
+		dentry->d_lockref.count--;
+
 	spin_unlock(&dentry->d_lock);
-	return dentry; /* try again with same dentry */
+
+	return NULL;
 }
 
 /*
@@ -888,10 +928,8 @@  void dput(struct dentry *dentry)
 
 kill_it:
 	dentry = dentry_kill(dentry);
-	if (dentry) {
-		cond_resched();
+	if (dentry)
 		goto repeat;
-	}
 }
 EXPORT_SYMBOL(dput);