
[1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

Message ID 20230329160203.191380-2-frederic@kernel.org (mailing list archive)
State Accepted
Commit a248edc739a384c835db1ce9ab486f3327647467
Series rcu/nocb: Shrinker related boring fixes

Commit Message

Frederic Weisbecker March 29, 2023, 4:02 p.m. UTC
The shrinker may run concurrently with callbacks (de-)offloading. As
such, calling rcu_nocb_lock() is very dangerous because it performs
conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
an imbalance.

Fix this by protecting against (de-)offloading with the barrier mutex.
If the barrier mutex is contended, which should be rare, then step
aside so as not to trigger a mutex vs. allocation dependency chain.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

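For background, the "conditional locking" the changelog refers to works roughly as in the sketch below (a simplified rendering of the helpers in kernel/rcu/tree_nocb.h, not the exact kernel code; the real helpers also deal with IRQ flags and lockdep annotations). Both sides consult the rdp's offloaded state, so a (de-)offloading transition between the two calls leaves one side skipped.

/* Simplified sketch of the conditional nocb locking scheme. */
static void rcu_nocb_lock(struct rcu_data *rdp)
{
	if (!rcu_rdp_is_offloaded(rdp))
		return;				/* no-op while not offloaded */
	raw_spin_lock(&rdp->nocb_lock);
}

static void rcu_nocb_unlock(struct rcu_data *rdp)
{
	if (!rcu_rdp_is_offloaded(rdp))
		return;				/* no-op while not offloaded */
	raw_spin_unlock(&rdp->nocb_lock);
}

If the rdp is de-offloaded after rcu_nocb_lock() acquired nocb_lock but before the matching unlock runs, the unlock is skipped and the lock stays held; the opposite transition releases a lock that was never taken. Holding rcu_state.barrier_mutex, which the (de-)offloading path also takes, keeps the offloaded state stable across the critical section.
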
Comments

Paul E. McKenney March 29, 2023, 8:44 p.m. UTC | #1
On Wed, Mar 29, 2023 at 06:02:00PM +0200, Frederic Weisbecker wrote:
> The shrinker may run concurrently with callbacks (de-)offloading. As
> such, calling rcu_nocb_lock() is very dangerous because it performs
> conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> an imbalance.
> 
> Fix this by protecting against (de-)offloading with the barrier mutex.
> If the barrier mutex is contended, which should be rare, then step
> aside so as not to trigger a mutex vs. allocation dependency chain.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 25 ++++++++++++++++++++++++-
>  1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f2280616f9d5..1a86883902ce 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1336,13 +1336,33 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	/*
> +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> +	 * may be ignored or imbalanced.
> +	 */
> +	if (!mutex_trylock(&rcu_state.barrier_mutex)) {

This looks much better, thank you!

> +		/*
> +		 * But really don't insist if barrier_mutex is contended since we
> +		 * can't guarantee that it will never engage in a dependency
> +		 * chain involving memory allocation. The lock is seldom contended
> +		 * anyway.
> +		 */
> +		return 0;
> +	}
> +
>  	/* Snapshot count of all CPUs */
>  	for_each_possible_cpu(cpu) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> -		int _count = READ_ONCE(rdp->lazy_len);
> +		int _count;
> +
> +		if (!rcu_rdp_is_offloaded(rdp))
> +			continue;
> +
> +		_count = READ_ONCE(rdp->lazy_len);
>  
>  		if (_count == 0)
>  			continue;
> +

And I just might have unconfused myself here.  We get here only if this
CPU is offloaded, in which case it might also have non-zero ->lazy_len,
so this is in fact *not* dead code.

>  		rcu_nocb_lock_irqsave(rdp, flags);
>  		WRITE_ONCE(rdp->lazy_len, 0);
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
> @@ -1352,6 +1372,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (sc->nr_to_scan <= 0)
>  			break;
>  	}
> +
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_STOP;
>  }
>  
> -- 
> 2.34.1
>
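
The "mutex vs. allocation dependency chain" that the new comment guards against is worth spelling out: shrinkers are invoked from memory reclaim, and the task currently holding barrier_mutex may itself be allocating memory and thus triggering that very reclaim, so a blocking mutex_lock() in the scan callback could end up waiting on itself. A minimal sketch of the trylock pattern, with hypothetical names (example_mutex, example_shrink_scan) standing in for the real ones:

#include <linux/mutex.h>
#include <linux/shrinker.h>

static DEFINE_MUTEX(example_mutex);

/* Hypothetical scan callback illustrating the trylock-under-reclaim rule. */
static unsigned long example_shrink_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	unsigned long freed = 0;

	/*
	 * Never block here:
	 *
	 *   mutex_lock(&example_mutex);          <- some other task
	 *   kmalloc(..., GFP_KERNEL);
	 *     -> direct reclaim
	 *        -> example_shrink_scan()
	 *           mutex_lock(&example_mutex);  <- would wait forever
	 */
	if (!mutex_trylock(&example_mutex))
		return 0;	/* nothing scanned this time, try again later */

	/* ... walk the cache, free objects, update freed ... */

	mutex_unlock(&example_mutex);
	return freed ? freed : SHRINK_STOP;
}

Returning 0 on contention, as the patch does, simply reports that nothing was scanned; returning SHRINK_STOP instead would, roughly, tell reclaim to stop calling this shrinker for the current pass.
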
Frederic Weisbecker March 29, 2023, 9:18 p.m. UTC | #2
On Wed, Mar 29, 2023 at 01:44:53PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 29, 2023 at 06:02:00PM +0200, Frederic Weisbecker wrote:
> > +		/*
> > +		 * But really don't insist if barrier_mutex is contended since we
> > +		 * can't guarantee that it will never engage in a dependency
> > +		 * chain involving memory allocation. The lock is seldom contended
> > +		 * anyway.
> > +		 */
> > +		return 0;
> > +	}
> > +
> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> > +
> > +		_count = READ_ONCE(rdp->lazy_len);
> >  
> >  		if (_count == 0)
> >  			continue;
> > +
> 
> And I just might have unconfused myself here.  We get here only if this
> CPU is offloaded, in which case it might also have non-zero ->lazy_len,
> so this is in fact *not* dead code.

Right. Now whether it's really alive remains to be proven ;)
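
For readers less familiar with the shrinker side, lazy_rcu_shrink_scan() is the scan half of an ordinary count/scan shrinker pair. Under the v6.3-era API this patch targets (the API has since been reworked around shrinker_alloc()), registration looks roughly like the hypothetical sketch below; none of these names are the actual rcu-lazy ones:

#include <linux/init.h>
#include <linux/shrinker.h>

static unsigned long example_nr_lazy;	/* maintained elsewhere */

static unsigned long example_shrink_count(struct shrinker *shrink,
					  struct shrink_control *sc)
{
	/* Returning 0 tells reclaim there is nothing worth scanning. */
	return READ_ONCE(example_nr_lazy);
}

static struct shrinker example_shrinker = {
	.count_objects	= example_shrink_count,
	.scan_objects	= example_shrink_scan,	/* e.g. the sketch earlier */
	.seeks		= DEFAULT_SEEKS,
};

static int __init example_shrinker_init(void)
{
	/* The name argument is part of the API since v6.0. */
	return register_shrinker(&example_shrinker, "example-lazy");
}
late_initcall(example_shrinker_init);
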

Patch

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f2280616f9d5..1a86883902ce 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1336,13 +1336,33 @@  lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	/*
+	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
+	 * may be ignored or imbalanced.
+	 */
+	if (!mutex_trylock(&rcu_state.barrier_mutex)) {
+		/*
+		 * But really don't insist if barrier_mutex is contended since we
+		 * can't guarantee that it will never engage in a dependency
+		 * chain involving memory allocation. The lock is seldom contended
+		 * anyway.
+		 */
+		return 0;
+	}
+
 	/* Snapshot count of all CPUs */
 	for_each_possible_cpu(cpu) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-		int _count = READ_ONCE(rdp->lazy_len);
+		int _count;
+
+		if (!rcu_rdp_is_offloaded(rdp))
+			continue;
+
+		_count = READ_ONCE(rdp->lazy_len);
 
 		if (_count == 0)
 			continue;
+
 		rcu_nocb_lock_irqsave(rdp, flags);
 		WRITE_ONCE(rdp->lazy_len, 0);
 		rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1352,6 +1372,9 @@  lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (sc->nr_to_scan <= 0)
 			break;
 	}
+
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_STOP;
 }