
[1/4] rcu/nocb: Protect lazy shrinker against concurrent (de-)offloading

Message ID 20230322194456.2331527-2-frederic@kernel.org
State Superseded
Series rcu/nocb: Shrinker related boring fixes

Commit Message

Frederic Weisbecker March 22, 2023, 7:44 p.m. UTC
The shrinker may run concurrently with callbacks (de-)offloading. As
such, calling rcu_nocb_lock() is very dangerous because it performs
conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
an imbalance.
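
For illustration, the conditional locking boils down to something like the
following simplified sketch (not the exact rcu_nocb_lock_irqsave() /
rcu_nocb_unlock_irqrestore() implementations):

	local_irq_save(flags);
	if (rcu_rdp_is_offloaded(rdp))		/* conditional lock */
		raw_spin_lock(&rdp->nocb_lock);
	/* ... flush/update nocb state ... */
	if (rcu_rdp_is_offloaded(rdp))		/* state may have changed! */
		raw_spin_unlock(&rdp->nocb_lock);
	local_irq_restore(flags);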

Fix this by protecting against (de-)offloading with the barrier mutex.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

Comments

Paul E. McKenney March 22, 2023, 11:18 p.m. UTC | #1
On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> The shrinker may run concurrently with callbacks (de-)offloading. As
> such, calling rcu_nocb_lock() is very dangerous because it does a
> conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> an imbalance.
> 
> Fix this with protecting against (de-)offloading using the barrier mutex.
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Good catch!!!  A few questions, comments, and speculations below.

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index f2280616f9d5..dd9b655ae533 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	/*
> +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> +	 * may be ignored or imbalanced.
> +	 */
> +	mutex_lock(&rcu_state.barrier_mutex);

I was worried about this possibly leading to out-of-memory deadlock,
but if I recall correctly, the (de-)offloading process never allocates
memory, so this should be OK?

The other concern was that the (de-)offloading operation might take a
long time, but the usual cause for that is huge numbers of callbacks,
in which case letting them free their memory is not necessarily a bad
strategy.

> +
>  	/* Snapshot count of all CPUs */
>  	for_each_possible_cpu(cpu) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> -		int _count = READ_ONCE(rdp->lazy_len);
> +		int _count;
> +
> +		if (!rcu_rdp_is_offloaded(rdp))
> +			continue;

If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Or can it contain garbage after a de-offloading operation?

> +		_count = READ_ONCE(rdp->lazy_len);
>  
>  		if (_count == 0)
>  			continue;
> +
>  		rcu_nocb_lock_irqsave(rdp, flags);
>  		WRITE_ONCE(rdp->lazy_len, 0);
>  		rcu_nocb_unlock_irqrestore(rdp, flags);
> @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  		if (sc->nr_to_scan <= 0)
>  			break;
>  	}
> +
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_STOP;
>  }
>  
> -- 
> 2.34.1
>
Joel Fernandes March 24, 2023, 12:55 a.m. UTC | #2
On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > The shrinker may run concurrently with callbacks (de-)offloading. As
> > such, calling rcu_nocb_lock() is very dangerous because it does a
> > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > an imbalance.
> > 
> > Fix this with protecting against (de-)offloading using the barrier mutex.
> > 
> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> 
> Good catch!!!  A few questions, comments, and speculations below.

Added a few more. ;)

> > ---
> >  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index f2280616f9d5..dd9b655ae533 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  	unsigned long flags;
> >  	unsigned long count = 0;
> >  
> > +	/*
> > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > +	 * may be ignored or imbalanced.
> > +	 */
> > +	mutex_lock(&rcu_state.barrier_mutex);
> 
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Maybe trylock is better, then? If we can't make progress, it may be better
to let kswapd free memory by other means than blocking on the mutex.
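
Something along these lines, as a rough sketch of that idea, replacing the
mutex_lock() at the top of lazy_rcu_shrink_scan() (whether to return 0 or
SHRINK_STOP on contention is just a guess here):

	if (!mutex_trylock(&rcu_state.barrier_mutex))
		return SHRINK_STOP;	/* don't block reclaim on the mutex */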

ISTR from my Android days that there are weird lockdep issues that happen
when locking in a shrinker (due to the 'fake lock' dependency added during
reclaim).

> The other concern was that the (de-)offloading operation might take a
> long time, but the usual cause for that is huge numbers of callbacks,
> in which case letting them free their memory is not necessarily a bad
> strategy.
> 
> > +
> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> 
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?

Did you mean de-offloaded? If it is offloaded, that means nocb is active so
there could be lazy CBs queued. Or did I miss something?

thanks,

 - Joel


> Or can it contain garbage after a de-offloading operation?
> 
> > +		_count = READ_ONCE(rdp->lazy_len);
> >  
> >  		if (_count == 0)
> >  			continue;
> > +
> >  		rcu_nocb_lock_irqsave(rdp, flags);
> >  		WRITE_ONCE(rdp->lazy_len, 0);
> >  		rcu_nocb_unlock_irqrestore(rdp, flags);
> > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  		if (sc->nr_to_scan <= 0)
> >  			break;
> >  	}
> > +
> > +	mutex_unlock(&rcu_state.barrier_mutex);
> > +
> >  	return count ? count : SHRINK_STOP;
> >  }
> >  
> > -- 
> > 2.34.1
> >
Paul E. McKenney March 24, 2023, 1:06 a.m. UTC | #3
On Fri, Mar 24, 2023 at 12:55:23AM +0000, Joel Fernandes wrote:
> On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > On Wed, Mar 22, 2023 at 08:44:53PM +0100, Frederic Weisbecker wrote:
> > > The shrinker may run concurrently with callbacks (de-)offloading. As
> > > such, calling rcu_nocb_lock() is very dangerous because it does a
> > > conditional locking. The worst outcome is that rcu_nocb_lock() doesn't
> > > lock but rcu_nocb_unlock() eventually unlocks, or the reverse, creating
> > > an imbalance.
> > > 
> > > Fix this with protecting against (de-)offloading using the barrier mutex.
> > > 
> > > Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> > 
> > Good catch!!!  A few questions, comments, and speculations below.
> 
> Added a few more. ;)
> 
> > > ---
> > >  kernel/rcu/tree_nocb.h | 17 ++++++++++++++++-
> > >  1 file changed, 16 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index f2280616f9d5..dd9b655ae533 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  	unsigned long flags;
> > >  	unsigned long count = 0;
> > >  
> > > +	/*
> > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > +	 * may be ignored or imbalanced.
> > > +	 */
> > > +	mutex_lock(&rcu_state.barrier_mutex);
> > 
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
> 
> Maybe trylock is better then? If we can't make progress, may be better to let
> kswapd free memory by other means than blocking on the mutex.
> 
> ISTR, from my Android days that there are weird lockdep issues that happen
> when locking in a shrinker (due to the 'fake lock' dependency added during
> reclaim).

This stuff gets tricky quickly.  ;-)

> > The other concern was that the (de-)offloading operation might take a
> > long time, but the usual cause for that is huge numbers of callbacks,
> > in which case letting them free their memory is not necessarily a bad
> > strategy.
> > 
> > > +
> > >  	/* Snapshot count of all CPUs */
> > >  	for_each_possible_cpu(cpu) {
> > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > +		int _count;
> > > +
> > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > +			continue;
> > 
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> 
> Did you mean de-offloaded? If it is offloaded, that means nocb is active so
> there could be lazy CBs queued. Or did I miss something?

You are quite right, I meant de-offloaded for ->lazy_len to be guaranteed zero.

							Thanx, Paul.

> thanks,
> 
>  - Joel
> 
> 
> > Or can it contain garbage after a de-offloading operation?
> > 
> > > +		_count = READ_ONCE(rdp->lazy_len);
> > >  
> > >  		if (_count == 0)
> > >  			continue;
> > > +
> > >  		rcu_nocb_lock_irqsave(rdp, flags);
> > >  		WRITE_ONCE(rdp->lazy_len, 0);
> > >  		rcu_nocb_unlock_irqrestore(rdp, flags);
> > > @@ -1352,6 +1364,9 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  		if (sc->nr_to_scan <= 0)
> > >  			break;
> > >  	}
> > > +
> > > +	mutex_unlock(&rcu_state.barrier_mutex);
> > > +
> > >  	return count ? count : SHRINK_STOP;
> > >  }
> > >  
> > > -- 
> > > 2.34.1
> > >
Frederic Weisbecker March 24, 2023, 10:09 p.m. UTC | #4
On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >  	unsigned long flags;
> >  	unsigned long count = 0;
> >  
> > +	/*
> > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > +	 * may be ignored or imbalanced.
> > +	 */
> > +	mutex_lock(&rcu_state.barrier_mutex);
> 
> I was worried about this possibly leading to out-of-memory deadlock,
> but if I recall correctly, the (de-)offloading process never allocates
> memory, so this should be OK?

Good point. It _should_ be fine, but as you, Joel and Hillf pointed out,
it's asking for trouble.

We could try Joel's idea to use mutex_trylock() as a best effort, which
should be fine as it's mostly uncontended.

The alternative is to force nocb locking and check the offloading state
right after. So instead of:

	rcu_nocb_lock_irqsave(rdp, flags);
	//flush stuff
	rcu_nocb_unlock_irqrestore(rdp, flags);

Have:

	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
	if (!rcu_rdp_is_offloaded(rdp)) {
		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
		continue;
	}
	//flush stuff
	rcu_nocb_unlock_irqrestore(rdp, flags);

But it's not pretty and also disqualifies the last two patches as
rcu_nocb_mask can't be iterated safely anymore.

What do you think?

> >  	/* Snapshot count of all CPUs */
> >  	for_each_possible_cpu(cpu) {
> >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > -		int _count = READ_ONCE(rdp->lazy_len);
> > +		int _count;
> > +
> > +		if (!rcu_rdp_is_offloaded(rdp))
> > +			continue;
> 
> If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> 
> Or can it contain garbage after a de-offloading operation?

If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
Bypass is flushed and disabled atomically early on de-offloading and the
flush resets ->lazy_len.

Thanks.
Paul E. McKenney March 24, 2023, 10:51 p.m. UTC | #5
On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > >  	unsigned long flags;
> > >  	unsigned long count = 0;
> > >  
> > > +	/*
> > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > +	 * may be ignored or imbalanced.
> > > +	 */
> > > +	mutex_lock(&rcu_state.barrier_mutex);
> > 
> > I was worried about this possibly leading to out-of-memory deadlock,
> > but if I recall correctly, the (de-)offloading process never allocates
> > memory, so this should be OK?
> 
> Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> it's asking for trouble.
> 
> We could try Joel's idea to use mutex_trylock() as a best effort, which
> should be fine as it's mostly uncontended.
> 
> The alternative is to force nocb locking and check the offloading state
> right after. So instead of:
> 
> 	rcu_nocb_lock_irqsave(rdp, flags);
> 	//flush stuff
> 	rcu_nocb_unlock_irqrestore(rdp, flags);
> 
> Have:
> 
> 	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> 	if (!rcu_rdp_is_offloaded(rdp)) {
> 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> 		continue;
> 	}
> 	//flush stuff
> 	rcu_nocb_unlock_irqrestore(rdp, flags);
> 
> But it's not pretty and also disqualifies the last two patches as
> rcu_nocb_mask can't be iterated safely anymore.
> 
> What do you think?

The mutex_trylock() approach does have the advantage of simplicity,
and as you say should do well given low contention.

Which reminds me, what sort of test strategy did you have in mind?
Memory exhaustion can have surprising effects.

> > >  	/* Snapshot count of all CPUs */
> > >  	for_each_possible_cpu(cpu) {
> > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > +		int _count;
> > > +
> > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > +			continue;
> > 
> > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > 
> > Or can it contain garbage after a de-offloading operation?
> 
> If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> Bypass is flushed and disabled atomically early on de-offloading and the
> flush resets ->lazy_len.

Whew!  At the moment, I don't feel strongly about whether or not
the following code should (1) read the value, (2) warn on non-zero,
(3) assume zero without reading, or (4) some other option that is not
occurring to me.  Your choice!

							Thanx, Paul
Frederic Weisbecker March 26, 2023, 8:01 p.m. UTC | #6
On Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney wrote:
> On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > >  	unsigned long flags;
> > > >  	unsigned long count = 0;
> > > >  
> > > > +	/*
> > > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > +	 * may be ignored or imbalanced.
> > > > +	 */
> > > > +	mutex_lock(&rcu_state.barrier_mutex);
> > > 
> > > I was worried about this possibly leading to out-of-memory deadlock,
> > > but if I recall correctly, the (de-)offloading process never allocates
> > > memory, so this should be OK?
> > 
> > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > it's asking for trouble.
> > 
> > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > should be fine as it's mostly uncontended.
> > 
> > The alternative is to force nocb locking and check the offloading state
> > right after. So instead of:
> > 
> > 	rcu_nocb_lock_irqsave(rdp, flags);
> > 	//flush stuff
> > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > 
> > Have:
> > 
> > 	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> > 	if (!rcu_rdp_is_offloaded(rdp)) {
> > 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> > 		continue;
> > 	}
> > 	//flush stuff
> > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > 
> > But it's not pretty and also disqualifies the last two patches as
> > rcu_nocb_mask can't be iterated safely anymore.
> > 
> > What do you think?
> 
> The mutex_trylock() approach does have the advantage of simplicity,
> and as you say should do well given low contention.
> 
> Which reminds me, what sort of test strategy did you have in mind?
> Memory exhaustion can have surprising effects.

The best I can do is to trigger the count and scan callbacks through
the shrinker debugfs and see if it crashes or not :-)

> 
> > > >  	/* Snapshot count of all CPUs */
> > > >  	for_each_possible_cpu(cpu) {
> > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > +		int _count;
> > > > +
> > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > +			continue;
> > > 
> > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > 
> > > Or can it contain garbage after a de-offloading operation?
> > 
> > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > Bypass is flushed and disabled atomically early on de-offloading and the
> > flush resets ->lazy_len.
> 
> Whew!  At the moment, I don't feel strongly about whether or not
> the following code should (1) read the value, (2) warn on non-zero,
> (3) assume zero without reading, or (4) some other option that is not
> occurring to me.  Your choice!

(2) looks like a good idea!
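
As a sketch, that could look something like this in the scan loop (exact
form to be settled in the next version):

	if (!rcu_rdp_is_offloaded(rdp)) {
		WARN_ON_ONCE(READ_ONCE(rdp->lazy_len));
		continue;
	}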

Thanks.
Paul E. McKenney March 26, 2023, 9:45 p.m. UTC | #7
On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> On Fri, Mar 24, 2023 at 03:51:54PM -0700, Paul E. McKenney wrote:
> > On Fri, Mar 24, 2023 at 11:09:08PM +0100, Frederic Weisbecker wrote:
> > > On Wed, Mar 22, 2023 at 04:18:24PM -0700, Paul E. McKenney wrote:
> > > > > @@ -1336,13 +1336,25 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> > > > >  	unsigned long flags;
> > > > >  	unsigned long count = 0;
> > > > >  
> > > > > +	/*
> > > > > +	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
> > > > > +	 * may be ignored or imbalanced.
> > > > > +	 */
> > > > > +	mutex_lock(&rcu_state.barrier_mutex);
> > > > 
> > > > I was worried about this possibly leading to out-of-memory deadlock,
> > > > but if I recall correctly, the (de-)offloading process never allocates
> > > > memory, so this should be OK?
> > > 
> > > Good point. It _should_ be fine but like you, Joel and Hillf pointed out
> > > it's asking for trouble.
> > > 
> > > We could try Joel's idea to use mutex_trylock() as a best effort, which
> > > should be fine as it's mostly uncontended.
> > > 
> > > The alternative is to force nocb locking and check the offloading state
> > > right after. So instead of:
> > > 
> > > 	rcu_nocb_lock_irqsave(rdp, flags);
> > > 	//flush stuff
> > > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > > 
> > > Have:
> > > 
> > > 	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
> > > 	if (!rcu_rdp_is_offloaded(rdp)) {
> > > 		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> > > 		continue;
> > > 	}
> > > 	//flush stuff
> > > 	rcu_nocb_unlock_irqrestore(rdp, flags);
> > > 
> > > But it's not pretty and also disqualifies the last two patches as
> > > rcu_nocb_mask can't be iterated safely anymore.
> > > 
> > > What do you think?
> > 
> > The mutex_trylock() approach does have the advantage of simplicity,
> > and as you say should do well given low contention.
> > 
> > Which reminds me, what sort of test strategy did you have in mind?
> > Memory exhaustion can have surprising effects.
> 
> The best I can do is to trigger the count and scan callbacks through
> the shrinker debugfs and see if it crashes or not :-)

Sounds like a good start.  Maybe also a good finish?  ;-)

> > > > >  	/* Snapshot count of all CPUs */
> > > > >  	for_each_possible_cpu(cpu) {
> > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > +		int _count;
> > > > > +
> > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > +			continue;
> > > > 
> > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > 
> > > > Or can it contain garbage after a de-offloading operation?
> > > 
> > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > flush resets ->lazy_len.
> > 
> > Whew!  At the moment, I don't feel strongly about whether or not
> > the following code should (1) read the value, (2) warn on non-zero,
> > (3) assume zero without reading, or (4) some other option that is not
> > occurring to me.  Your choice!
> 
> (2) looks like a good idea!

Sounds good to me!

							Thanx, Paul
Frederic Weisbecker March 29, 2023, 4:07 p.m. UTC | #8
On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > >  	/* Snapshot count of all CPUs */
> > > > > >  	for_each_possible_cpu(cpu) {
> > > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > > +		int _count;
> > > > > > +
> > > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > > +			continue;
> > > > > 
> > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > > 
> > > > > Or can it contain garbage after a de-offloading operation?
> > > > 
> > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > flush resets ->lazy_len.
> > > 
> > > Whew!  At the moment, I don't feel strongly about whether or not
> > > the following code should (1) read the value, (2) warn on non-zero,
> > > (3) assume zero without reading, or (4) some other option that is not
> > > occurring to me.  Your choice!
> > 
> > (2) looks like a good idea!
> 
> Sounds good to me!

So since we now iterate rcu_nocb_mask after the patchset, there are no more
de-offloaded rdp's to check. Meanwhile I put a WARN in the new series making
sure that an rdp in rcu_nocb_mask is also offloaded (heh!)
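
Roughly along these lines, as a sketch (not the exact code from the new
series):

	for_each_cpu(cpu, rcu_nocb_mask) {
		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

		if (WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp)))
			continue;
		/* snapshot and flush ->lazy_len as before */
	}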
Paul E. McKenney March 29, 2023, 8:45 p.m. UTC | #9
On Wed, Mar 29, 2023 at 06:07:58PM +0200, Frederic Weisbecker wrote:
> On Sun, Mar 26, 2023 at 02:45:18PM -0700, Paul E. McKenney wrote:
> > On Sun, Mar 26, 2023 at 10:01:34PM +0200, Frederic Weisbecker wrote:
> > > > > > >  	/* Snapshot count of all CPUs */
> > > > > > >  	for_each_possible_cpu(cpu) {
> > > > > > >  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
> > > > > > > -		int _count = READ_ONCE(rdp->lazy_len);
> > > > > > > +		int _count;
> > > > > > > +
> > > > > > > +		if (!rcu_rdp_is_offloaded(rdp))
> > > > > > > +			continue;
> > > > > > 
> > > > > > If the CPU is offloaded, isn't ->lazy_len guaranteed to be zero?
> > > > > > 
> > > > > > Or can it contain garbage after a de-offloading operation?
> > > > > 
> > > > > If it's deoffloaded, ->lazy_len is indeed (supposed to be) guaranteed to be zero.
> > > > > Bypass is flushed and disabled atomically early on de-offloading and the
> > > > > flush resets ->lazy_len.
> > > > 
> > > > Whew!  At the moment, I don't feel strongly about whether or not
> > > > the following code should (1) read the value, (2) warn on non-zero,
> > > > (3) assume zero without reading, or (4) some other option that is not
> > > > occurring to me.  Your choice!
> > > 
> > > (2) looks like a good idea!
> > 
> > Sounds good to me!
> 
> So since we now iterate rcu_nocb_mask after the patchset, there is no more
> deoffloaded rdp to check. Meanwhile I put a WARN in the new series making
> sure that an rdp in rcu_nocb_mask is also offloaded (heh!)

Sounds good, thank you!

							Thanx, Paul

Patch

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index f2280616f9d5..dd9b655ae533 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1336,13 +1336,25 @@  lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	unsigned long flags;
 	unsigned long count = 0;
 
+	/*
+	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
+	 * may be ignored or imbalanced.
+	 */
+	mutex_lock(&rcu_state.barrier_mutex);
+
 	/* Snapshot count of all CPUs */
 	for_each_possible_cpu(cpu) {
 		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-		int _count = READ_ONCE(rdp->lazy_len);
+		int _count;
+
+		if (!rcu_rdp_is_offloaded(rdp))
+			continue;
+
+		_count = READ_ONCE(rdp->lazy_len);
 
 		if (_count == 0)
 			continue;
+
 		rcu_nocb_lock_irqsave(rdp, flags);
 		WRITE_ONCE(rdp->lazy_len, 0);
 		rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1352,6 +1364,9 @@  lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (sc->nr_to_scan <= 0)
 			break;
 	}
+
+	mutex_unlock(&rcu_state.barrier_mutex);
+
 	return count ? count : SHRINK_STOP;
 }