[03/10] rcu/nocb: Remove needless LOAD-ACQUIRE

Message ID 20230908203603.5865-4-frederic@kernel.org (mailing list archive)
State Accepted
Commit e07c4343f6456a7108279a58f7f7791e3c020d9f
Series rcu cleanups

Commit Message

Frederic Weisbecker Sept. 8, 2023, 8:35 p.m. UTC
The LOAD-ACQUIRE access performed on rdp->nocb_cb_sleep advertises
ordering of callback execution against grace period completion. However
this is contradicted by the following:

* This LOAD-ACQUIRE doesn't pair with anything. The only counterpart
  barrier that can be found is the smp_mb() placed after callbacks
  advancing in nocb_gp_wait(). However the barrier is placed _after_
  ->nocb_cb_sleep write.

* Callbacks can be concurrently advanced between the LOAD-ACQUIRE on
  ->nocb_cb_sleep and the call to rcu_segcblist_extract_done_cbs() in
  rcu_do_batch(), making any ordering based on ->nocb_cb_sleep broken.

* Both rcu_segcblist_extract_done_cbs() and rcu_advance_cbs() are called
  under the nocb_lock, which itself already provides the desired
  ACQUIRE semantics.

Therefore it is safe to access ->nocb_cb_sleep with a simple compiler
barrier.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/rcu/tree_nocb.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
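
For readers less familiar with the distinction, a minimal userspace C11 analogue
of what the patch changes (hypothetical names; READ_ONCE() is a volatile access
in the kernel, roughly modelled here as a relaxed atomic load):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool nocb_cb_sleep;

/* Before the patch: an acquire load, ordered before every later
 * load and store performed by this thread. */
static bool read_sleep_acquire(void)
{
        return atomic_load_explicit(&nocb_cb_sleep, memory_order_acquire);
}

/* After the patch: only a single, non-torn read is guaranteed; any
 * ordering against the callback lists has to come from elsewhere
 * (per the commit message: the nocb_lock critical sections). */
static bool read_sleep_once(void)
{
        return atomic_load_explicit(&nocb_cb_sleep, memory_order_relaxed);
}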

Comments

Joel Fernandes Sept. 9, 2023, 1:48 a.m. UTC | #1
On Fri, Sep 8, 2023 at 4:36 PM Frederic Weisbecker <frederic@kernel.org> wrote:
>
> The LOAD-ACQUIRE access performed on rdp->nocb_cb_sleep advertises
> ordering of callback execution against grace period completion. However
> this is contradicted by the following:
>
> * This LOAD-ACQUIRE doesn't pair with anything. The only counterpart
>   barrier that can be found is the smp_mb() placed after callbacks
>   advancing in nocb_gp_wait(). However the barrier is placed _after_
>   ->nocb_cb_sleep write.

Hmm, on one side you have:

WRITE_ONCE(rdp->nocb_cb_sleep, false);
smp_mb();
swake_up_one(&rdp->nocb_cb_wq);   /* wakeup -- consider this to be a STORE */

And on the other side you have:
swait_event_interruptible_exclusive(rdp->nocb_cb_wq, ..cond..)  /* consider this to be a LOAD */
smp_load_acquire(&rdp->nocb_cb_sleep)
/* exec CBs (LOAD operations) */

So there seems to be pairing AFAICS.
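
Spelled out as a minimal userspace C11 sketch (hypothetical names; the internal
locking of the swait primitives is deliberately not modelled here):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>

static atomic_int done_cbs;             /* stands in for the advanced callbacks */
static atomic_bool cb_sleep = true;     /* stands in for rdp->nocb_cb_sleep */
static atomic_bool woken;               /* stands in for the swait wakeup */

static int gp_side(void *arg)           /* the nocb_gp_wait()-like side */
{
        atomic_store_explicit(&done_cbs, 42, memory_order_relaxed);
        atomic_store_explicit(&cb_sleep, false, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);              /* the smp_mb() */
        atomic_store_explicit(&woken, true, memory_order_relaxed); /* the wakeup "STORE" */
        return 0;
}

static int cb_side(void *arg)           /* the nocb_cb_wait()-like side */
{
        while (!atomic_load_explicit(&woken, memory_order_relaxed))
                ;                                               /* the swait_event() "LOAD" */
        if (!atomic_load_explicit(&cb_sleep, memory_order_acquire))
                printf("done_cbs = %d\n",                       /* "exec CBs" */
                       atomic_load_explicit(&done_cbs, memory_order_relaxed));
        return 0;
}

int main(void)
{
        thrd_t a, b;
        thrd_create(&b, cb_side, NULL);
        thrd_create(&a, gp_side, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        return 0;
}

Whether gp_side()'s store to done_cbs is guaranteed to be visible to cb_side()
after its acquire load is exactly the pairing question at stake here.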

But maybe you are referring to pairing between advancing the callbacks
and storing to nocb_cb_sleep. In this case, the RELEASE of the nocb
unlock operation just after advancing should be providing the
ordering, but we still need the acquire this patch deletes.

> * Callbacks can be concurrently advanced between the LOAD-ACQUIRE on
>   ->nocb_cb_sleep and the call to rcu_segcblist_extract_done_cbs() in
>   rcu_do_batch(), making any ordering based on ->nocb_cb_sleep broken.

If you don't mind, could you elaborate more?

> * Both rcu_segcblist_extract_done_cbs() and rcu_advance_cbs() are called
>   under the nocb_lock, which itself already provides the desired
>   ACQUIRE semantics.

The acquire orders the load of nocb_cb_sleep before all later loads/stores.
I am not sure how the nocb_lock gives the same behavior, since that is
doing the ACQUIRE on the lock access itself and not on the nocb_cb_sleep
access; I'd appreciate it if we can debate this out.

Every few months I need a memory-ordering workout so this can be that.
;-) You could be onto something.

thanks,

 - Joel



>
> Therefore it is safe to access ->nocb_cb_sleep with a simple compiler
> barrier.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index b9eab359c597..6e63ba4788e1 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -933,8 +933,7 @@ static void nocb_cb_wait(struct rcu_data *rdp)
>                 swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
>                                                     nocb_cb_wait_cond(rdp));
>
> -               // VVV Ensure CB invocation follows _sleep test.
> -               if (smp_load_acquire(&rdp->nocb_cb_sleep)) { // ^^^
> +               if (READ_ONCE(rdp->nocb_cb_sleep)) {
>                         WARN_ON(signal_pending(current));
>                         trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
>                 }
> --
> 2.41.0
>
Joel Fernandes Sept. 9, 2023, 1:50 a.m. UTC | #2
> > * Callbacks can be concurrently advanced between the LOAD-ACQUIRE on
> >   ->nocb_cb_sleep and the call to rcu_segcblist_extract_done_cbs() in
> >   rcu_do_batch(), making any ordering based on ->nocb_cb_sleep broken.
>
> If you don't mind, could you elaborate more?

Ah, I see you deleted the counterpart memory barrier in the next
patch. I was reading the patches in order, so I did not notice. I'll
go read that as well. It might make sense to combine this and the next
patch, not sure.

 - Joel
Frederic Weisbecker Sept. 10, 2023, 9:17 p.m. UTC | #3
On Fri, Sep 08, 2023 at 09:48:44PM -0400, Joel Fernandes wrote:
> On Fri, Sep 8, 2023 at 4:36 PM Frederic Weisbecker <frederic@kernel.org> wrote:
> >
> > The LOAD-ACQUIRE access performed on rdp->nocb_cb_sleep advertises
> > ordering of callback execution against grace period completion. However
> > this is contradicted by the following:
> >
> > * This LOAD-ACQUIRE doesn't pair with anything. The only counterpart
> >   barrier that can be found is the smp_mb() placed after callbacks
> >   advancing in nocb_gp_wait(). However the barrier is placed _after_
> >   ->nocb_cb_sleep write.
> 
> Hmm, on one side you have:
> 
> WRITE_ONCE(rdp->nocb_cb_sleep, false);
> smp_mb();
> swake_up_one(&rdp->nocb_cb_wq);   /* wakeup -- consider this to be a STORE */
> 
> And on another side you have:
> swait_event_interruptible_exclusive(rdp->nocb_cb_wq, ..cond..) /*
> consider this to be a LOAD */
> smp_load_acquire(&rdp->nocb_cb_sleep)
> /* exec CBs (LOAD operations) */
> 
> So there seems to be pairing AFAICS.

I must be confused; that would give the following pattern:

         WRITE X                LOAD Y
         smp_mb()
         WRITE Y                smp_load_acquire(X)

How does this pair?

> 
> But maybe you are referring to pairing between advancing the callbacks
> and storing to nocb_cb_sleep. In this case, the RELEASE of the nocb
> unlock operation just after advancing should be providing the
> ordering

Right.

> but we still need the acquire this patch deletes.

Why?

> 
> > * Callbacks can be concurrently advanced between the LOAD-ACQUIRE on
> >   ->nocb_cb_sleep and the call to rcu_segcblist_extract_done_cbs() in
> >   rcu_do_batch(), making any ordering based on ->nocb_cb_sleep broken.
> 
> If you don't mind, could you elaborate more?

So imagine:

1) Some callbacks are pending
2) A grace period completes; nocb_gp_wait() advances some callbacks to DONE and
   some to WAIT, and another grace period starts to handle the latter.
3) Because some callbacks are ready to invoke, nocb_gp_wait() sets
   rdp->nocb_cb_sleep to false and wakes up nocb_cb_wait()
4) nocb_cb_wait() does smp_load_acquire(&rdp->nocb_cb_sleep) and is about to
   proceed with rcu_do_batch(), but it gets preempted right before.
5) The new grace period completes.
6) nocb_gp_wait() does one more round and advances the WAIT callbacks to the
   non-empty DONE segment. It doesn't need to wake up nocb_cb_wait(), since that
   is already pending and ->nocb_cb_sleep is still false, but it forcibly writes
   ->nocb_cb_sleep to false again.
7) nocb_cb_wait() resumes and calls rcu_do_batch() without doing a new
   load-acquire on ->nocb_cb_sleep. This means the ordering only applies to the
   callbacks that were moved to DONE in step 2) but not to those moved to DONE
   in step 6).

> 
> > * Both rcu_segcblist_extract_done_cbs() and rcu_advance_cbs() are called
> >   under the nocb_lock, which itself already provides the desired
> >   ACQUIRE semantics.
> 
> The acquire orders loads to nocb_cb_sleep with all later loads/stores.
> I am not sure how nocb_lock gives that same behavior since that's
> doing ACQUIRE on the lock access itself and not on nocb_cb_sleep
> access, I'd appreciate it if we can debate this out.

Well, the nocb_lock release orders not only the write to nocb_cb_sleep but also
everything that precedes it. So it plays the same role and, most importantly,
the lock is acquired before calling rcu_segcblist_extract_done_cbs().
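
As a minimal userspace C11 sketch of that argument (hypothetical names; a plain
mutex stands in for the nocb_lock):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>

static mtx_t nocb_lock;                 /* stands in for rdp->nocb_lock */
static int done_cbs;                    /* stands in for the DONE segment */
static atomic_bool cb_sleep = true;     /* stands in for rdp->nocb_cb_sleep */

static int gp_side(void *arg)           /* the nocb_gp_wait()-like side */
{
        mtx_lock(&nocb_lock);
        done_cbs = 42;                  /* "advance callbacks" */
        atomic_store_explicit(&cb_sleep, false, memory_order_relaxed);
        mtx_unlock(&nocb_lock);         /* RELEASE: covers both stores above */
        return 0;
}

static int cb_side(void *arg)           /* the rcu_do_batch()-like side */
{
        /* A plain read is enough here: the ordering comes from the lock below. */
        while (atomic_load_explicit(&cb_sleep, memory_order_relaxed))
                ;
        mtx_lock(&nocb_lock);           /* ACQUIRE: pairs with the unlock above */
        printf("extracted %d\n", done_cbs); /* "rcu_segcblist_extract_done_cbs()" */
        mtx_unlock(&nocb_lock);
        return 0;
}

int main(void)
{
        thrd_t a, b;
        mtx_init(&nocb_lock, mtx_plain);
        thrd_create(&b, cb_side, NULL);
        thrd_create(&a, gp_side, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        mtx_destroy(&nocb_lock);
        return 0;
}

Here the reader needs no acquire on cb_sleep at all: by the time it holds the
lock, everything the other side did before its unlock, including the callback
advancing, is guaranteed to be visible.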

> 
> Every few months I need a memory-ordering workout so this can be that.
> ;-) You could be onto something.

No worries, I have some more headaches coming up on the plate for all of us ;-)

Thanks.

Patch

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index b9eab359c597..6e63ba4788e1 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -933,8 +933,7 @@  static void nocb_cb_wait(struct rcu_data *rdp)
 		swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
 						    nocb_cb_wait_cond(rdp));
 
-		// VVV Ensure CB invocation follows _sleep test.
-		if (smp_load_acquire(&rdp->nocb_cb_sleep)) { // ^^^
+		if (READ_ONCE(rdp->nocb_cb_sleep)) {
 			WARN_ON(signal_pending(current));
 			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
 		}