[3/3] rcu: Comment on the extraneous delta test on rcu_seq_done_exact()

Message ID 20250324170156.469763-4-joelagnelf@nvidia.com (mailing list archive)
State New
Series [1/3] rcu: Replace magic number with meaningful constant in rcu_seq_done_exact()

Commit Message

Joel Fernandes March 24, 2025, 5:01 p.m. UTC
From: Frederic Weisbecker <frederic@kernel.org>

The numbers used in rcu_seq_done_exact() lack an explanation of their
magic, especially after commit:

    85aad7cc4178 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")

which reported a subtle issue where a new GP sequence snapshot was taken
on the root node state while a grace period had already been started and
reflected on the global state sequence, but not yet on the root node
sequence. This made a polling user wait on the wrong, already-started
grace period, one that would ignore freshly onlined CPUs.

The fix involved taking the snapshot on the global state sequence and
waiting on the root node sequence. And since a grace period is first
started on the global state and only afterward reflected on the root
node, a snapshot taken on the global state sequence might be two full
grace periods ahead of the root node as in the following example:

rnp->gp_seq = rcu_state.gp_seq = 0

    CPU 0                                           CPU 1
    -----                                           -----
    // rcu_state.gp_seq = 1
    rcu_seq_start(&rcu_state.gp_seq)
                                                    // snap = 8
                                                    snap = rcu_seq_snap(&rcu_state.gp_seq)
                                                    // Two full GP differences
                                                    rcu_seq_done_exact(&rnp->gp_seq, snap)
    // rnp->gp_seq = 1
    WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
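
For reference, a minimal standalone sketch of the arithmetic behind the
numbers above (the SEQ_* names and helpers are simplified stand-ins assumed
here, not the kernel's actual RCU_SEQ_* macros): the low two bits of gp_seq
hold the grace-period state, so one full grace period advances the counter
by 4, and rcu_seq_start() followed by a snapshot of the global sequence
yields snap = 8 while the root node still reads 0 or 1.

    #include <stdio.h>

    /* Simplified model of the gp_seq encoding (assumed names). */
    #define SEQ_STATE_MASK  0x3UL
    #define SEQ_PER_GP      (SEQ_STATE_MASK + 1)    /* 4: one full GP */

    /* Model of rcu_seq_start(): mark a grace period as in progress. */
    static unsigned long seq_start(unsigned long seq)
    {
            return seq + 1;                         /* 0 -> 1 */
    }

    /* Model of rcu_seq_snap(): value that, once reached, implies a full GP. */
    static unsigned long seq_snap(unsigned long seq)
    {
            return (seq + 2 * SEQ_STATE_MASK + 1) & ~SEQ_STATE_MASK;
    }

    int main(void)
    {
            unsigned long rcu_state_gp_seq = 0, rnp_gp_seq = 0;
            unsigned long snap;

            rcu_state_gp_seq = seq_start(rcu_state_gp_seq); /* global: 1 */
            snap = seq_snap(rcu_state_gp_seq);              /* (1 + 7) & ~3 = 8 */

            /*
             * The root node may still show 0, so the snapshot sits two
             * full grace periods ahead of rnp->gp_seq, as in the timeline.
             */
            printf("snap=%lu rnp->gp_seq=%lu delta=%lu GPs\n",
                   snap, rnp_gp_seq, (snap - rnp_gp_seq) / SEQ_PER_GP);
            return 0;
    }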

Add a comment documenting those expectations and clarifying the magic
within the relevant function.

Note that the issue arises mainly with the use of rcu_seq_done_exact(),
which has a much tighter guard band (of 2 GPs) so that the API's
false-negative window during wraparound is limited to just 2 GPs.
rcu_seq_done() does not have such strict requirements; however, its large
false-negative window of ULONG_MAX/2 is not ideal for the polling API.
This also means care is needed to ensure the guard band is large enough
to avoid the example scenario described above, which the warning added
in an earlier patch checks for.
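
To make the guard band concrete, a hedged sketch of the two checks using the
same simplified SEQ_* model (the guard band is written here as exactly two
full GPs to match the timeline; the kernel spells it via its RCU_SEQ_*
constants, which may differ in form):

    /* Wrap-safe comparisons, in the spirit of ULONG_CMP_GE()/ULONG_CMP_LT(). */
    #define SEQ_STATE_MASK  0x3UL
    #define SEQ_PER_GP      (SEQ_STATE_MASK + 1)        /* 4, as above */
    #define CMP_GE(a, b)    ((long)((a) - (b)) >= 0)
    #define CMP_LT(a, b)    ((long)((a) - (b)) < 0)

    /*
     * Loose check in the style of rcu_seq_done(): anything up to
     * ULONG_MAX/2 behind the snapshot counts as "not done", hence the
     * huge false-negative window after a counter wrap.
     */
    static int seq_done(unsigned long cur, unsigned long snap)
    {
            return CMP_GE(cur, snap);
    }

    /*
     * Exact-style check with a two-full-GP guard band: "done" if cur has
     * reached the snapshot, or if cur is so far behind that it cannot be
     * one of the (at most two) GPs the snapshot may legitimately lead by.
     * With the timeline above (cur = 0, snap = 8), any guard band smaller
     * than two GPs would wrongly report the grace period as complete.
     */
    static int seq_done_exact(unsigned long cur, unsigned long snap)
    {
            return CMP_GE(cur, snap) || CMP_LT(cur, snap - 2 * SEQ_PER_GP);
    }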

[ Comment wordsmithing by Joel ]

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
 kernel/rcu/rcu.h | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Paul E. McKenney March 26, 2025, 10:37 p.m. UTC | #1
On Mon, Mar 24, 2025 at 01:01:55PM -0400, Joel Fernandes wrote:
> From: Frederic Weisbecker <frederic@kernel.org>
> 
> The numbers used in rcu_seq_done_exact() lack an explanation of their
> magic, especially after commit:
> 
>     85aad7cc4178 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
> 
> which reported a subtle issue where a new GP sequence snapshot was taken
> on the root node state while a grace period had already been started and
> reflected on the global state sequence, but not yet on the root node
> sequence. This made a polling user wait on the wrong, already-started
> grace period, one that would ignore freshly onlined CPUs.
> 
> The fix involved taking the snapshot on the global state sequence and
> waiting on the root node sequence. And since a grace period is first
> started on the global state and only afterward reflected on the root
> node, a snapshot taken on the global state sequence might be two full
> grace periods ahead of the root node as in the following example:
> 
> rnp->gp_seq = rcu_state.gp_seq = 0
> 
>     CPU 0                                           CPU 1
>     -----                                           -----
>     // rcu_state.gp_seq = 1
>     rcu_seq_start(&rcu_state.gp_seq)
>                                                     // snap = 8
>                                                     snap = rcu_seq_snap(&rcu_state.gp_seq)
>                                                     // Two full GP differences
>                                                     rcu_seq_done_exact(&rnp->gp_seq, snap)
>     // rnp->gp_seq = 1
>     WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
> 
> Add a comment documenting those expectations and clarifying the magic
> within the relevant function.
> 
> Note that the issue arises mainly with the use of rcu_seq_done_exact(),
> which has a much tighter guard band (of 2 GPs) so that the API's
> false-negative window during wraparound is limited to just 2 GPs.
> rcu_seq_done() does not have such strict requirements; however, its large
> false-negative window of ULONG_MAX/2 is not ideal for the polling API.
> This also means care is needed to ensure the guard band is large enough
> to avoid the example scenario described above, which the warning added
> in an earlier patch checks for.
> 
> [ Comment wordsmithing by Joel ]
> 
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Reviewed-by: Paul E. McKenney <paulmck@kernel.org>

Looks good, and I stand by my Reviewed-by.  ;-)

							Thanx, Paul

> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
>  kernel/rcu/rcu.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> index 5e1ee570bb27..db63f330768c 100644
> --- a/kernel/rcu/rcu.h
> +++ b/kernel/rcu/rcu.h
> @@ -160,6 +160,15 @@ static inline bool rcu_seq_done(unsigned long *sp, unsigned long s)
>   * Given a snapshot from rcu_seq_snap(), determine whether or not a
>   * full update-side operation has occurred, but do not allow the
>   * (ULONG_MAX / 2) safety-factor/guard-band.
> + *
> + * The token returned by get_state_synchronize_rcu_full() is based on
> + * rcu_state.gp_seq but it is tested in poll_state_synchronize_rcu_full()
> + * against the root rnp->gp_seq. Since rcu_seq_start() is first called
> + * on rcu_state.gp_seq and only later reflected on the root rnp->gp_seq,
> + * it is possible that rcu_seq_snap(rcu_state.gp_seq) returns 2 full grace
> + * periods ahead of the root rnp->gp_seq. To prevent false-positives with the
> + * full polling API that a wrap around instantly completed the GP, when nothing
> + * like that happened, adjust for the 2 GPs in the ULONG_CMP_LT().
>   */
>  static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
>  {
> -- 
> 2.43.0
>
Joel Fernandes March 26, 2025, 10:51 p.m. UTC | #2
> On Mar 26, 2025, at 6:37 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Mon, Mar 24, 2025 at 01:01:55PM -0400, Joel Fernandes wrote:
>> From: Frederic Weisbecker <frederic@kernel.org>
>> 
>> The numbers used in rcu_seq_done_exact() lack an explanation of their
>> magic, especially after commit:
>> 
>>    85aad7cc4178 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
>> 
>> which reported a subtle issue where a new GP sequence snapshot was taken
>> on the root node state while a grace period had already been started and
>> reflected on the global state sequence, but not yet on the root node
>> sequence. This made a polling user wait on the wrong, already-started
>> grace period, one that would ignore freshly onlined CPUs.
>> 
>> The fix involved taking the snapshot on the global state sequence and
>> waiting on the root node sequence. And since a grace period is first
>> started on the global state and only afterward reflected on the root
>> node, a snapshot taken on the global state sequence might be two full
>> grace periods ahead of the root node as in the following example:
>> 
>> rnp->gp_seq = rcu_state.gp_seq = 0
>> 
>>    CPU 0                                           CPU 1
>>    -----                                           -----
>>    // rcu_state.gp_seq = 1
>>    rcu_seq_start(&rcu_state.gp_seq)
>>                                                    // snap = 8
>>                                                    snap = rcu_seq_snap(&rcu_state.gp_seq)
>>                                                    // Two full GP differences
>>                                                    rcu_seq_done_exact(&rnp->gp_seq, snap)
>>    // rnp->gp_seq = 1
>>    WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
>> 
>> Add a comment documenting those expectations and clarifying the magic
>> within the relevant function.
>> 
>> Note that the issue arises mainly with the use of rcu_seq_done_exact(),
>> which has a much tighter guard band (of 2 GPs) so that the API's
>> false-negative window during wraparound is limited to just 2 GPs.
>> rcu_seq_done() does not have such strict requirements; however, its large
>> false-negative window of ULONG_MAX/2 is not ideal for the polling API.
>> This also means care is needed to ensure the guard band is large enough
>> to avoid the example scenario described above, which the warning added
>> in an earlier patch checks for.
>> 
>> [ Comment wordsmithing by Joel ]
>> 
>> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
>> Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
> 
> Looks good, and I stand by my Reviewed-by.  ;-)

Thanks, I will queue this one for 6.16.

- Joel


> 
>                            Thanx, Paul
> 
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>> ---
>> kernel/rcu/rcu.h | 9 +++++++++
>> 1 file changed, 9 insertions(+)
>> 
>> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
>> index 5e1ee570bb27..db63f330768c 100644
>> --- a/kernel/rcu/rcu.h
>> +++ b/kernel/rcu/rcu.h
>> @@ -160,6 +160,15 @@ static inline bool rcu_seq_done(unsigned long *sp, unsigned long s)
>>  * Given a snapshot from rcu_seq_snap(), determine whether or not a
>>  * full update-side operation has occurred, but do not allow the
>>  * (ULONG_MAX / 2) safety-factor/guard-band.
>> + *
>> + * The token returned by get_state_synchronize_rcu_full() is based on
>> + * rcu_state.gp_seq but it is tested in poll_state_synchronize_rcu_full()
>> + * against the root rnp->gp_seq. Since rcu_seq_start() is first called
>> + * on rcu_state.gp_seq and only later reflected on the root rnp->gp_seq,
>> + * it is possible that rcu_seq_snap(rcu_state.gp_seq) returns 2 full grace
>> + * periods ahead of the root rnp->gp_seq. To prevent false-positives with the
>> + * full polling API that a wrap around instantly completed the GP, when nothing
>> + * like that happened, adjust for the 2 GPs in the ULONG_CMP_LT().
>>  */
>> static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
>> {
>> --
>> 2.43.0
>>

Patch

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 5e1ee570bb27..db63f330768c 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -160,6 +160,15 @@  static inline bool rcu_seq_done(unsigned long *sp, unsigned long s)
  * Given a snapshot from rcu_seq_snap(), determine whether or not a
  * full update-side operation has occurred, but do not allow the
  * (ULONG_MAX / 2) safety-factor/guard-band.
+ *
+ * The token returned by get_state_synchronize_rcu_full() is based on
+ * rcu_state.gp_seq but it is tested in poll_state_synchronize_rcu_full()
+ * against the root rnp->gp_seq. Since rcu_seq_start() is first called
+ * on rcu_state.gp_seq and only later reflected on the root rnp->gp_seq,
+ * it is possible that rcu_seq_snap(rcu_state.gp_seq) returns 2 full grace
+ * periods ahead of the root rnp->gp_seq. To prevent false-positives with the
+ * full polling API that a wrap around instantly completed the GP, when nothing
+ * like that happened, adjust for the 2 GPs in the ULONG_CMP_LT().
  */
 static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
 {