Message ID | 20250324170156.469763-3-joelagnelf@nvidia.com (mailing list archive)
---|---
State | New
Series | [1/3] rcu: Replace magic number with meaningful constant in rcu_seq_done_exact()
On Mon, Mar 24, 2025 at 01:01:54PM -0400, Joel Fernandes wrote:
> The previous patch improved the rcu_seq_done_exact() function by adding
> a meaningful constant for the guardband.
>
> Ensure that this is working for the future by a quick check during
> rcu_gp_init().
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>

This is a good test for the guardband being way too short.

Are there other tests that should be run, possibly on a separate gp_seq
used only for testing?  Should the test below be under CONFIG_PROVE_RCU?

							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 659f83e71048..29ddbcbea25e 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1798,6 +1798,7 @@ static noinline_for_stack bool rcu_gp_init(void)
>  	struct rcu_data *rdp;
>  	struct rcu_node *rnp = rcu_get_root();
>  	bool start_new_poll;
> +	unsigned long old_gp_seq;
>
>  	WRITE_ONCE(rcu_state.gp_activity, jiffies);
>  	raw_spin_lock_irq_rcu_node(rnp);
> @@ -1825,7 +1826,11 @@ static noinline_for_stack bool rcu_gp_init(void)
>  	 */
>  	start_new_poll = rcu_sr_normal_gp_init();
>  	/* Record GP times before starting GP, hence rcu_seq_start(). */
> +	old_gp_seq = rcu_state.gp_seq;
>  	rcu_seq_start(&rcu_state.gp_seq);
> +	/* Ensure that rcu_seq_done_exact() guardband doesn't give false positives. */
> +	WARN_ON_ONCE(rcu_seq_done_exact(&old_gp_seq, rcu_seq_snap(&rcu_state.gp_seq)));
> +
>  	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
>  	trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start"));
>  	rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap);
> --
> 2.43.0
>
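For reference, one way the new check could be gated on CONFIG_PROVE_RCU, as asked above, is to wrap it in IS_ENABLED(). The snippet below is only an illustrative sketch of that option, not code from the posted series:

	old_gp_seq = rcu_state.gp_seq;
	rcu_seq_start(&rcu_state.gp_seq);
	/* Sketch only: run the debug check on CONFIG_PROVE_RCU kernels. */
	if (IS_ENABLED(CONFIG_PROVE_RCU)) {
		/*
		 * A snapshot taken for the grace period that just started
		 * must not read as "done" against the pre-start counter
		 * value; if it does, the rcu_seq_done_exact() guardband
		 * is too narrow.
		 */
		WARN_ON_ONCE(rcu_seq_done_exact(&old_gp_seq,
						rcu_seq_snap(&rcu_state.gp_seq)));
	}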
> On Mar 26, 2025, at 6:36 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Mon, Mar 24, 2025 at 01:01:54PM -0400, Joel Fernandes wrote:
>> The previous patch improved the rcu_seq_done_exact() function by adding
>> a meaningful constant for the guardband.
>>
>> Ensure that this is working for the future by a quick check during
>> rcu_gp_init().
>>
>> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
>
> This is a good test for the guardband being way too short.

Thanks. Let me know if I could add your review tag!

>
> Are there other tests that should be run, possibly on a separate gp_seq
> used only for testing? Should the test below be under CONFIG_PROVE_RCU?

Yes, I could move it to PROVE_RCU and it should be sufficient for testing.

The other test I was working on is to force the counter wrapping and hence
gpwrap, which is related.

Maybe we could also do some testing around false negatives not happening
too often (for example with rcu_seq_done()).

I will add more tests if I come across use cases.

Thanks!

Joel

>
> Thanx, Paul
>
>> ---
>> kernel/rcu/tree.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> index 659f83e71048..29ddbcbea25e 100644
>> --- a/kernel/rcu/tree.c
>> +++ b/kernel/rcu/tree.c
>> @@ -1798,6 +1798,7 @@ static noinline_for_stack bool rcu_gp_init(void)
>>  	struct rcu_data *rdp;
>>  	struct rcu_node *rnp = rcu_get_root();
>>  	bool start_new_poll;
>> +	unsigned long old_gp_seq;
>>
>>  	WRITE_ONCE(rcu_state.gp_activity, jiffies);
>>  	raw_spin_lock_irq_rcu_node(rnp);
>> @@ -1825,7 +1826,11 @@ static noinline_for_stack bool rcu_gp_init(void)
>>  	 */
>>  	start_new_poll = rcu_sr_normal_gp_init();
>>  	/* Record GP times before starting GP, hence rcu_seq_start(). */
>> +	old_gp_seq = rcu_state.gp_seq;
>>  	rcu_seq_start(&rcu_state.gp_seq);
>> +	/* Ensure that rcu_seq_done_exact() guardband doesn't give false positives. */
>> +	WARN_ON_ONCE(rcu_seq_done_exact(&old_gp_seq, rcu_seq_snap(&rcu_state.gp_seq)));
>> +
>>  	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
>>  	trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start"));
>>  	rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap);
>> --
>> 2.43.0
>>
On Wed, Mar 26, 2025 at 10:50:13PM +0000, Joel Fernandes wrote:
> > On Mar 26, 2025, at 6:36 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Mon, Mar 24, 2025 at 01:01:54PM -0400, Joel Fernandes wrote:
> >> The previous patch improved the rcu_seq_done_exact() function by adding
> >> a meaningful constant for the guardband.
> >>
> >> Ensure that this is working for the future by a quick check during
> >> rcu_gp_init().
> >>
> >> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> >
> > This is a good test for the guardband being way too short.
>
> Thanks. Let me know if I could add your review tag!
>
> > Are there other tests that should be run, possibly on a separate gp_seq
> > used only for testing? Should the test below be under CONFIG_PROVE_RCU?
>
> Yes, I could move it to PROVE_RCU and it should be sufficient for testing.
>
> The other test I was working on is to force the counter wrapping and hence
> gpwrap, which is related.

Very good on both counts.

> Maybe we could also do some testing around false negatives not happening
> too often (for example with rcu_seq_done()).
>
> I will add more tests if I come across use cases.

Keep the counter from just after the start (or just before the end) of the
previous grace period and verify that it also has not ended just after the
start of the current grace period?

							Thanx, Paul

> Thanks!
>
> Joel
>
> >
> > Thanx, Paul
> >
> >> ---
> >> kernel/rcu/tree.c | 5 +++++
> >> 1 file changed, 5 insertions(+)
> >>
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index 659f83e71048..29ddbcbea25e 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -1798,6 +1798,7 @@ static noinline_for_stack bool rcu_gp_init(void)
> >>  	struct rcu_data *rdp;
> >>  	struct rcu_node *rnp = rcu_get_root();
> >>  	bool start_new_poll;
> >> +	unsigned long old_gp_seq;
> >>
> >>  	WRITE_ONCE(rcu_state.gp_activity, jiffies);
> >>  	raw_spin_lock_irq_rcu_node(rnp);
> >> @@ -1825,7 +1826,11 @@ static noinline_for_stack bool rcu_gp_init(void)
> >>  	 */
> >>  	start_new_poll = rcu_sr_normal_gp_init();
> >>  	/* Record GP times before starting GP, hence rcu_seq_start(). */
> >> +	old_gp_seq = rcu_state.gp_seq;
> >>  	rcu_seq_start(&rcu_state.gp_seq);
> >> +	/* Ensure that rcu_seq_done_exact() guardband doesn't give false positives. */
> >> +	WARN_ON_ONCE(rcu_seq_done_exact(&old_gp_seq, rcu_seq_snap(&rcu_state.gp_seq)));
> >> +
> >>  	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
> >>  	trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start"));
> >>  	rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap);
> >> --
> >> 2.43.0
> >>
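A rough sketch of the extra check suggested above might look like the following. The rcu_state field gp_seq_prev_start is hypothetical (not in the posted patch), and whether the current guardband width keeps this check quiet is exactly what such a test would determine:

	rcu_seq_start(&rcu_state.gp_seq);
	if (IS_ENABLED(CONFIG_PROVE_RCU)) {
		/*
		 * A fresh snapshot for the grace period that just started
		 * should not read as "done" even against a counter value
		 * captured just after the previous grace period started;
		 * this probes the guardband with a larger delta than the
		 * check in the posted patch.
		 */
		WARN_ON_ONCE(rcu_seq_done_exact(&rcu_state.gp_seq_prev_start,
						rcu_seq_snap(&rcu_state.gp_seq)));
	}
	/* Remember this grace period's start for the next iteration's check. */
	WRITE_ONCE(rcu_state.gp_seq_prev_start, rcu_state.gp_seq);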
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 659f83e71048..29ddbcbea25e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1798,6 +1798,7 @@ static noinline_for_stack bool rcu_gp_init(void)
 	struct rcu_data *rdp;
 	struct rcu_node *rnp = rcu_get_root();
 	bool start_new_poll;
+	unsigned long old_gp_seq;
 
 	WRITE_ONCE(rcu_state.gp_activity, jiffies);
 	raw_spin_lock_irq_rcu_node(rnp);
@@ -1825,7 +1826,11 @@ static noinline_for_stack bool rcu_gp_init(void)
 	 */
 	start_new_poll = rcu_sr_normal_gp_init();
 	/* Record GP times before starting GP, hence rcu_seq_start(). */
+	old_gp_seq = rcu_state.gp_seq;
 	rcu_seq_start(&rcu_state.gp_seq);
+	/* Ensure that rcu_seq_done_exact() guardband doesn't give false positives. */
+	WARN_ON_ONCE(rcu_seq_done_exact(&old_gp_seq, rcu_seq_snap(&rcu_state.gp_seq)));
+
 	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
 	trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start"));
 	rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap);
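To see why the new WARN_ON_ONCE() should stay quiet, it helps to work through the counter arithmetic. The stand-alone sketch below uses simplified userspace stand-ins for the rcu_seq_*() helpers (two low-order state bits, wrap-safe comparisons elided) and treats the guardband width as an example parameter, since the actual named constant comes from patch 1/3 and is not shown in this patch:

#include <stdbool.h>
#include <stdio.h>

#define RCU_SEQ_CTR_SHIFT	2
#define RCU_SEQ_STATE_MASK	((1UL << RCU_SEQ_CTR_SHIFT) - 1)

/* Simplified rcu_seq_snap(): round up to the counter value at which a
 * grace period starting now would have completed. */
static unsigned long seq_snap(unsigned long s)
{
	return (s + 2 * RCU_SEQ_STATE_MASK + 1) & ~RCU_SEQ_STATE_MASK;
}

/* Simplified rcu_seq_done_exact(): "done" if the counter has reached the
 * snapshot, or lags it by more than the guardband (wrap assumed). */
static bool seq_done_exact(unsigned long cur, unsigned long snap,
			   unsigned long guard)
{
	return cur >= snap || cur < snap - guard;
}

int main(void)
{
	unsigned long gp_seq = 400;		/* arbitrary idle value, state bits 0 */
	unsigned long old_gp_seq = gp_seq;	/* what the patch saves */
	unsigned long guard = 3 * RCU_SEQ_STATE_MASK + 1;	/* example width only */

	gp_seq++;				/* rcu_seq_start(): state -> 1 */

	/*
	 * The snapshot for the just-started grace period exceeds the
	 * pre-start value by 2 * RCU_SEQ_STATE_MASK + 2, so any guardband
	 * at least that wide keeps the check quiet.
	 */
	printf("delta=%lu false_positive=%d\n",
	       seq_snap(gp_seq) - old_gp_seq,
	       seq_done_exact(old_gp_seq, seq_snap(gp_seq), guard));
	return 0;
}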
The previous patch improved the rcu_seq_done_exact() function by adding
a meaningful constant for the guardband.

Ensure that this is working for the future by a quick check during
rcu_gp_init().

Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
 kernel/rcu/tree.c | 5 +++++
 1 file changed, 5 insertions(+)