| Message ID | 20220917164200.511783-3-joel@joelfernandes.org (mailing list archive) |
| --- | --- |
| State | Accepted |
| Commit | 467e9e2ff121ec538f78065cba2608da32774f7f |
| Series | Preparatory patches borrowed from lazy rcu v5 |
On Sat, Sep 17, 2022 at 04:41:59PM +0000, Joel Fernandes (Google) wrote:
> When the bypass cblist gets too big or its timeout has occurred, it is
> flushed into the main cblist. However, the bypass timer is still running
> and the behavior is that it would eventually expire and wake the GP
> thread.
>
> Since we are going to use the bypass cblist for lazy CBs, do the wakeup
> soon as the flush for "too big or too long" bypass list happens.
> Otherwise, long delays can happen for callbacks which get promoted from
> lazy to non-lazy.
>
> This is a good thing to do anyway (regardless of future lazy patches),
> since it makes the behavior consistent with behavior of other code paths
> where flushing into the ->cblist makes the GP kthread into a
> non-sleeping state quickly.
>
> [ Frederic Weisbec: changes to not do wake up GP thread unless needed,
>   comment changes ].
>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>

Queued and pushed this and 1/3, thank you both!

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index 0a5f0ef41484..04c87f250e01 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -433,8 +433,9 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>  	if ((ncbs && j != READ_ONCE(rdp->nocb_bypass_first)) ||
>  	    ncbs >= qhimark) {
>  		rcu_nocb_lock(rdp);
> +		*was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
> +
>  		if (!rcu_nocb_flush_bypass(rdp, rhp, j)) {
> -			*was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
>  			if (*was_alldone)
>  				trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
>  						    TPS("FirstQ"));
> @@ -447,7 +448,12 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>  			rcu_advance_cbs_nowake(rdp->mynode, rdp);
>  			rdp->nocb_gp_adv_time = j;
>  		}
> -		rcu_nocb_unlock_irqrestore(rdp, flags);
> +
> +		// The flush succeeded and we moved CBs into the regular list.
> +		// Don't wait for the wake up timer as it may be too far ahead.
> +		// Wake up the GP thread now instead, if the cblist was empty.
> +		__call_rcu_nocb_wake(rdp, *was_alldone, flags);
> +
>  		return true; // Callback already enqueued.
>  	}
>
> --
> 2.37.3.968.ga6b4b080e4-goog
>
```diff
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 0a5f0ef41484..04c87f250e01 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -433,8 +433,9 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
 	if ((ncbs && j != READ_ONCE(rdp->nocb_bypass_first)) ||
 	    ncbs >= qhimark) {
 		rcu_nocb_lock(rdp);
+		*was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
+
 		if (!rcu_nocb_flush_bypass(rdp, rhp, j)) {
-			*was_alldone = !rcu_segcblist_pend_cbs(&rdp->cblist);
 			if (*was_alldone)
 				trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
 						    TPS("FirstQ"));
@@ -447,7 +448,12 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
 			rcu_advance_cbs_nowake(rdp->mynode, rdp);
 			rdp->nocb_gp_adv_time = j;
 		}
-		rcu_nocb_unlock_irqrestore(rdp, flags);
+
+		// The flush succeeded and we moved CBs into the regular list.
+		// Don't wait for the wake up timer as it may be too far ahead.
+		// Wake up the GP thread now instead, if the cblist was empty.
+		__call_rcu_nocb_wake(rdp, *was_alldone, flags);
+
 		return true; // Callback already enqueued.
 	}
```
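For readers who want the ordering argument in isolation, here is a deliberately simplified, userspace-only C sketch of the flush-then-wake sequence the patch establishes in rcu_nocb_try_bypass(). Everything named toy_* (the struct, helpers, and main()) is a hypothetical stand-in invented for illustration, not a kernel API; the real code also deals with locking, tracing, and grace-period bookkeeping that is omitted here. The two points it models are that *was_alldone is sampled before the flush moves bypass callbacks into the main ->cblist, and that the wakeup decision is made immediately after a successful flush instead of waiting for the bypass timer.

```c
/*
 * Toy model (not kernel code) of the patched flush path:
 * sample "was the main list empty?" BEFORE flushing the bypass list,
 * then use that snapshot to wake the GP kthread right away.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_rdp {
	int bypass_cbs;  /* callbacks parked on the bypass list */
	int main_cbs;    /* callbacks on the main ->cblist */
	bool gp_awake;   /* stand-in for the GP kthread being woken */
};

/* Move everything from the bypass list into the main list. */
static bool toy_flush_bypass(struct toy_rdp *rdp)
{
	rdp->main_cbs += rdp->bypass_cbs;
	rdp->bypass_cbs = 0;
	return true;
}

/* Wake the GP kthread only if the main list was empty beforehand. */
static void toy_call_rcu_nocb_wake(struct toy_rdp *rdp, bool was_alldone)
{
	if (was_alldone)
		rdp->gp_awake = true;
}

static void toy_try_bypass_flush(struct toy_rdp *rdp)
{
	/* Snapshot BEFORE the flush, as the patch now does. */
	bool was_alldone = (rdp->main_cbs == 0);

	if (!toy_flush_bypass(rdp))
		return;  /* flush failed; the real code takes a different path */

	/* Patched behavior: wake now rather than waiting for the bypass timer. */
	toy_call_rcu_nocb_wake(rdp, was_alldone);
}

int main(void)
{
	struct toy_rdp rdp = { .bypass_cbs = 5, .main_cbs = 0, .gp_awake = false };

	toy_try_bypass_flush(&rdp);
	printf("main_cbs=%d gp_awake=%d\n", rdp.main_cbs, rdp.gp_awake);
	return 0;
}
```

The snapshot has to be taken before the flush because, once the bypass callbacks have been merged in, rcu_segcblist_pend_cbs() would report pending work whether or not the list was empty beforehand, and the "first callback queued" wakeup decision would be lost.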