| Message ID | 20230117021955.1967316-1-qiang1.zhang@intel.com |
|---|---|
| State | Superseded |
| Series | rcu: Remove impossible wakeup rcu GP kthread action from rcu_report_qs_rdp() |
On Tue, Jan 17, 2023 at 10:19:55AM +0800, Zqiang wrote:
> When rcu_report_qs_rdp() is invoked and the current CPU's rcu_data
> structure's ->grpmask bit has not yet been cleared from the
> corresponding rcu_node structure's ->qsmask, the quiescent state is
> then cleared and reported. At that point the current grace period
> cannot have ended, that is, rcu_gp_in_progress() still returns true,
> so for a non-offloaded rdp a call to rcu_accelerate_cbs() cannot
> return true.
>
> This commit therefore removes the impossible rcu_gp_kthread_wake()
> call.
>
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>  kernel/rcu/tree.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index b2c204529478..477eb1a374e5 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1956,7 +1956,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
>  {
>          unsigned long flags;
>          unsigned long mask;
> -        bool needwake = false;
>          bool needacc = false;
>          struct rcu_node *rnp;
>
> @@ -1988,7 +1987,7 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
>           * NOCB kthreads have their own way to deal with that...
>           */
>          if (!rcu_rdp_is_offloaded(rdp)) {
> -                needwake = rcu_accelerate_cbs(rnp, rdp);
> +                rcu_accelerate_cbs(rnp, rdp);

If it is impossible, we should use WARN_ON_ONCE() or similar.  Just
in case the system disagrees on the impossibility.  ;-)

                                                        Thanx, Paul

>          } else if (!rcu_segcblist_completely_offloaded(&rdp->cblist)) {
>                  /*
>                   * ...but NOCB kthreads may miss or delay callbacks acceleration
> @@ -2000,8 +1999,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
>          rcu_disable_urgency_upon_qs(rdp);
>          rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
>          /* ^^^ Released rnp->lock */
> -        if (needwake)
> -                rcu_gp_kthread_wake();
>
>          if (needacc) {
>                  rcu_nocb_lock_irqsave(rdp, flags);
> --
> 2.25.1
>
On Tue, Jan 17, 2023 at 10:19:55AM +0800, Zqiang wrote:
> When rcu_report_qs_rdp() is invoked and the current CPU's rcu_data
> structure's ->grpmask bit has not yet been cleared from the
> corresponding rcu_node structure's ->qsmask, the quiescent state is
> then cleared and reported. At that point the current grace period
> cannot have ended, that is, rcu_gp_in_progress() still returns true,
> so for a non-offloaded rdp a call to rcu_accelerate_cbs() cannot
> return true.
>
> This commit therefore removes the impossible rcu_gp_kthread_wake()
> call.
>
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
>  kernel/rcu/tree.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index b2c204529478..477eb1a374e5 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1956,7 +1956,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
>  {
>          unsigned long flags;
>          unsigned long mask;
> -        bool needwake = false;
>          bool needacc = false;
>          struct rcu_node *rnp;
>
> @@ -1988,7 +1987,7 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
>           * NOCB kthreads have their own way to deal with that...
>           */
>          if (!rcu_rdp_is_offloaded(rdp)) {
> -                needwake = rcu_accelerate_cbs(rnp, rdp);
> +                rcu_accelerate_cbs(rnp, rdp);
>
>If it is impossible, we should use WARN_ON_ONCE() or similar.  Just
>in case the system disagrees on the impossibility.  ;-)

Thanks for the suggestion, I will resend a v2.
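For illustration, if Paul's WARN_ON_ONCE() suggestion is adopted, the non-offloaded branch of rcu_report_qs_rdp() might end up looking roughly like the sketch below. This is only a sketch of the likely direction for v2, not the actual resent patch.

```c
	if (!rcu_rdp_is_offloaded(rdp)) {
		/*
		 * The current grace period cannot have ended here, so
		 * rcu_accelerate_cbs() should never request a GP-kthread
		 * wakeup.  Complain once if it ever does, but do not wake.
		 */
		WARN_ON_ONCE(rcu_accelerate_cbs(rnp, rdp));
	}
```

With this shape, the needwake local variable and the trailing rcu_gp_kthread_wake() call can still be removed as in v1, while the WARN_ON_ONCE() both documents and runtime-checks the "impossible" assumption.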
```diff
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b2c204529478..477eb1a374e5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1956,7 +1956,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
 {
         unsigned long flags;
         unsigned long mask;
-        bool needwake = false;
         bool needacc = false;
         struct rcu_node *rnp;
 
@@ -1988,7 +1987,7 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
          * NOCB kthreads have their own way to deal with that...
          */
         if (!rcu_rdp_is_offloaded(rdp)) {
-                needwake = rcu_accelerate_cbs(rnp, rdp);
+                rcu_accelerate_cbs(rnp, rdp);
         } else if (!rcu_segcblist_completely_offloaded(&rdp->cblist)) {
                 /*
                  * ...but NOCB kthreads may miss or delay callbacks acceleration
@@ -2000,8 +1999,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
         rcu_disable_urgency_upon_qs(rdp);
         rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
         /* ^^^ Released rnp->lock */
-        if (needwake)
-                rcu_gp_kthread_wake();
 
         if (needacc) {
                 rcu_nocb_lock_irqsave(rdp, flags);
```
When rcu_report_qs_rdp() is invoked and the current CPU's rcu_data
structure's ->grpmask bit has not yet been cleared from the
corresponding rcu_node structure's ->qsmask, the quiescent state is
then cleared and reported. At that point the current grace period
cannot have ended, that is, rcu_gp_in_progress() still returns true,
so for a non-offloaded rdp a call to rcu_accelerate_cbs() cannot
return true.

This commit therefore removes the impossible rcu_gp_kthread_wake()
call.

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
 kernel/rcu/tree.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
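The "cannot return true" claim above rests on how the wakeup decision is made. Roughly, and heavily abridged (the sketch below omits locking, tracepoints, the funnel walk up the rcu_node tree, and several early exits; the _sketch suffix marks that this is not the actual kernel source), rcu_accelerate_cbs() returns whatever rcu_start_this_gp() returns, and rcu_start_this_gp() only asks for a GP-kthread wakeup when it actually has to start a new grace period, which it declines to do while one is already in progress:

```c
/* Abridged sketch of the wakeup decision; not the actual kernel source. */
static bool rcu_start_this_gp_sketch(struct rcu_node *rnp, struct rcu_data *rdp,
                                     unsigned long gp_seq_req)
{
        /* A grace period is already under way: nothing to start, no wakeup. */
        if (rcu_gp_in_progress())
                return false;

        /* Otherwise a new grace period must be started... */
        return true;    /* ...and the caller must wake the GP kthread. */
}

static bool rcu_accelerate_cbs_sketch(struct rcu_node *rnp, struct rcu_data *rdp)
{
        unsigned long gp_seq_req = rcu_seq_snap(&rcu_state.gp_seq);

        /* Assign any pending callbacks to the grace period they must wait for. */
        if (!rcu_segcblist_accelerate(&rdp->cblist, gp_seq_req))
                return false;

        /* Report whether the GP kthread needs to be awakened. */
        return rcu_start_this_gp_sketch(rnp, rdp, gp_seq_req);
}
```

Since rcu_report_qs_rdp() runs while the grace period it is reporting a quiescent state for is still in progress, the rcu_gp_in_progress() test in the sketch takes the early-return path, which is why the removed wakeup can never fire for a non-offloaded rdp.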