
[08/12] rcu: Cleanup RCU urgency state for offline CPU

Message ID 20220620222032.3839547-8-paulmck@kernel.org (mailing list archive)
State Accepted
Commit fff58354cb66dd75bf411d48e0ec5ee85ec45d7f
Headers show
Series Miscellaneous fixes for v5.20 | expand

Commit Message

Paul E. McKenney June 20, 2022, 10:20 p.m. UTC
From: Zqiang <qiang1.zhang@intel.com>

When a CPU is slow to provide a quiescent state for a given grace
period, RCU takes steps to encourage that CPU to get with the
quiescent-state program in a more timely fashion.  These steps
include these flags in the rcu_data structure:

1.	->rcu_urgent_qs, which causes the scheduling-clock interrupt to
	request an otherwise pointless context switch from the scheduler.

2.	->rcu_need_heavy_qs, which causes both cond_resched() and RCU's
	context-switch hook to do an immediate momentary quiescent state.

3.	->rcu_forced_tick, which causes the scheduler-clock tick to
	be enabled even on nohz_full CPUs with only one runnable task.

These flags are of course cleared once the corresponding CPU has passed
through a quiescent state, unless that quiescent state is the CPU going
offline.  In that case, when the CPU comes back online, it will
needlessly consume additional CPU time and incur additional latency,
which constitutes a minor but very real performance bug.

This commit therefore adds the call to rcu_disable_urgency_upon_qs()
that clears these flags to the CPU-hotplug offlining code path.

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Neeraj Upadhyay June 21, 2022, 7:03 a.m. UTC | #1
On 6/21/2022 3:50 AM, Paul E. McKenney wrote:
> From: Zqiang <qiang1.zhang@intel.com>
> 
> When a CPU is slow to provide a quiescent state for a given grace
> period, RCU takes steps to encourage that CPU to get with the
> quiescent-state program in a more timely fashion.  These steps
> include these flags in the rcu_data structure:
> 
> 1.	->rcu_urgent_qs, which causes the scheduling-clock interrupt to
> 	request an otherwise pointless context switch from the scheduler.
> 
> 2.	->rcu_need_heavy_qs, which causes both cond_resched() and RCU's
> 	context-switch hook to do an immediate momentary quiscent state.
> 
> 3.	->rcu_need_heavy_qs, which causes the scheduler-clock tick to

nit: s/->rcu_need_heavy_qs/->rcu_forced_tick/ ?

> 	be enabled even on nohz_full CPUs with only one runnable task.
> 
> These flags are of course cleared once the corresponding CPU has passed
> through a quiescent state.  Unless that quiescent state is the CPU
> going offline, which means that when the CPU comes back online, it will
> needlessly consume additional CPU time and incur additional latency,
> which constitutes a minor but very real performance bug.
> 
> This commit therefore adds the call to rcu_disable_urgency_upon_qs()
> that clears these flags to the CPU-hotplug offlining code path.
> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---


Reviewed-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>

Thanks
Neeraj

>   kernel/rcu/tree.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f4a37f2032664..5445b19b48408 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4446,6 +4446,7 @@ void rcu_report_dead(unsigned int cpu)
>   	rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
>   	if (rnp->qsmask & mask) { /* RCU waiting on outgoing CPU? */
>   		/* Report quiescent state -before- changing ->qsmaskinitnext! */
> +		rcu_disable_urgency_upon_qs(rdp);
>   		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
>   		raw_spin_lock_irqsave_rcu_node(rnp, flags);
>   	}
Paul E. McKenney June 21, 2022, 10:24 p.m. UTC | #2
On Tue, Jun 21, 2022 at 12:33:12PM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 6/21/2022 3:50 AM, Paul E. McKenney wrote:
> > From: Zqiang <qiang1.zhang@intel.com>
> > 
> > When a CPU is slow to provide a quiescent state for a given grace
> > period, RCU takes steps to encourage that CPU to get with the
> > quiescent-state program in a more timely fashion.  These steps
> > include these flags in the rcu_data structure:
> > 
> > 1.	->rcu_urgent_qs, which causes the scheduling-clock interrupt to
> > 	request an otherwise pointless context switch from the scheduler.
> > 
> > 2.	->rcu_need_heavy_qs, which causes both cond_resched() and RCU's
> > 	context-switch hook to do an immediate momentary quiscent state.
> > 
> > 3.	->rcu_need_heavy_qs, which causes the scheduler-clock tick to
> 
> nit: s/->rcu_need_heavy_qs/->rcu_forced_tick/ ?

This one got lost in the shuffle, so I will apply it on my next rebase.

> > 	be enabled even on nohz_full CPUs with only one runnable task.
> > 
> > These flags are of course cleared once the corresponding CPU has passed
> > through a quiescent state.  Unless that quiescent state is the CPU
> > going offline, which means that when the CPU comes back online, it will
> > needlessly consume additional CPU time and incur additional latency,
> > which constitutes a minor but very real performance bug.
> > 
> > This commit therefore adds the call to rcu_disable_urgency_upon_qs()
> > that clears these flags to the CPU-hotplug offlining code path.
> > 
> > Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> 
> 
> Reviewed-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>

And again, thank you!

							Thanx, Paul

> Thanks
> Neeraj
> 
> >   kernel/rcu/tree.c | 1 +
> >   1 file changed, 1 insertion(+)
> > 
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index f4a37f2032664..5445b19b48408 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -4446,6 +4446,7 @@ void rcu_report_dead(unsigned int cpu)
> >   	rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
> >   	if (rnp->qsmask & mask) { /* RCU waiting on outgoing CPU? */
> >   		/* Report quiescent state -before- changing ->qsmaskinitnext! */
> > +		rcu_disable_urgency_upon_qs(rdp);
> >   		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
> >   		raw_spin_lock_irqsave_rcu_node(rnp, flags);
> >   	}

Patch

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f4a37f2032664..5445b19b48408 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4446,6 +4446,7 @@ void rcu_report_dead(unsigned int cpu)
 	rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
 	if (rnp->qsmask & mask) { /* RCU waiting on outgoing CPU? */
 		/* Report quiescent state -before- changing ->qsmaskinitnext! */
+		rcu_disable_urgency_upon_qs(rdp);
 		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}