
sched/cpufreq: Align trace event behavior of fast switching

Message ID 20190807153340.11516-1-douglas.raillard@arm.com (mailing list archive)
State Mainlined, archived
Delegated to: Rafael Wysocki
Series: sched/cpufreq: Align trace event behavior of fast switching

Commit Message

Douglas RAILLARD Aug. 7, 2019, 3:33 p.m. UTC
Fast switching path only emits an event for the CPU of interest, whereas the
regular path emits an event for all the CPUs that had their frequency changed,
i.e. all the CPUs sharing the same policy.

With the current behavior, looking at cpu_frequency event for a given CPU that
is using the fast switching path will not give the correct frequency signal.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
---
 kernel/sched/cpufreq_schedutil.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
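
For context, the regular (notifier-based) switching path already broadcasts the event to every CPU covered by the policy. A simplified sketch of the CPUFREQ_POSTCHANGE handling in drivers/cpufreq/cpufreq.c (abridged and written from memory; the exact code differs between kernel versions):

	/*
	 * Sketch of cpufreq_notify_transition(), CPUFREQ_POSTCHANGE case:
	 * the regular path emits cpu_frequency for every CPU sharing the
	 * policy, not only for the CPU that drove the change.
	 */
	case CPUFREQ_POSTCHANGE:
		/* transition notifiers and debug output run here (elided) */
		for_each_cpu(cpu, policy->cpus)
			trace_cpu_frequency(freqs->new, cpu);

		policy->cur = freqs->new;
		break;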

Comments

Rafael J. Wysocki Aug. 7, 2019, 8:40 p.m. UTC | #1
On Wed, Aug 7, 2019 at 5:34 PM Douglas RAILLARD
<douglas.raillard@arm.com> wrote:
>
> Fast switching path only emits an event for the CPU of interest, whereas the
> regular path emits an event for all the CPUs that had their frequency changed,
> i.e. all the CPUs sharing the same policy.
>
> With the current behavior, looking at cpu_frequency event for a given CPU that
> is using the fast switching path will not give the correct frequency signal.

Do you actually have any systems where that is a problem?  If so, then
what are they?
Douglas RAILLARD Aug. 8, 2019, 4:18 p.m. UTC | #2
Hi Rafael,

On 8/7/19 9:40 PM, Rafael J. Wysocki wrote:
> On Wed, Aug 7, 2019 at 5:34 PM Douglas RAILLARD
> <douglas.raillard@arm.com> wrote:
>>
>> Fast switching path only emits an event for the CPU of interest, whereas the
>> regular path emits an event for all the CPUs that had their frequency changed,
>> i.e. all the CPUs sharing the same policy.
>>
>> With the current behavior, looking at cpu_frequency event for a given CPU that
>> is using the fast switching path will not give the correct frequency signal.
> 
> Do you actually have any systems where that is a problem?  If so, then
> what are they?
> 

That happens on the Google Pixel 3 smartphone, which uses the drivers/cpufreq/qcom-cpufreq-hw.c cpufreq driver; a mainline-tracking kernel branch for that device is available at [1].

[1] git clone https://git.linaro.org/people/amit.pundir/linux.git -b blueline-mainline-tracking

Thanks,
Douglas
Rafael J. Wysocki Aug. 26, 2019, 9:10 a.m. UTC | #3
On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote:
> Fast switching path only emits an event for the CPU of interest, whereas the
> regular path emits an event for all the CPUs that had their frequency changed,
> i.e. all the CPUs sharing the same policy.
> 
> With the current behavior, looking at cpu_frequency event for a given CPU that
> is using the fast switching path will not give the correct frequency signal.
> 
> Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
> ---
>  kernel/sched/cpufreq_schedutil.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 1f82ab108bab..975ccc3de807 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -153,6 +153,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
>  			      unsigned int next_freq)
>  {
>  	struct cpufreq_policy *policy = sg_policy->policy;
> +	int cpu;
>  
>  	if (!sugov_update_next_freq(sg_policy, time, next_freq))
>  		return;
> @@ -162,7 +163,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
>  		return;
>  
>  	policy->cur = next_freq;
> -	trace_cpu_frequency(next_freq, smp_processor_id());
> +
> +	if (trace_cpu_frequency_enabled()) {
> +		for_each_cpu(cpu, policy->cpus)
> +			trace_cpu_frequency(next_freq, cpu);
> +	}
>  }
>  
>  static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
> 

Peter, any comments here?
Peter Zijlstra Aug. 26, 2019, 9:40 a.m. UTC | #4
On Mon, Aug 26, 2019 at 11:10:52AM +0200, Rafael J. Wysocki wrote:
> On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote:
> > Fast switching path only emits an event for the CPU of interest, whereas the
> > regular path emits an event for all the CPUs that had their frequency changed,
> > i.e. all the CPUs sharing the same policy.
> > 
> > With the current behavior, looking at cpu_frequency event for a given CPU that
> > is using the fast switching path will not give the correct frequency signal.
> > 
> > Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 1f82ab108bab..975ccc3de807 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -153,6 +153,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
> >  			      unsigned int next_freq)
> >  {
> >  	struct cpufreq_policy *policy = sg_policy->policy;
> > +	int cpu;
> >  
> >  	if (!sugov_update_next_freq(sg_policy, time, next_freq))
> >  		return;
> > @@ -162,7 +163,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
> >  		return;
> >  
> >  	policy->cur = next_freq;
> > -	trace_cpu_frequency(next_freq, smp_processor_id());
> > +
> > +	if (trace_cpu_frequency_enabled()) {
> > +		for_each_cpu(cpu, policy->cpus)
> > +			trace_cpu_frequency(next_freq, cpu);
> > +	}
> >  }
> >  
> >  static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
> > 
> 
> Peter, any comments here?

I was thinking this would be a static map and dealing with it would be
something trivially done in post (or manually while reading), but sure,
whatever:

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Dietmar Eggemann Aug. 26, 2019, 9:51 a.m. UTC | #5
On 26/08/2019 11:40, Peter Zijlstra wrote:
> On Mon, Aug 26, 2019 at 11:10:52AM +0200, Rafael J. Wysocki wrote:
>> On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote:
>>> Fast switching path only emits an event for the CPU of interest, whereas the
>>> regular path emits an event for all the CPUs that had their frequency changed,
>>> i.e. all the CPUs sharing the same policy.
>>>
>>> With the current behavior, looking at cpu_frequency event for a given CPU that
>>> is using the fast switching path will not give the correct frequency signal.
>>>
>>> Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
>>> ---
>>>  kernel/sched/cpufreq_schedutil.c | 7 ++++++-
>>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
>>> index 1f82ab108bab..975ccc3de807 100644
>>> --- a/kernel/sched/cpufreq_schedutil.c
>>> +++ b/kernel/sched/cpufreq_schedutil.c
>>> @@ -153,6 +153,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
>>>  			      unsigned int next_freq)
>>>  {
>>>  	struct cpufreq_policy *policy = sg_policy->policy;
>>> +	int cpu;
>>>  
>>>  	if (!sugov_update_next_freq(sg_policy, time, next_freq))
>>>  		return;
>>> @@ -162,7 +163,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
>>>  		return;
>>>  
>>>  	policy->cur = next_freq;
>>> -	trace_cpu_frequency(next_freq, smp_processor_id());
>>> +
>>> +	if (trace_cpu_frequency_enabled()) {
>>> +		for_each_cpu(cpu, policy->cpus)
>>> +			trace_cpu_frequency(next_freq, cpu);
>>> +	}
>>>  }
>>>  
>>>  static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
>>>
>>
>> Peter, any comments here?
> 
> I was thinking this would be a static map and dealing with it would be
> something trivially done in post (or manually while reading), but sure,
> whatever:
> 
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

I think our EAS tooling expects the behavior of the non-fast-switching driver
(cpufreq.c, cpufreq_notify_transition(), CPUFREQ_POSTCHANGE). The Pixel 3 is
the first device with a fast-switching driver that we test on.

Not sure about the extra 'if trace_cpu_frequency_enabled()' check, but I
guess it doesn't hurt.
Peter Zijlstra Aug. 26, 2019, 11:24 a.m. UTC | #6
On Mon, Aug 26, 2019 at 11:51:17AM +0200, Dietmar Eggemann wrote:

> Not sure about the extra  'if trace_cpu_frequency_enabled()' but I guess
> it doesn't hurt.

Without that you do that for_each_cpu() iteration unconditionally, even
if the tracepoint is disabled.
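
trace_cpu_frequency_enabled() is the per-event helper generated for every tracepoint and is backed by a static key, so the check itself is essentially free; the cpumask walk is only paid for while the cpu_frequency event is actually being traced. A minimal sketch of the pattern used in the patch:

	/* Gate optional work behind the tracepoint's static key. */
	if (trace_cpu_frequency_enabled()) {	/* near-zero cost when tracing is off */
		for_each_cpu(cpu, policy->cpus)	/* walked only while tracing */
			trace_cpu_frequency(next_freq, cpu);
	}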
Dietmar Eggemann Aug. 26, 2019, 12:12 p.m. UTC | #7
On 26/08/2019 13:24, Peter Zijlstra wrote:
> On Mon, Aug 26, 2019 at 11:51:17AM +0200, Dietmar Eggemann wrote:
> 
>> Not sure about the extra  'if trace_cpu_frequency_enabled()' but I guess
>> it doesn't hurt.
> 
> Without that you do that for_each_cpu() iteration unconditionally, even
> if the tracepoint is disabled.

Makes sense. I'm wondering whether we want this in
cpufreq_notify_transition() (CPUFREQ_POSTCHANGE) for the non-fast-switching
drivers as well.
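
As an illustration only (not part of this series), the analogous guard in the CPUFREQ_POSTCHANGE branch of cpufreq_notify_transition() might look roughly like this, with the surrounding code abridged:

	case CPUFREQ_POSTCHANGE:
		/* ... transition notifiers, debug output (elided) ... */
		if (trace_cpu_frequency_enabled()) {
			for_each_cpu(cpu, policy->cpus)
				trace_cpu_frequency(freqs->new, cpu);
		}
		policy->cur = freqs->new;
		break;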
Rafael J. Wysocki Aug. 28, 2019, 9:44 a.m. UTC | #8
On Monday, August 26, 2019 11:40:58 AM CEST Peter Zijlstra wrote:
> On Mon, Aug 26, 2019 at 11:10:52AM +0200, Rafael J. Wysocki wrote:
> > On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote:
> > > Fast switching path only emits an event for the CPU of interest, whereas the
> > > regular path emits an event for all the CPUs that had their frequency changed,
> > > i.e. all the CPUs sharing the same policy.
> > > 
> > > With the current behavior, looking at cpu_frequency event for a given CPU that
> > > is using the fast switching path will not give the correct frequency signal.
> > > 
> > > Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
> > > ---
> > >  kernel/sched/cpufreq_schedutil.c | 7 ++++++-
> > >  1 file changed, 6 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > index 1f82ab108bab..975ccc3de807 100644
> > > --- a/kernel/sched/cpufreq_schedutil.c
> > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > @@ -153,6 +153,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
> > >  			      unsigned int next_freq)
> > >  {
> > >  	struct cpufreq_policy *policy = sg_policy->policy;
> > > +	int cpu;
> > >  
> > >  	if (!sugov_update_next_freq(sg_policy, time, next_freq))
> > >  		return;
> > > @@ -162,7 +163,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
> > >  		return;
> > >  
> > >  	policy->cur = next_freq;
> > > -	trace_cpu_frequency(next_freq, smp_processor_id());
> > > +
> > > +	if (trace_cpu_frequency_enabled()) {
> > > +		for_each_cpu(cpu, policy->cpus)
> > > +			trace_cpu_frequency(next_freq, cpu);
> > > +	}
> > >  }
> > >  
> > >  static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
> > > 
> > 
> > Peter, any comments here?
> 
> I was thinking this would be a static map and dealing with it would be
> something trivially done in post (or manually while reading), but sure,
> whatever:
> 
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 

Thanks, queuing up this one for 5.4.

Patch

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 1f82ab108bab..975ccc3de807 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -153,6 +153,7 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
 			      unsigned int next_freq)
 {
 	struct cpufreq_policy *policy = sg_policy->policy;
+	int cpu;
 
 	if (!sugov_update_next_freq(sg_policy, time, next_freq))
 		return;
@@ -162,7 +163,11 @@ static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
 		return;
 
 	policy->cur = next_freq;
-	trace_cpu_frequency(next_freq, smp_processor_id());
+
+	if (trace_cpu_frequency_enabled()) {
+		for_each_cpu(cpu, policy->cpus)
+			trace_cpu_frequency(next_freq, cpu);
+	}
 }
 
 static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,