[v3,2/3] sched/fair: Take thermal pressure into account while estimating energy

Message ID 20210610150324.22919-3-lukasz.luba@arm.com (mailing list archive)
State Not Applicable, archived
Series Add allowed CPU capacity knowledge to EAS

Commit Message

Lukasz Luba June 10, 2021, 3:03 p.m. UTC
Energy Aware Scheduling (EAS) needs to be able to predict the frequency
requests made by the SchedUtil governor to properly estimate energy used
in the future. It has to take into account CPU utilization and forecast
the Performance Domain (PD) frequency. There is a corner case when the
max allowed frequency might be reduced due to thermal constraints.
SchedUtil is aware of that reduced frequency, so it should also be taken
into account in EAS estimations.

SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
to 'policy::max'. SchedUtil is responsible for respecting that upper limit
while setting the frequency through CPUFreq drivers. This effective
frequency is stored internally in 'sugov_policy::next_freq' and EAS has
to predict that value.
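
As a rough illustration, the effective frequency EAS has to predict can
be sketched as below (a simplified sketch, not the actual kernel code;
sugov_effective_freq() is a made-up helper name):

	static unsigned long sugov_effective_freq(unsigned long util,
						  unsigned long max_freq,
						  unsigned long cpu_cap,
						  unsigned long policy_max)
	{
		/* Map utilization to a frequency request (headroom details omitted) */
		unsigned long freq = map_util_freq(util, max_freq, cpu_cap);

		/*
		 * cpufreq_driver_resolve_freq() clamps the request to the
		 * (possibly thermally capped) policy::max.
		 */
		return min(freq, policy_max);
	}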

In the existing code, the raw value of arch_scale_cpu_capacity() is used
for clamping the returned CPU utilization from effective_cpu_util().
This patch fixes an issue with too high single CPU utilization, by
introducing clamping to the allowed CPU capacity. The allowed CPU
capacity is the CPU capacity reduced by the thermal pressure signal.
We rely on this load avg geometric series in a similar way as other
mechanisms in the scheduler.
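
For example (illustrative numbers only, not from any real platform):

	unsigned long cpu_cap = 1024;			/* arch_scale_cpu_capacity() */
	unsigned long th_pressure = 300;		/* arch_scale_thermal_pressure() */
	unsigned long allowed_cap = cpu_cap - th_pressure;	/* 724 */

	/* A raw utilization of 800 exceeds what the capped CPU can deliver */
	unsigned long cpu_util = min(800UL, allowed_cap);	/* clamped to 724 */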

Thanks to the knowledge of the allowed CPU capacity, we don't get too
large a value for a single CPU utilization, which is then added to the
util sum. The util sum is used as a source of information for estimating
the whole PD energy. To avoid wrong energy estimation in EAS (due to
capped frequency), make sure that the calculation of util sum is aware
of the allowed CPU capacity.
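
For reference, the way the Energy Model combines the two values can be
sketched roughly as follows (em_energy_sketch() is a made-up name for
illustration; it mirrors what em_cpu_energy() does, with corner cases
omitted):

	static unsigned long em_energy_sketch(struct em_perf_domain *pd,
					      unsigned long max_util,
					      unsigned long sum_util)
	{
		unsigned long scale_cpu, freq;
		struct em_perf_state *ps;
		int i;

		scale_cpu = arch_scale_cpu_capacity(cpumask_first(em_span_cpus(pd)));

		/* max_util selects the performance state, mirroring SchedUtil */
		ps = &pd->table[pd->nr_perf_states - 1];
		freq = map_util_freq(max_util, ps->frequency, scale_cpu);

		/* Find the lowest performance state that can serve 'freq' */
		for (i = 0; i < pd->nr_perf_states; i++) {
			ps = &pd->table[i];
			if (ps->frequency >= freq)
				break;
		}

		/* PD energy scales linearly with sum_util at that state */
		return ps->cost * sum_util / scale_cpu;
	}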

This thermal pressure might be visible in scenarios where the CPUs are
not heavily loaded, but some other component (like a GPU) has drastically
reduced the available power budget and increased the SoC temperature.
Thus, we still use EAS for task placement and the CPUs are not
over-utilized.

Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 kernel/sched/fair.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

Comments

Lukasz Luba June 14, 2021, 3:29 p.m. UTC | #1
Hi Vincent,

Gentle ping. Could you have a look at this implementation, please?


On 6/10/21 4:03 PM, Lukasz Luba wrote:

[snip]

> @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>   	struct cpumask *pd_mask = perf_domain_span(pd);
>   	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
>   	unsigned long max_util = 0, sum_util = 0;
> +	unsigned long _cpu_cap, thermal_pressure;
>   	int cpu;
>   
> +	thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
> +	_cpu_cap = cpu_cap - thermal_pressure;

I've done the implementation according to your suggestion. That should
provide consistent usage.

Regards,
Lukasz
Vincent Guittot June 14, 2021, 3:48 p.m. UTC | #2
On Mon, 14 Jun 2021 at 17:29, Lukasz Luba <lukasz.luba@arm.com> wrote:
>
> Hi Vincent,
>
> Gentle ping. Could you have a look at this implementation, please?

Ah yes, this has been lost in my inbox. Let me have a look at it.

>
>
> On 6/10/21 4:03 PM, Lukasz Luba wrote:
>
> [snip]
>
> > @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> >       struct cpumask *pd_mask = perf_domain_span(pd);
> >       unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
> >       unsigned long max_util = 0, sum_util = 0;
> > +     unsigned long _cpu_cap, thermal_pressure;
> >       int cpu;
> >
> > +     thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
> > +     _cpu_cap = cpu_cap - thermal_pressure;
>
> I've done the implementation according to your suggestion. That should
> provide consistent usage.
>
> Regards,
> Lukasz
Vincent Guittot June 14, 2021, 4:03 p.m. UTC | #3
On Thu, 10 Jun 2021 at 17:03, Lukasz Luba <lukasz.luba@arm.com> wrote:
>
> Energy Aware Scheduling (EAS) needs to be able to predict the frequency
> requests made by the SchedUtil governor to properly estimate energy used
> in the future. It has to take into account CPU utilization and forecast
> the Performance Domain (PD) frequency. There is a corner case when the
> max allowed frequency might be reduced due to thermal constraints.
> SchedUtil is aware of that reduced frequency, so it should also be taken
> into account in EAS estimations.
>
> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
> to 'policy::max'. SchedUtil is responsible for respecting that upper limit
> while setting the frequency through CPUFreq drivers. This effective
> frequency is stored internally in 'sugov_policy::next_freq' and EAS has
> to predict that value.
>
> In the existing code, the raw value of arch_scale_cpu_capacity() is used
> for clamping the returned CPU utilization from effective_cpu_util().
> This patch fixes an issue with too high single CPU utilization, by
> introducing clamping to the allowed CPU capacity. The allowed CPU
> capacity is the CPU capacity reduced by the thermal pressure signal.
> We rely on this load avg

you don't rely on the load avg value but on the raw thermal pressure value now

> geometric series in a similar way as other mechanisms in the scheduler.
>
> Thanks to the knowledge of the allowed CPU capacity, we don't get too
> large a value for a single CPU utilization, which is then added to the
> util sum. The util sum is used as a source of information for estimating
> the whole PD energy. To avoid wrong energy estimation in EAS (due to
> capped frequency), make sure that the calculation of util sum is aware
> of the allowed CPU capacity.
>
> This thermal pressure might be visible in scenarios where the CPUs are
> not heavily loaded, but some other component (like a GPU) has drastically
> reduced the available power budget and increased the SoC temperature.
> Thus, we still use EAS for task placement and the CPUs are not
> over-utilized.
>
> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
> ---
>  kernel/sched/fair.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 161b92aa1c79..237726217dad 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>         struct cpumask *pd_mask = perf_domain_span(pd);
>         unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
>         unsigned long max_util = 0, sum_util = 0;
> +       unsigned long _cpu_cap, thermal_pressure;
>         int cpu;
>
> +       thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));

Do you really need to use this intermediate variable thermal_pressure?
It seems to be used only below.

With these 2 comments above fixed,

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

> +       _cpu_cap = cpu_cap - thermal_pressure;
> +
>         /*
>          * The capacity state of CPUs of the current rd can be driven by CPUs
>          * of another rd if they belong to the same pd. So, account for the
> @@ -6564,8 +6568,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>                  * is already enough to scale the EM reported power
>                  * consumption at the (eventually clamped) cpu_capacity.
>                  */
> -               sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
> -                                              ENERGY_UTIL, NULL);
> +               cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
> +                                             ENERGY_UTIL, NULL);
> +
> +               sum_util += min(cpu_util, _cpu_cap);
>
>                 /*
>                  * Performance domain frequency: utilization clamping
> @@ -6576,7 +6582,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>                  */
>                 cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
>                                               FREQUENCY_UTIL, tsk);
> -               max_util = max(max_util, cpu_util);
> +               max_util = max(max_util, min(cpu_util, _cpu_cap));
>         }
>
>         return em_cpu_energy(pd->em_pd, max_util, sum_util);
> --
> 2.17.1
>
Lukasz Luba June 14, 2021, 6:22 p.m. UTC | #4
On 6/14/21 5:03 PM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 17:03, Lukasz Luba <lukasz.luba@arm.com> wrote:

[snip]

>> In the existing code, the raw value of arch_scale_cpu_capacity() is used
>> for clamping the returned CPU utilization from effective_cpu_util().
>> This patch fixes an issue with too high single CPU utilization, by
>> introducing clamping to the allowed CPU capacity. The allowed CPU
>> capacity is the CPU capacity reduced by the thermal pressure signal.
>> We rely on this load avg
> 
> you don't rely on the load avg value but on the raw thermal pressure value now

Good catch, I'll change that description.

> 
>> geometric series in a similar way as other mechanisms in the scheduler.
>>

[snip]

>>
>> +       thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
> 
> Do you really need to use this intermediate variable thermal_pressure?
> It seems to be used only below

True, it's used only here. I'll remove this variable in v4.

> 
> With these 2 comments above fixed,
> 
> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

Thank you for the review!

Regards,
Lukasz

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..237726217dad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,8 +6527,12 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 	struct cpumask *pd_mask = perf_domain_span(pd);
 	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
 	unsigned long max_util = 0, sum_util = 0;
+	unsigned long _cpu_cap, thermal_pressure;
 	int cpu;
 
+	thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
+	_cpu_cap = cpu_cap - thermal_pressure;
+
 	/*
 	 * The capacity state of CPUs of the current rd can be driven by CPUs
 	 * of another rd if they belong to the same pd. So, account for the
@@ -6564,8 +6568,10 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * is already enough to scale the EM reported power
 		 * consumption at the (eventually clamped) cpu_capacity.
 		 */
-		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
-					       ENERGY_UTIL, NULL);
+		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
+					      ENERGY_UTIL, NULL);
+
+		sum_util += min(cpu_util, _cpu_cap);
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -6576,7 +6582,7 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 */
 		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
 					      FREQUENCY_UTIL, tsk);
-		max_util = max(max_util, cpu_util);
+		max_util = max(max_util, min(cpu_util, _cpu_cap));
 	}
 
 	return em_cpu_energy(pd->em_pd, max_util, sum_util);