[RFC,v2,1/6] sched/fair: Create util_fits_capacity()

Message ID 20180406153607.17815-2-dietmar.eggemann@arm.com (mailing list archive)
State RFC, archived

Commit Message

Dietmar Eggemann April 6, 2018, 3:36 p.m. UTC
The functionality that a given utilization fits into a given capacity
is factored out into a separate function.

Currently it is only used in wake_cap() but will be re-used to figure
out if a cpu or a scheduler group is over-utilized.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/fair.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Comments

Viresh Kumar April 12, 2018, 7:02 a.m. UTC | #1
On 06-04-18, 16:36, Dietmar Eggemann wrote:
> The functionality that a given utilization fits into a given capacity
> is factored out into a separate function.
> 
> Currently it is only used in wake_cap() but will be re-used to figure
> out if a cpu or a scheduler group is over-utilized.
> 
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> ---
>  kernel/sched/fair.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0951d1c58d2f..0a76ad2ef022 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6574,6 +6574,11 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
>  	return min_t(unsigned long, util, capacity_orig_of(cpu));
>  }
>  
> +static inline int util_fits_capacity(unsigned long util, unsigned long capacity)
> +{
> +	return capacity * 1024 > util * capacity_margin;

This changes the behavior slightly compared to existing code. If that
wasn't intentional, perhaps you should use >= here.

> +}
> +
>  /*
>   * Disable WAKE_AFFINE in the case where task @p doesn't fit in the
>   * capacity of either the waking CPU @cpu or the previous CPU @prev_cpu.
> @@ -6595,7 +6600,7 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
>  	/* Bring task utilization in sync with prev_cpu */
>  	sync_entity_load_avg(&p->se);
>  
> -	return min_cap * 1024 < task_util(p) * capacity_margin;
> +	return !util_fits_capacity(task_util(p), min_cap);
>  }
>  
>  /*
> -- 
> 2.11.0
Dietmar Eggemann April 12, 2018, 8:20 a.m. UTC | #2
On 04/12/2018 09:02 AM, Viresh Kumar wrote:
> On 06-04-18, 16:36, Dietmar Eggemann wrote:
>> The functionality that a given utilization fits into a given capacity
>> is factored out into a separate function.
>>
>> Currently it is only used in wake_cap() but will be re-used to figure
>> out if a cpu or a scheduler group is over-utilized.
>>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
>> ---
>>   kernel/sched/fair.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 0951d1c58d2f..0a76ad2ef022 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6574,6 +6574,11 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
>>   	return min_t(unsigned long, util, capacity_orig_of(cpu));
>>   }
>>   
>> +static inline int util_fits_capacity(unsigned long util, unsigned long capacity)
>> +{
>> +	return capacity * 1024 > util * capacity_margin;
> 
> This changes the behavior slightly compared to existing code. If that
> wasn't intentional, perhaps you should use >= here.

You're right here ... Already on our v3 list. Thanks!

The 'misfit' patch-set comes with a similar function,
task_fits_capacity(), so we will have to align this patch-set with it
on this as well.

[...]
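For illustration, a minimal user-space sketch of the boundary case Viresh raises, assuming capacity_margin is 1280 (its value in kernel/sched/fair.c around this series; treat the constant as an assumption here). With the old wake_cap() condition the equality case counts as fitting, while the negated new helper treats it as not fitting:

#include <stdio.h>

static unsigned long capacity_margin = 1280;	/* assumed fair.c value */

/* Helper exactly as posted in this patch: strict '>' */
static inline int util_fits_capacity(unsigned long util, unsigned long capacity)
{
	return capacity * 1024 > util * capacity_margin;
}

int main(void)
{
	/* Boundary case: 640 * 1024 == 512 * 1280 == 655360 */
	unsigned long min_cap = 640, util = 512;

	/* Old wake_cap() condition: 1 means the task does not fit */
	int old_misfit = min_cap * 1024 < util * capacity_margin;

	/* New condition from this patch */
	int new_misfit = !util_fits_capacity(util, min_cap);

	printf("old=%d new=%d\n", old_misfit, new_misfit);	/* prints old=0 new=1 */
	return 0;
}

Using >= in util_fits_capacity(), as suggested, makes the equality case fit again and keeps the wake_cap() behaviour unchanged.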

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0951d1c58d2f..0a76ad2ef022 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6574,6 +6574,11 @@ static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 	return min_t(unsigned long, util, capacity_orig_of(cpu));
 }
 
+static inline int util_fits_capacity(unsigned long util, unsigned long capacity)
+{
+	return capacity * 1024 > util * capacity_margin;
+}
+
 /*
  * Disable WAKE_AFFINE in the case where task @p doesn't fit in the
  * capacity of either the waking CPU @cpu or the previous CPU @prev_cpu.
@@ -6595,7 +6600,7 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 	/* Bring task utilization in sync with prev_cpu */
 	sync_entity_load_avg(&p->se);
 
-	return min_cap * 1024 < task_util(p) * capacity_margin;
+	return !util_fits_capacity(task_util(p), min_cap);
 }
 
 /*
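
The commit message anticipates reusing the helper to decide whether a CPU or a scheduling group is over-utilized. A hypothetical sketch of such a reuse, built on the existing fair.c helpers cpu_util() and capacity_of() (the actual over-utilization patches later in this series may differ):

static inline int cpu_overutilized(int cpu)
{
	/* A CPU is over-utilized once its utilization no longer fits its capacity */
	return !util_fits_capacity(cpu_util(cpu), capacity_of(cpu));
}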