[RFC,5/7] sched/cpufreq: sugov_cpu_is_busy for shared policy

Message ID 20190508174301.4828-6-douglas.raillard@arm.com
State RFC, archived
Series
  • sched/cpufreq: Make schedutil energy aware

Commit Message

Douglas Raillard May 8, 2019, 5:42 p.m. UTC
From: Douglas RAILLARD <douglas.raillard@arm.com>

Allow using sugov_cpu_is_busy() from sugov_update_shared(). This
requires the heuristic to return stable results across multiple calls
for a given CPU, even when there has been no utilization change since
the last call.

sugov_cpu_is_busy() currently both checks busyness status and updates
the counters, so let's decouple the two actions.

Signed-off-by: Douglas RAILLARD <douglas.raillard@arm.com>
---
 kernel/sched/cpufreq_schedutil.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Patch

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a52c66559321..a12b7e5bc028 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -178,12 +178,17 @@  static bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu)
 {
 	unsigned long idle_calls = tick_nohz_get_idle_calls_cpu(sg_cpu->cpu);
 	bool ret = idle_calls == sg_cpu->saved_idle_calls;
+	return ret;
+}
 
+static void sugov_cpu_is_busy_update(struct sugov_cpu *sg_cpu)
+{
+	unsigned long idle_calls = tick_nohz_get_idle_calls_cpu(sg_cpu->cpu);
 	sg_cpu->saved_idle_calls = idle_calls;
-	return ret;
 }
 #else
 static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+static inline void sugov_cpu_is_busy_update(struct sugov_cpu *sg_cpu) {}
 #endif /* CONFIG_NO_HZ_COMMON */
 
 /**
@@ -503,6 +508,7 @@  static void sugov_update_single(struct update_util_data *hook, u64 time,
 		return;
 
 	busy = sugov_cpu_is_busy(sg_cpu);
+	sugov_cpu_is_busy_update(sg_cpu);
 
 	util = sugov_get_util(sg_cpu);
 	max = sg_cpu->max;