[RFC,10/10] sched: Include blocked load in weighted_cpuload

Message ID 1417529192-11579-11-git-send-email-morten.rasmussen@arm.com (mailing list archive)
State RFC, archived
Headers show

Commit Message

Morten Rasmussen Dec. 2, 2014, 2:06 p.m. UTC
Adds blocked_load_avg to weighted_cpuload() so that recently runnable
tasks are taken into account in load-balancing decisions. This changes
the nature of weighted_cpuload(): it may now be non-zero even when there
are no runnable tasks on the cpu rq. Hence care must be taken in the
load-balance code to use cfs_rq->runnable_load_avg or nr_running when
the current rq status is needed.
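
As an illustration (not part of the patch), a minimal sketch of how a
caller could check the instantaneous rq state after this change; the
helper name cpu_has_runnable_tasks() is hypothetical:

	/*
	 * weighted_cpuload() now includes blocked load, so a non-zero
	 * return no longer implies the rq currently has runnable tasks.
	 * A caller that needs the instantaneous state should check
	 * nr_running (or cfs.runnable_load_avg) directly.
	 */
	static inline bool cpu_has_runnable_tasks(int cpu)
	{
		return cpu_rq(cpu)->nr_running > 0;
	}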

This patch is highly experimental and will probably require additional
updates to the users of weighted_cpuload().

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd950b2..ad0ebb7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4349,7 +4349,8 @@  static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 /* Used instead of source_load when we know the type == 0 */
 static unsigned long weighted_cpuload(const int cpu)
 {
-	return cpu_rq(cpu)->cfs.runnable_load_avg;
+	return cpu_rq(cpu)->cfs.runnable_load_avg
+		+ cpu_rq(cpu)->cfs.blocked_load_avg;
 }
 
 /*