From patchwork Tue Jul 7 18:24:03 2015
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 6738471
From: Morten Rasmussen
To: peterz@infradead.org, mingo@redhat.com
Cc: vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
	Dietmar Eggemann, yuyang.du@intel.com, mturquette@baylibre.com,
	rjw@rjwysocki.net, Juri Lelli, sgurrappadi@nvidia.com,
	pang.xunlei@zte.com.cn, linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org
Subject: [RFCv5 PATCH 20/46] sched: Relocate get_cpu_usage() and change
	return type
Date: Tue, 7 Jul 2015 19:24:03 +0100
Message-Id:
 <1436293469-25707-21-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>
References: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>

Move get_cpu_usage() to an earlier position in fair.c and change the return
type to unsigned long, as negative usage doesn't make much sense. All other
load and capacity related functions use unsigned long, including the caller
of get_cpu_usage().

cc: Ingo Molnar
cc: Peter Zijlstra

Signed-off-by: Morten Rasmussen
---
 kernel/sched/fair.c | 78 ++++++++++++++++++++++++++---------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 70f81fc..78d3081 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4802,6 +4802,45 @@ static unsigned long capacity_curr_of(int cpu)
 			>> SCHED_CAPACITY_SHIFT;
 }
 
+/*
+ * get_cpu_usage returns the amount of capacity of a CPU that is used by CFS
+ * tasks. The unit of the return value must be the one of capacity so we can
+ * compare the usage with the capacity of the CPU that is available for CFS
+ * task (ie cpu_capacity).
+ *
+ * cfs.utilization_load_avg is the sum of running time of runnable tasks on a
+ * CPU. It represents the amount of utilization of a CPU in the range
+ * [0..capacity_orig] where capacity_orig is the cpu_capacity available at the
+ * highest frequency (arch_scale_freq_capacity()). The usage of a CPU converges
+ * towards a sum equal to or less than the current capacity (capacity_curr <=
+ * capacity_orig) of the CPU because it is the running time on this CPU scaled
+ * by capacity_curr. Nevertheless, cfs.utilization_load_avg can be higher than
+ * capacity_curr or even higher than capacity_orig because of unfortunate
+ * rounding in avg_period and running_load_avg or just after migrating tasks
+ * (and new task wakeups) until the average stabilizes with the new running
+ * time. We need to check that the usage stays into the range
+ * [0..capacity_orig] and cap if necessary. Without capping the usage, a group
+ * could be seen as overloaded (CPU0 usage at 121% + CPU1 usage at 80%) whereas
+ * CPU1 has 20% of available capacity. We allow usage to overshoot
+ * capacity_curr (but not capacity_orig) as it useful for predicting the
+ * capacity required after task migrations (scheduler-driven DVFS).
+ */
+
+static unsigned long get_cpu_usage(int cpu)
+{
+	int sum;
+	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
+	unsigned long blocked = cpu_rq(cpu)->cfs.utilization_blocked_avg;
+	unsigned long capacity_orig = capacity_orig_of(cpu);
+
+	sum = usage + blocked;
+
+	if (sum >= capacity_orig)
+		return capacity_orig;
+
+	return sum;
+}
+
 static inline bool energy_aware(void)
 {
 	return sched_feat(ENERGY_AWARE);
@@ -5055,45 +5094,6 @@ static int select_idle_sibling(struct task_struct *p, int target)
 }
 
 /*
- * get_cpu_usage returns the amount of capacity of a CPU that is used by CFS
- * tasks. The unit of the return value must be the one of capacity so we can
- * compare the usage with the capacity of the CPU that is available for CFS
- * task (ie cpu_capacity).
- *
- * cfs.utilization_load_avg is the sum of running time of runnable tasks on a
- * CPU. It represents the amount of utilization of a CPU in the range
- * [0..capacity_orig] where capacity_orig is the cpu_capacity available at the
- * highest frequency (arch_scale_freq_capacity()). The usage of a CPU converges
- * towards a sum equal to or less than the current capacity (capacity_curr <=
- * capacity_orig) of the CPU because it is the running time on this CPU scaled
- * by capacity_curr. Nevertheless, cfs.utilization_load_avg can be higher than
- * capacity_curr or even higher than capacity_orig because of unfortunate
- * rounding in avg_period and running_load_avg or just after migrating tasks
- * (and new task wakeups) until the average stabilizes with the new running
- * time. We need to check that the usage stays into the range
- * [0..capacity_orig] and cap if necessary. Without capping the usage, a group
- * could be seen as overloaded (CPU0 usage at 121% + CPU1 usage at 80%) whereas
- * CPU1 has 20% of available capacity. We allow usage to overshoot
- * capacity_curr (but not capacity_orig) as it useful for predicting the
- * capacity required after task migrations (scheduler-driven DVFS).
- */
-
-static int get_cpu_usage(int cpu)
-{
-	int sum;
-	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
-	unsigned long blocked = cpu_rq(cpu)->cfs.utilization_blocked_avg;
-	unsigned long capacity_orig = capacity_orig_of(cpu);
-
-	sum = usage + blocked;
-
-	if (sum >= capacity_orig)
-		return capacity_orig;
-
-	return sum;
-}
-
-/*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
  * SD_BALANCE_FORK, or SD_BALANCE_EXEC.