From patchwork Tue May 12 19:38:55 2015
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 6390931
From: Morten Rasmussen
To: peterz@infradead.org, mingo@redhat.com
Cc: vincent.guittot@linaro.org, Dietmar Eggemann, yuyang.du@intel.com,
    preeti@linux.vnet.ibm.com, mturquette@linaro.org, rjw@rjwysocki.net,
    Juri Lelli, sgurrappadi@nvidia.com, pang.xunlei@zte.com.cn,
    linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    morten.rasmussen@arm.com
Subject: [RFCv4 PATCH 20/34] sched: Relocate get_cpu_usage() and change return type
Date: Tue, 12 May 2015 20:38:55 +0100
Message-Id:
 <1431459549-18343-21-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1431459549-18343-1-git-send-email-morten.rasmussen@arm.com>
References: <1431459549-18343-1-git-send-email-morten.rasmussen@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

Move get_cpu_usage() to an earlier position in fair.c and change its return
type to unsigned long, as negative usage doesn't make much sense. All other
load and capacity related functions use unsigned long, including the caller
of get_cpu_usage().

cc: Ingo Molnar
cc: Peter Zijlstra

Signed-off-by: Morten Rasmussen
---
 kernel/sched/fair.c | 78 ++++++++++++++++++++++++++---------------------------
 1 file changed, 39 insertions(+), 39 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4a8404a..70f2700 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4787,6 +4787,45 @@ static unsigned long capacity_curr_of(int cpu)
 		>> SCHED_CAPACITY_SHIFT;
 }
 
+/*
+ * get_cpu_usage returns the amount of capacity of a CPU that is used by CFS
+ * tasks. The unit of the return value must be the one of capacity so we can
+ * compare the usage with the capacity of the CPU that is available for CFS
+ * tasks (ie cpu_capacity).
+ *
+ * cfs.utilization_load_avg is the sum of running time of runnable tasks on a
+ * CPU. It represents the amount of utilization of a CPU in the range
+ * [0..capacity_orig] where capacity_orig is the cpu_capacity available at the
+ * highest frequency (arch_scale_freq_capacity()).
+ * The usage of a CPU converges towards a sum equal to or less than the
+ * current capacity (capacity_curr <= capacity_orig) of the CPU because it is
+ * the running time on this CPU scaled by capacity_curr. Nevertheless,
+ * cfs.utilization_load_avg can be higher than capacity_curr or even higher
+ * than capacity_orig because of unfortunate rounding in avg_period and
+ * running_load_avg, or just after migrating tasks (and new task wakeups)
+ * until the average stabilizes with the new running time. We need to check
+ * that the usage stays within the range [0..capacity_orig] and cap it if
+ * necessary. Without capping the usage, a group could be seen as overloaded
+ * (CPU0 usage at 121% + CPU1 usage at 80%) whereas CPU1 has 20% of available
+ * capacity. We allow usage to overshoot capacity_curr (but not
+ * capacity_orig) as it is useful for predicting the capacity required after
+ * task migrations (scheduler-driven DVFS).
+ */
+
+static unsigned long get_cpu_usage(int cpu)
+{
+	unsigned long sum;
+	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
+	unsigned long blocked = cpu_rq(cpu)->cfs.utilization_blocked_avg;
+	unsigned long capacity_orig = capacity_orig_of(cpu);
+
+	sum = usage + blocked;
+
+	if (sum >= capacity_orig)
+		return capacity_orig;
+
+	return sum;
+}
+
 static inline bool energy_aware(void)
 {
 	return sched_feat(ENERGY_AWARE);
@@ -5040,45 +5079,6 @@ static int select_idle_sibling(struct task_struct *p, int target)
 }
 
 /*
- * get_cpu_usage returns the amount of capacity of a CPU that is used by CFS
- * tasks. The unit of the return value must be the one of capacity so we can
- * compare the usage with the capacity of the CPU that is available for CFS
- * task (ie cpu_capacity).
- *
- * cfs.utilization_load_avg is the sum of running time of runnable tasks on a
- * CPU.
- * It represents the amount of utilization of a CPU in the range
- * [0..capacity_orig] where capacity_orig is the cpu_capacity available at the
- * highest frequency (arch_scale_freq_capacity()). The usage of a CPU converges
- * towards a sum equal to or less than the current capacity (capacity_curr <=
- * capacity_orig) of the CPU because it is the running time on this CPU scaled
- * by capacity_curr. Nevertheless, cfs.utilization_load_avg can be higher than
- * capacity_curr or even higher than capacity_orig because of unfortunate
- * rounding in avg_period and running_load_avg or just after migrating tasks
- * (and new task wakeups) until the average stabilizes with the new running
- * time. We need to check that the usage stays into the range
- * [0..capacity_orig] and cap if necessary. Without capping the usage, a group
- * could be seen as overloaded (CPU0 usage at 121% + CPU1 usage at 80%) whereas
- * CPU1 has 20% of available capacity. We allow usage to overshoot
- * capacity_curr (but not capacity_orig) as it useful for predicting the
- * capacity required after task migrations (scheduler-driven DVFS).
- */
-
-static int get_cpu_usage(int cpu)
-{
-	int sum;
-	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
-	unsigned long blocked = cpu_rq(cpu)->cfs.utilization_blocked_avg;
-	unsigned long capacity_orig = capacity_orig_of(cpu);
-
-	sum = usage + blocked;
-
-	if (sum >= capacity_orig)
-		return capacity_orig;
-
-	return sum;
-}
-
-/*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
  * SD_BALANCE_FORK, or SD_BALANCE_EXEC.