From patchwork Thu Jul 3 16:26:02 2014
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
	preeti@linux.vnet.ibm.com, Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 15/23] sched, cpufreq: Introduce current cpu compute
	capacity into scheduler
Date: Thu, 3 Jul 2014 17:26:02 +0100
Message-Id: <1404404770-323-16-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

The scheduler is currently unaware of frequency changes and the current
compute capacity offered by the cpus. This patch is not the solution. It is
a hack to give us something to experiment with for now.

A proper solution could be based on the frequency-invariant load tracking
proposed in the past: https://lkml.org/lkml/2013/4/16/289

The best way to get the current compute capacity is likely to be
architecture-specific. A potential solution is therefore to let the
architecture implement get_curr_capacity() instead.

This patch should _not_ be considered safe.
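For illustration only, the scaling used by the hack below maps the current
frequency into the 0..1024 fixed-point capacity range, i.e.
capacity = (freq * 1024) / policy->max. A minimal standalone sketch of that
arithmetic (plain C, not kernel code; CAPACITY_SCALE and freq_to_capacity()
are names made up for the example):

/* Illustration of the (freqs->new * 1024) / policy->max scaling used in
 * __cpufreq_notify_transition() below. The constant and helper name are
 * invented for this sketch; they are not part of the patch.
 */
#include <stdio.h>

#define CAPACITY_SCALE	1024UL

/* Scale the current frequency (kHz) against the policy maximum (kHz). */
static unsigned long freq_to_capacity(unsigned long cur_khz, unsigned long max_khz)
{
	return (cur_khz * CAPACITY_SCALE) / max_khz;
}

int main(void)
{
	/* Example: policy->max = 1.8 GHz, current frequency = 900 MHz. */
	printf("capacity = %lu\n", freq_to_capacity(900000, 1800000)); /* 512 */
	return 0;
}

At policy->max the value is 1024, which matches the boot-time default set in
init_cfs_rq() below.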
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 drivers/cpufreq/cpufreq.c |  2 ++
 include/linux/sched.h     |  2 ++
 kernel/sched/fair.c       | 11 +++++++++++
 kernel/sched/sched.h      |  2 ++
 4 files changed, 17 insertions(+)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index abda660..a2b788d 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/sched.h>
 #include
 
 /**
@@ -315,6 +316,7 @@ static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
 		pr_debug("FREQ: %lu - CPU: %lu\n",
 			 (unsigned long)freqs->new, (unsigned long)freqs->cpu);
 		trace_cpu_frequency(freqs->new, freqs->cpu);
+		set_curr_capacity(freqs->cpu, (freqs->new*1024)/policy->max);
 		srcu_notifier_call_chain(&cpufreq_transition_notifier_list,
 				CPUFREQ_POSTCHANGE, freqs);
 		if (likely(policy) && likely(policy->cpu == freqs->cpu))
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e5d8d57..faebd87 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3025,4 +3025,6 @@ static inline unsigned long rlimit_max(unsigned int limit)
 	return task_rlimit_max(current, limit);
 }
 
+void set_curr_capacity(int cpu, long capacity);
+
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 37e9ea1..9720f04 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7564,9 +7564,20 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 	atomic64_set(&cfs_rq->decay_counter, 1);
 	atomic_long_set(&cfs_rq->removed_load, 0);
 	atomic_long_set(&cfs_rq->uw_removed_load, 0);
+	atomic_long_set(&cfs_rq->curr_capacity, 1024);
 #endif
 }
 
+void set_curr_capacity(int cpu, long capacity)
+{
+	atomic_long_set(&cpu_rq(cpu)->cfs.curr_capacity, capacity);
+}
+
+static inline unsigned long get_curr_capacity(int cpu)
+{
+	return atomic_long_read(&cpu_rq(cpu)->cfs.curr_capacity);
+}
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_move_group_fair(struct task_struct *p, int on_rq)
 {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 455d152..a6d5239 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -342,6 +342,8 @@ struct cfs_rq {
 	u64 last_decay;
 	atomic_long_t removed_load, uw_removed_load;
 
+	atomic_long_t curr_capacity;
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* Required to track per-cpu representation of a task_group */
 	u32 tg_runnable_contrib;
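To make the end-to-end flow above easier to follow, here is a standalone
sketch of the same data path: the cpufreq POSTCHANGE notifier stores a scaled
per-cpu capacity, and the scheduler side reads it back. This is plain C with
C11 atomics standing in for atomic_long_t; the NR_CPUS array is an assumption
for the sketch, whereas the patch keeps the value in each cpu's cfs_rq.

/* Standalone model of the set_curr_capacity()/get_curr_capacity() pair added
 * by the patch. C11 atomics stand in for atomic_long_t; the per-cpu array is
 * a simplification of the per-cpu cfs_rq field used in the real code.
 */
#include <stdatomic.h>
#include <stdio.h>

#define CAPACITY_SCALE	1024L
#define NR_CPUS		4

static atomic_long curr_capacity[NR_CPUS];

/* Writer side: called from the frequency-change notifier. */
static void set_curr_capacity(int cpu, long capacity)
{
	atomic_store(&curr_capacity[cpu], capacity);
}

/* Reader side: queried by the scheduler. */
static long get_curr_capacity(int cpu)
{
	return atomic_load(&curr_capacity[cpu]);
}

int main(void)
{
	int cpu;

	/* Boot-time default in init_cfs_rq(): full capacity. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		set_curr_capacity(cpu, CAPACITY_SCALE);

	/* cpufreq reports cpu1 switching to 1.2 GHz with a 2.4 GHz maximum. */
	set_curr_capacity(1, (1200000L * CAPACITY_SCALE) / 2400000L);

	printf("cpu1 capacity = %ld\n", get_curr_capacity(1)); /* prints 512 */
	return 0;
}

In the patch itself the value is only written from the frequency-change
notifier and read with atomic_long_read(), so a plain store/load pair with no
additional ordering is all that is needed.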