From patchwork Tue Jul 7 18:24:26 2015
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 6738041
From: Morten Rasmussen
To: peterz@infradead.org, mingo@redhat.com
Cc: vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
 Dietmar Eggemann, yuyang.du@intel.com, mturquette@baylibre.com,
 rjw@rjwysocki.net, Juri Lelli, sgurrappadi@nvidia.com,
 pang.xunlei@zte.com.cn, linux-kernel@vger.kernel.org,
 linux-pm@vger.kernel.org, Juri Lelli
Subject: [RFCv5 PATCH 43/46] sched/{fair, cpufreq_sched}: add reset_capacity interface
Date: Tue, 7 Jul 2015 19:24:26 +0100
Message-Id:
<1436293469-25707-44-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>
References: <1436293469-25707-1-git-send-email-morten.rasmussen@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

From: Juri Lelli

When a CPU is going idle it is pointless to ask for an OPP update, as we
would wake up another task only to request the same capacity we are
already running at (the utilization gets moved to blocked_utilization).
We therefore add a cpufreq_sched_reset_cap() interface to simply reset
our current capacity request without triggering any real update. At
wakeup we will use the decayed utilization to select an appropriate OPP.

cc: Ingo Molnar
cc: Peter Zijlstra

Signed-off-by: Juri Lelli
---
 kernel/sched/cpufreq_sched.c | 12 ++++++++++++
 kernel/sched/fair.c          | 10 +++++++---
 kernel/sched/sched.h         |  3 +++
 3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/cpufreq_sched.c b/kernel/sched/cpufreq_sched.c
index 7071528..06ff183 100644
--- a/kernel/sched/cpufreq_sched.c
+++ b/kernel/sched/cpufreq_sched.c
@@ -203,6 +203,18 @@ void cpufreq_sched_set_cap(int cpu, unsigned long capacity)
 	return;
 }
 
+/**
+ * cpufreq_sched_reset_cap - interface to scheduler for resetting capacity
+ * requests
+ * @cpu: cpu whose capacity request has to be reset
+ *
+ * This _won't trigger_ any capacity update.
+ */
+void cpufreq_sched_reset_cap(int cpu)
+{
+	per_cpu(pcpu_capacity, cpu) = 0;
+}
+
 static inline void set_sched_energy_freq(void)
 {
 	if (!sched_energy_freq())
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bb49499..323331f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4427,9 +4427,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (sched_energy_freq() && task_sleep) {
 		unsigned long req_cap = get_cpu_usage(cpu_of(rq));
 
-		req_cap = req_cap * capacity_margin
-				>> SCHED_CAPACITY_SHIFT;
-		cpufreq_sched_set_cap(cpu_of(rq), req_cap);
+		if (rq->cfs.nr_running) {
+			req_cap = req_cap * capacity_margin
+					>> SCHED_CAPACITY_SHIFT;
+			cpufreq_sched_set_cap(cpu_of(rq), req_cap);
+		} else {
+			cpufreq_sched_reset_cap(cpu_of(rq));
+		}
 	}
 	hrtick_update(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 85be5d8..f1ff5bb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1487,9 +1487,12 @@ static inline bool sched_energy_freq(void)
 
 #ifdef CONFIG_CPU_FREQ_GOV_SCHED
 void cpufreq_sched_set_cap(int cpu, unsigned long util);
+void cpufreq_sched_reset_cap(int cpu);
 #else
 static inline void cpufreq_sched_set_cap(int cpu, unsigned long util)
 { }
+static inline void cpufreq_sched_reset_cap(int cpu)
+{ }
 #endif
 
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)