From patchwork Fri Jun 26 23:53:44 2015
X-Patchwork-Submitter: Michael Turquette
X-Patchwork-Id: 6683691
X-Patchwork-Delegate: rjw@sisk.pl
From: Michael Turquette
To: peterz@infradead.org, mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, preeti@linux.vnet.ibm.com,
	Morten.Rasmussen@arm.com, riel@redhat.com, efault@gmx.de,
	nicolas.pitre@linaro.org, daniel.lezcano@linaro.org,
	dietmar.eggemann@arm.com, vincent.guittot@linaro.org,
	amit.kucheria@linaro.org, juri.lelli@arm.com, rjw@rjwysocki.net,
	viresh.kumar@linaro.org, ashwin.chaugule@linaro.org,
	alex.shi@linaro.org, linux-pm@vger.kernel.org, abelvesa@gmail.com,
	pebolle@tiscali.nl, Michael Turquette
Subject: [PATCH v3 4/4] [RFC] sched: cfs: cpu frequency scaling policy
Date: Fri, 26 Jun 2015 16:53:44 -0700
Message-Id: <1435362824-26734-5-git-send-email-mturquette@linaro.org>
In-Reply-To: <1435362824-26734-1-git-send-email-mturquette@linaro.org>
References: <1435362824-26734-1-git-send-email-mturquette@linaro.org>
List-ID: X-Mailing-List: linux-pm@vger.kernel.org

Implements a very simple policy that scales cpu frequency as a function
of cfs utilization. This policy is a placeholder until something better
comes along. Its purpose is to illustrate how to use the
cpufreq_sched_set_cap api and to let interested parties hack on this
stuff.
Signed-off-by: Michael Turquette
---
Changes in v3:
	Split out into separate patch
	Capacity calculation moved from cpufreq governor to cfs
	Removed use of static key. Replaced with Kconfig option

 kernel/sched/fair.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 46855d0..5ccc384 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4217,6 +4217,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
+	unsigned long utilization, capacity;
 
 	for_each_sched_entity(se) {
 		if (se->on_rq)
@@ -4252,6 +4253,19 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		update_rq_runnable_avg(rq, rq->nr_running);
 		add_nr_running(rq, 1);
 	}
+
+#ifdef CONFIG_CPU_FREQ_GOV_SCHED
+	/* add 25% margin to current utilization */
+	utilization = rq->cfs.utilization_load_avg;
+	capacity = utilization + (utilization >> 2);
+
+	/* handle rounding errors */
+	capacity = (capacity > SCHED_LOAD_SCALE) ? SCHED_LOAD_SCALE :
+		capacity;
+
+	cpufreq_sched_set_cap(cpu_of(rq), capacity);
+#endif
+
 	hrtick_update(rq);
 }
 
@@ -4267,6 +4281,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int task_sleep = flags & DEQUEUE_SLEEP;
+	unsigned long utilization, capacity;
 
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
@@ -4313,6 +4328,19 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		sub_nr_running(rq, 1);
 		update_rq_runnable_avg(rq, 1);
 	}
+
+#ifdef CONFIG_CPU_FREQ_GOV_SCHED
+	/* add 25% margin to current utilization */
+	utilization = rq->cfs.utilization_load_avg;
+	capacity = utilization + (utilization >> 2);
+
+	/* handle rounding errors */
+	capacity = (capacity > SCHED_LOAD_SCALE) ? SCHED_LOAD_SCALE :
+		capacity;
+
+	cpufreq_sched_set_cap(cpu_of(rq), capacity);
+#endif
+
 	hrtick_update(rq);
 }
 
@@ -7806,6 +7834,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 {
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &curr->se;
+	unsigned long utilization, capacity;
 
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
@@ -7816,6 +7845,18 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		task_tick_numa(rq, curr);
 
 	update_rq_runnable_avg(rq, 1);
+
+#ifdef CONFIG_CPU_FREQ_GOV_SCHED
+	/* add 25% margin to current utilization */
+	utilization = rq->cfs.utilization_load_avg;
+	capacity = utilization + (utilization >> 2);
+
+	/* handle rounding errors */
+	capacity = (capacity > SCHED_LOAD_SCALE) ? SCHED_LOAD_SCALE :
+		capacity;
+
+	cpufreq_sched_set_cap(cpu_of(rq), capacity);
+#endif
 }
 
 /*