From patchwork Mon Mar 14 05:22:06 2016
X-Patchwork-Submitter: Michael Turquette
X-Patchwork-Id: 8576531
From: Michael Turquette
To: peterz@infradead.org, rjw@rjwysocki.net
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Juri.Lelli@arm.com,
    steve.muckle@linaro.org, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    vincent.guittot@linaro.org, Michael Turquette
Subject: [PATCH 2/8] sched/fair: add margin to utilization update
Date: Sun, 13 Mar 2016 22:22:06 -0700
Message-Id: <1457932932-28444-3-git-send-email-mturquette+renesas@baylibre.com>
In-Reply-To: <1457932932-28444-1-git-send-email-mturquette+renesas@baylibre.com>
References: <1457932932-28444-1-git-send-email-mturquette+renesas@baylibre.com>

Utilization contributions to cfs_rq->avg.util_avg are scaled for both
microarchitecture invariance and frequency invariance. This means that
any given utilization contribution is scaled against the current cpu
capacity (cpu frequency). Contributions from long-running tasks, whose
utilization grows over time, will asymptotically approach the current
capacity.

This causes a problem when using this utilization signal to select a
target cpu capacity (cpu frequency): the signal can never exceed the
current capacity, which would otherwise be the cue to increase
frequency.

Solve this by introducing a default capacity margin that is applied to
the utilization signal when requesting a change to capacity (cpu
frequency). The margin is 1280, or 1.25 x SCHED_CAPACITY_SCALE (1024).
This is equivalent to similar margins such as the default value of 125
assigned to struct sched_domain.imbalance_pct for load balancing, and
to the 80% up_threshold used by the legacy cpufreq ondemand governor.

Signed-off-by: Michael Turquette
---
 kernel/sched/fair.c  | 18 ++++++++++++++++--
 kernel/sched/sched.h |  3 +++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a32f281..29e8bae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -100,6 +100,19 @@ const_debug unsigned int sysctl_sched_migration_cost = 500000UL;
  */
 unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
 
+/*
+ * Add a 25% margin globally to all capacity requests from cfs. This is
+ * equivalent to an 80% up_threshold in legacy governors like ondemand.
+ *
+ * This is required as task utilization increases. The frequency-invariant
+ * utilization will asymptotically approach the current capacity of the cpu and
+ * the additional margin will cross the threshold into the next capacity state.
+ *
+ * XXX someday expand to separate, per-call site margins? e.g. enqueue, fork,
+ * task_tick, load_balance, etc
+ */
+unsigned long cfs_capacity_margin = CAPACITY_MARGIN_DEFAULT;
+
 #ifdef CONFIG_CFS_BANDWIDTH
 /*
  * Amount of runtime to allocate from global (tg) to local (per-cfs_rq) pool
@@ -2840,6 +2853,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 
 	if (cpu == smp_processor_id() && &rq->cfs == cfs_rq) {
 		unsigned long max = rq->cpu_capacity_orig;
+		unsigned long cap = cfs_rq->avg.util_avg *
+			cfs_capacity_margin / max;
 
 		/*
 		 * There are a few boundary cases this might miss but it should
@@ -2852,8 +2867,7 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 		 * thread is a different class (!fair), nor will the utilization
 		 * number include things like RT tasks.
 		 */
-		cpufreq_update_util(rq_clock(rq),
-				    min(cfs_rq->avg.util_avg, max), max);
+		cpufreq_update_util(rq_clock(rq), min(cap, max), max);
 	}
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f06dfca..8c93ed2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -27,6 +27,9 @@ extern __read_mostly int scheduler_running;
 extern unsigned long calc_load_update;
 extern atomic_long_t calc_load_tasks;
 
+#define CAPACITY_MARGIN_DEFAULT 1280
+extern unsigned long cfs_capacity_margin;
+
 extern void calc_global_load_tick(struct rq *this_rq);
 extern long calc_load_fold_active(struct rq *this_rq);
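
For reference, the margin arithmetic can be checked with a small user-space
sketch. This program is illustrative only and is not part of the patch;
capacity_request() is a made-up helper that mirrors the cap/min() computation
in update_load_avg(), assuming a cpu whose cpu_capacity_orig equals
SCHED_CAPACITY_SCALE:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define CAPACITY_MARGIN_DEFAULT	1280	/* 1.25 * SCHED_CAPACITY_SCALE */

/*
 * Standalone illustration: scale util_avg by the 25% margin (dividing by
 * the cpu's original capacity, as the patch does) and clamp the result to
 * that capacity before it would be handed to cpufreq_update_util().
 */
static unsigned long capacity_request(unsigned long util_avg, unsigned long max)
{
	unsigned long cap = util_avg * CAPACITY_MARGIN_DEFAULT / max;

	return cap < max ? cap : max;	/* min(cap, max) */
}

int main(void)
{
	unsigned long max = SCHED_CAPACITY_SCALE;	/* stand-in for cpu_capacity_orig */

	/* util_avg of 800 out of 1024 requests 800 * 1280 / 1024 = 1000 */
	printf("%lu\n", capacity_request(800, max));

	/* near-saturated util_avg of 900 yields 1125, clamped to 1024 */
	printf("%lu\n", capacity_request(900, max));

	return 0;
}

Compiled and run, this prints 1000 and 1024: once utilization crosses roughly
80% of capacity, the margined request reaches the cpu's full capacity, which
is the cue for cpufreq to select a higher frequency.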