From patchwork Sun Jul 23 15:54:25 2017
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 9858469
X-Patchwork-Delegate: rjw@sisk.pl
From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: Juri Lelli, Patrick Bellasi, Andres Oportus, Dietmar Eggemann,
    linux-pm@vger.kernel.org, Joel Fernandes, Srinivas Pandruvada, Len Brown,
    "Rafael J. Wysocki", Viresh Kumar, Ingo Molnar, Peter Zijlstra
Subject: [PATCH v7 1/2] cpufreq: schedutil: Make iowait boost more energy efficient
Date: Sun, 23 Jul 2017 08:54:25 -0700
Message-Id: <20170723155426.9170-1-joelaf@google.com>
X-Mailer: git-send-email 2.14.0.rc0.284.gd933b75aa4-goog
X-Mailing-List: linux-pm@vger.kernel.org

Currently the iowait_boost feature in schedutil makes the frequency go to max
on iowait wakeups.
This feature was added to handle a case Peter described [1], where the
throughput of operations involving continuous I/O requests is reduced by
running at a lower frequency; the lower throughput itself keeps utilization
low, which in turn keeps the frequency low, so the system is "stuck" there.

Instead of going straight to max, it is also possible to achieve the same
effect by ramping up to max when repeated in_iowait wakeups happen. This
patch is an attempt to do that: we start from a lower frequency (policy->min)
and double the boost on every consecutive iowait update until we reach the
maximum iowait boost frequency (iowait_boost_max).

I ran a synthetic test (continuous O_DIRECT writes in a loop) on an x86
machine with intel_pstate in passive mode using schedutil. In this test the
iowait_boost value ramped from 800MHz to 4GHz in 60ms, and the patch achieves
the same improved throughput as the existing behavior.

[1] https://patchwork.kernel.org/patch/9735885/

Cc: Srinivas Pandruvada
Cc: Len Brown
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Ingo Molnar
Cc: Peter Zijlstra
Suggested-by: Peter Zijlstra
Suggested-by: Viresh Kumar
Signed-off-by: Joel Fernandes
Acked-by: Viresh Kumar
---
 kernel/sched/cpufreq_schedutil.c | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 622eed1b7658..570ab6e779e6 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -53,6 +53,7 @@ struct sugov_cpu {
 	struct update_util_data update_util;
 	struct sugov_policy *sg_policy;
 
+	bool iowait_boost_pending;
 	unsigned long iowait_boost;
 	unsigned long iowait_boost_max;
 	u64 last_update;
@@ -172,30 +173,54 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
 				   unsigned int flags)
 {
 	if (flags & SCHED_CPUFREQ_IOWAIT) {
-		sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
+		if (sg_cpu->iowait_boost_pending)
+			return;
+
+		sg_cpu->iowait_boost_pending = true;
+
+		if (sg_cpu->iowait_boost) {
+			sg_cpu->iowait_boost <<= 1;
+			if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
+				sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
+		} else {
+			sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
+		}
 	} else if (sg_cpu->iowait_boost) {
 		s64 delta_ns = time - sg_cpu->last_update;
 
 		/* Clear iowait_boost if the CPU apprears to have been idle. */
-		if (delta_ns > TICK_NSEC)
+		if (delta_ns > TICK_NSEC) {
 			sg_cpu->iowait_boost = 0;
+			sg_cpu->iowait_boost_pending = false;
+		}
 	}
 }
 
 static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
 			       unsigned long *max)
 {
-	unsigned long boost_util = sg_cpu->iowait_boost;
-	unsigned long boost_max = sg_cpu->iowait_boost_max;
+	unsigned long boost_util, boost_max;
 
-	if (!boost_util)
+	if (!sg_cpu->iowait_boost)
 		return;
 
+	if (sg_cpu->iowait_boost_pending) {
+		sg_cpu->iowait_boost_pending = false;
+	} else {
+		sg_cpu->iowait_boost >>= 1;
+		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
+			sg_cpu->iowait_boost = 0;
+			return;
+		}
+	}
+
+	boost_util = sg_cpu->iowait_boost;
+	boost_max = sg_cpu->iowait_boost_max;
+
 	if (*util * boost_max < *max * boost_util) {
 		*util = boost_util;
 		*max = boost_max;
 	}
-	sg_cpu->iowait_boost >>= 1;
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -267,6 +292,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		delta_ns = time - j_sg_cpu->last_update;
 		if (delta_ns > TICK_NSEC) {
 			j_sg_cpu->iowait_boost = 0;
+			j_sg_cpu->iowait_boost_pending = false;
 			continue;
 		}
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)