From patchwork Fri Jun 9 10:15:54 2017
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 9777895
From: Viresh Kumar
To: Rafael Wysocki, Ingo Molnar, Peter Zijlstra
Cc: Viresh Kumar, linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Vincent Guittot, Juri Lelli,
    patrick.bellasi@arm.com, john.ettedgui@gmail.com, Srinivas Pandruvada,
    Joel Fernandes, Morten Rasmussen
Subject: [PATCH 1/3] cpufreq: schedutil: Restore cached_raw_freq behavior
Date: Fri, 9 Jun 2017 15:45:54 +0530
X-Mailer: git-send-email 2.13.0.70.g6367777092d9
X-Mailing-List: linux-pm@vger.kernel.org

The purpose of the "cached_raw_freq" field is to avoid a call to
cpufreq_driver_resolve_freq() when the next selected frequency is the
same as the one selected last time, in which case sg_policy->next_freq
can be used directly.
With the recent changes (reduce frequencies slower), we update the next
frequency at the very last moment, from sugov_update_commit(), and that
breaks the working of cached_raw_freq. Here is an example to illustrate
what happens now.

Min freq: 1 GHz
Max and current freq: 4 GHz

- get_next_freq() gets called and calculates the next frequency to be
  1 GHz, so it updates cached_raw_freq to 1 GHz as well. It then calls
  cpufreq_driver_resolve_freq(), which also returns 1 GHz.

- We then call sugov_update_commit(), which updates
  sg_policy->next_freq to 2.5 GHz.

- get_next_freq() gets called again and this time calculates the next
  frequency as 2.5 GHz. Even though the previous next_freq was already
  set to 2.5 GHz, we end up calling cpufreq_driver_resolve_freq() again
  because cached_raw_freq was left at 1 GHz.

Moreover, it is not right to update the target frequency after
cpufreq_driver_resolve_freq() has been called, since that call exists
to map the target frequency to one the driver supports, so that the
update can be skipped entirely if we are already at a driver-supported
frequency.

Fix this by moving the newly added code into get_next_freq(), before
cached_raw_freq is accessed or updated, and add a short comment above
that code to explain why it is done there.

We cannot take a simple average anymore, as "freq" here can be well
below policy->min and we could end up reducing the frequency
drastically. Take care of that by making sure "freq" is at least as
much as policy->min before averaging.
Fixes: 39b64aa1c007 ("cpufreq: schedutil: Reduce frequencies slower")
Signed-off-by: Viresh Kumar
---
 kernel/sched/cpufreq_schedutil.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 622eed1b7658..1852bd73d903 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -101,9 +101,6 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 	if (sg_policy->next_freq == next_freq)
 		return;
 
-	if (sg_policy->next_freq > next_freq)
-		next_freq = (sg_policy->next_freq + next_freq) >> 1;
-
 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;
 
@@ -151,6 +148,17 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 
 	freq = (freq + (freq >> 2)) * util / max;
 
+	/*
+	 * Reduce frequency gradually to avoid undesirable performance drops.
+	 * Before that we need to make sure that freq >= policy->min, else we
+	 * may still end up reducing frequency rapidly.
+	 */
+	if (freq < policy->min)
+		freq = policy->min;
+
+	if (sg_policy->next_freq > freq)
+		freq = (sg_policy->next_freq + freq) >> 1;
+
 	if (freq == sg_policy->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
 		return sg_policy->next_freq;
 
 	sg_policy->cached_raw_freq = freq;