From patchwork Tue May 22 12:22:20 2018
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 10418551
From: "Rafael J. Wysocki"
To: Viresh Kumar, "Joel Fernandes (Google)"
Cc: "Rafael J. Wysocki", Linux Kernel Mailing List,
 "Joel Fernandes (Google)", Peter Zijlstra, Ingo Molnar, Patrick Bellasi,
 Juri Lelli, Luca Abeni, Todd Kjos, Claudio Scordino,
 kernel-team@android.com, Linux PM
Subject: Re: [PATCH v2] schedutil: Allow cpufreq requests to be made even
 when kthread kicked
Date: Tue, 22 May 2018 14:22:20 +0200
Message-ID: <4237890.zlzv5C60QP@aspire.rjw.lan>
References: <20180518185501.173552-1-joel@joelfernandes.org>
 <20180522113844.5rz3skjeck57arft@vireshk-i7>
X-Mailing-List: linux-pm@vger.kernel.org

On Tuesday, May 22, 2018 1:42:05 PM CEST Rafael J. Wysocki wrote:
> On Tue, May 22, 2018 at 1:38 PM, Viresh Kumar wrote:
> > On 22-05-18, 13:31, Rafael J. Wysocki wrote:
> >> So below is my (compiled-only) version of the $subject patch, obviously
> >> based on Joel's work.
> >>
> >> Roughly, what it does is to move the fast_switch_enabled path entirely to
> >> sugov_update_single() and take the spinlock around sugov_update_commit()
> >> in the one-CPU case too.

[cut]

> >
> > Why do you assume that fast switch isn't possible in shared policy
> > cases? It in fact is already enabled for a few drivers.

I hope that fast_switch is not used with dvfs_possible_from_any_cpu set in
the one-CPU policy case, as that looks racy even without any patching.

> OK, so the fast_switch thing needs to be left outside of the spinlock
> in the single case only. Fair enough.

That would be something like the patch below (again, compiled-only).
---
 kernel/sched/cpufreq_schedutil.c | 67 +++++++++++++++++++++++++++------------
 1 file changed, 47 insertions(+), 20 deletions(-)

Index: linux-pm/kernel/sched/cpufreq_schedutil.c
===================================================================
--- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
+++ linux-pm/kernel/sched/cpufreq_schedutil.c
@@ -92,9 +92,6 @@ static bool sugov_should_update_freq(str
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update))
 		return true;
 
@@ -103,25 +100,41 @@ static bool sugov_should_update_freq(str
 	return delta_ns >= sg_policy->freq_update_delay_ns;
 }
 
-static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
-				unsigned int next_freq)
+static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+				   unsigned int next_freq)
 {
-	struct cpufreq_policy *policy = sg_policy->policy;
-
 	if (sg_policy->next_freq == next_freq)
-		return;
+		return false;
 
 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;
 
-	if (policy->fast_switch_enabled) {
-		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
-		if (!next_freq)
-			return;
+	return true;
+}
 
-		policy->cur = next_freq;
-		trace_cpu_frequency(next_freq, smp_processor_id());
-	} else {
+static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
+			      unsigned int next_freq)
+{
+	struct cpufreq_policy *policy = sg_policy->policy;
+
+	if (!sugov_update_next_freq(sg_policy, time, next_freq))
+		return;
+
+	next_freq = cpufreq_driver_fast_switch(policy, next_freq);
+	if (!next_freq)
+		return;
+
+	policy->cur = next_freq;
+	trace_cpu_frequency(next_freq, smp_processor_id());
+}
+
+static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
+				unsigned int next_freq)
+{
+	if (!sugov_update_next_freq(sg_policy, time, next_freq))
+		return;
+
+	if (!sg_policy->work_in_progress) {
 		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
@@ -307,7 +320,13 @@ static void sugov_update_single(struct u
 		sg_policy->cached_raw_freq = 0;
 	}
 
-	sugov_update_commit(sg_policy, time, next_f);
+	if (sg_policy->policy->fast_switch_enabled) {
+		sugov_fast_switch(sg_policy, time, next_f);
+	} else {
+		raw_spin_lock(&sg_policy->update_lock);
+		sugov_update_commit(sg_policy, time, next_f);
+		raw_spin_unlock(&sg_policy->update_lock);
+	}
 }
 
 static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
@@ -367,7 +386,10 @@ sugov_update_shared(struct update_util_d
 
 	if (sugov_should_update_freq(sg_policy, time)) {
 		next_f = sugov_next_freq_shared(sg_cpu, time);
-		sugov_update_commit(sg_policy, time, next_f);
+		if (sg_policy->policy->fast_switch_enabled)
+			sugov_fast_switch(sg_policy, time, next_f);
+		else
+			sugov_update_commit(sg_policy, time, next_f);
 	}
 
 	raw_spin_unlock(&sg_policy->update_lock);
@@ -376,13 +398,18 @@ sugov_update_shared(struct update_util_d
 static void sugov_work(struct kthread_work *work)
 {
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
+	unsigned int next_freq;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
+	next_freq = sg_policy->next_freq;
+	sg_policy->work_in_progress = false;
+	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
+	__cpufreq_driver_target(sg_policy->policy, next_freq,
 				CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)
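
For reference, with the hunks above applied, sugov_work() ends up reading as
below; the comments are added here only to spell out the intended ordering,
they are not part of the patch itself:

static void sugov_work(struct kthread_work *work)
{
	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
	unsigned int next_freq;
	unsigned long flags;

	/*
	 * Snapshot next_freq and clear work_in_progress together under
	 * update_lock, which sugov_update_commit() callers hold too.  Any
	 * commit then either lands before the snapshot (and is applied by
	 * the driver call below) or sees work_in_progress == false and
	 * queues the irq_work again, so it is not silently dropped while
	 * this kthread is busy.
	 */
	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
	next_freq = sg_policy->next_freq;
	sg_policy->work_in_progress = false;
	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);

	/* work_lock still serializes the actual driver transitions. */
	mutex_lock(&sg_policy->work_lock);
	__cpufreq_driver_target(sg_policy->policy, next_freq,
				CPUFREQ_RELATION_L);
	mutex_unlock(&sg_policy->work_lock);
}

Doing the read and the clearing atomically is the point: without the lock, an
update could change next_freq after the snapshot but before work_in_progress
is cleared, and would be neither applied here nor re-queued.  The fast-switch
path stays outside the lock in the single-CPU case, as discussed above.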