From patchwork Mon May 7 14:43:35 2018
X-Patchwork-Submitter: Claudio Scordino
X-Patchwork-Id: 10384259
From: Claudio Scordino
To: linux-kernel@vger.kernel.org
Cc: Claudio Scordino, Viresh Kumar, "Rafael J.
Wysocki", Peter Zijlstra, Ingo Molnar, Patrick Bellasi, Juri Lelli, Luca Abeni, Joel Fernandes, linux-pm@vger.kernel.org
Subject: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests
Date: Mon, 7 May 2018 16:43:35 +0200
Message-Id: <1525704215-8683-1-git-send-email-claudio@evidence.eu.com>

At OSPM, the issue of urgent CPU frequency requests arriving while a frequency switch is already in progress was discussed. Besides the various costs involved (the physical time needed for switching frequency, the ongoing kthread activity, etc.), one (minor) issue is that the kernel "forgets" such a request, waiting until the next switch time to recompute the needed frequency and behave accordingly.

This patch makes the kthread serve any urgent request that occurred during the previous frequency switch. It introduces a specific flag, set only when the SCHED_DEADLINE scheduling class increases the CPU utilization, with the aim of decreasing the likelihood of a deadline miss. Indeed, some preliminary tests in critical conditions (i.e. SCHED_DEADLINE tasks with short periods) have shown reductions of more than 10% in the average number of deadline misses. On the other hand, the increase in energy consumption when running SCHED_DEADLINE tasks (not yet measured) is likely not negligible (especially in critical scenarios such as "ramp up" utilizations).

The patch is meant as a follow-up to the discussion at OSPM.

Signed-off-by: Claudio Scordino
CC: Viresh Kumar
CC: Rafael J.
Wysocki
CC: Peter Zijlstra
CC: Ingo Molnar
CC: Patrick Bellasi
CC: Juri Lelli
Cc: Luca Abeni
CC: Joel Fernandes
CC: linux-pm@vger.kernel.org
---
 kernel/sched/cpufreq_schedutil.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d2c6083..4de06b0 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -41,6 +41,7 @@ struct sugov_policy {
 	bool			work_in_progress;
 
 	bool			need_freq_update;
+	bool			urgent_freq_update;
 };
 
 struct sugov_cpu {
@@ -92,6 +93,14 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
+	/*
+	 * Continue computing the new frequency. In case of work_in_progress,
+	 * the kthread will resched a change once the current transition is
+	 * finished.
+	 */
+	if (sg_policy->urgent_freq_update)
+		return true;
+
 	if (sg_policy->work_in_progress)
 		return false;
 
@@ -121,6 +130,9 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;
 
+	if (sg_policy->work_in_progress)
+		return;
+
 	if (policy->fast_switch_enabled) {
 		next_freq = cpufreq_driver_fast_switch(policy, next_freq);
 		if (!next_freq)
@@ -274,7 +286,7 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
 static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu, struct sugov_policy *sg_policy)
 {
 	if (cpu_util_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->util_dl)
-		sg_policy->need_freq_update = true;
+		sg_policy->urgent_freq_update = true;
 }
 
 static void sugov_update_single(struct update_util_data *hook, u64 time,
@@ -383,8 +395,11 @@ static void sugov_work(struct kthread_work *work)
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
+	do {
+		sg_policy->urgent_freq_update = false;
+		__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
 					CPUFREQ_RELATION_L);
+	} while (sg_policy->urgent_freq_update);
 	mutex_unlock(&sg_policy->work_lock);
 
 	sg_policy->work_in_progress = false;
@@ -673,6 +688,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 	sg_policy->next_freq = UINT_MAX;
 	sg_policy->work_in_progress = false;
 	sg_policy->need_freq_update = false;
+	sg_policy->urgent_freq_update = false;
 	sg_policy->cached_raw_freq = 0;
 
 	for_each_cpu(cpu, policy->cpus) {