From patchwork Thu Sep 24 12:39:36 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ionela Voinescu
X-Patchwork-Id: 11797203
From: Ionela Voinescu
To: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
    catalin.marinas@arm.com, will@kernel.org, rjw@rjwysocki.net,
    viresh.kumar@linaro.org
Cc: dietmar.eggemann@arm.com, qperret@google.com, valentin.schneider@arm.com,
    linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, ionela.voinescu@arm.com
Subject: [PATCH 2/3] sched/topology: condition EAS enablement on FIE support
Date: Thu, 24 Sep 2020 13:39:36 +0100
Message-Id: <20200924123937.20938-3-ionela.voinescu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200924123937.20938-1-ionela.voinescu@arm.com>
References: <20200924123937.20938-1-ionela.voinescu@arm.com>
Precedence: bulk
X-Mailing-List: linux-pm@vger.kernel.org

In order to make accurate predictions across CPUs and for all performance
states, Energy Aware Scheduling (EAS) needs frequency-invariant load
tracking signals.

EAS task placement aims to minimize energy consumption, and does so in
part by limiting the search space to only CPUs with the highest spare
capacity (CPU capacity - CPU utilization) in their performance domain.
Those candidates are the placement choices that will keep the frequency
at its lowest possible level and therefore save the most energy.

But without frequency invariance, a CPU's utilization is relative to the
CPU's current performance level, not to its maximum performance level,
which determines its capacity. As a result, it will fail to correctly
indicate any potential spare capacity obtained by an increase in the
CPU's performance level. A non-invariant utilization signal would
therefore render the EAS task placement logic invalid.

Now that support for the Frequency Invariance Engine (FIE) is properly
reported through arch_scale_freq_invariant() for arm and arm64 systems,
we can check for it when initializing EAS. Warn and bail out otherwise.
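As an illustration (a minimal sketch only, not the kernel's PELT
implementation; the helpers invariant_util() and spare_capacity() and the
freq_scale parameter are made up for this example), frequency invariance
amounts to scaling raw utilization by the CPU's current performance level
before comparing it with capacity:

#define SCHED_CAPACITY_SCALE 1024UL

unsigned long invariant_util(unsigned long raw_util, unsigned long freq_scale)
{
        /* freq_scale ~= (cur_freq * SCHED_CAPACITY_SCALE) / max_freq */
        return (raw_util * freq_scale) / SCHED_CAPACITY_SCALE;
}

unsigned long spare_capacity(unsigned long capacity, unsigned long util)
{
        /*
         * Only meaningful if 'util' is frequency invariant; otherwise a
         * CPU running at a low frequency looks busier than it would be
         * after an increase in its performance level.
         */
        return capacity > util ? capacity - util : 0;
}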
Signed-off-by: Ionela Voinescu
Suggested-by: Quentin Perret
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
 kernel/sched/topology.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 4073f693e2b5..348d563c2210 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -328,6 +328,7 @@ static void sched_energy_set(bool has_eas)
  * 3. no SMT is detected.
  * 4. the EM complexity is low enough to keep scheduling overheads low;
  * 5. schedutil is driving the frequency of all CPUs of the rd;
+ * 6. frequency invariance support is present;
  *
  * The complexity of the Energy Model is defined as:
  *
@@ -376,6 +377,12 @@ static bool build_perf_domains(const struct cpumask *cpu_map)
 		goto free;
 	}
 
+	if (!arch_scale_freq_invariant()) {
+		pr_warn("rd %*pbl: Disabling EAS: frequency-invariant load tracking not supported",
+			cpumask_pr_args(cpu_map));
+		goto free;
+	}
+
 	for_each_cpu(i, cpu_map) {
 		/* Skip already covered CPUs. */
 		if (find_pd(pd, i))