From patchwork Fri Aug 2 18:16:37 2024
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 13751858
From: "Rafael J. Wysocki"
To: x86 Maintainers
Cc: LKML, Linux PM, Thomas Gleixner, Peter Zijlstra, Srinivas Pandruvada,
 "Rafael J. Wysocki", Dietmar Eggemann, Ricardo Neri, Tim Chen
Subject: [PATCH v1 1/3] x86/sched: Introduce arch_rebuild_sched_domains()
Date: Fri, 02 Aug 2024 20:16:37 +0200
Message-ID: <2189204.irdbgypaU6@rjwysocki.net>
In-Reply-To: <4908113.GXAFRqVoOG@rjwysocki.net>
References: <4908113.GXAFRqVoOG@rjwysocki.net>

From: Rafael J. Wysocki

Add arch_rebuild_sched_domains() for rebuilding scheduling domains and
updating topology on x86, and make the ITMT code use it.

First of all, this reduces code duplication somewhat and eliminates the
need to use an extern variable, but it will also lay the groundwork for
subsequent work related to CPU capacity scaling.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki
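For reference while reading the diff below, the pattern being consolidated
works because the scheduler core calls arch_update_cpu_topology() during a
domain rebuild and uses its return value to decide whether the CPU topology
changed. A simplified sketch of that interplay, condensed from the x86 code
touched by this patch (not a literal copy):

	/* x86 keeps a "topology changed" flag that the scheduler core polls. */
	static bool x86_topology_update;

	int arch_update_cpu_topology(void)
	{
		int retval = x86_topology_update;

		/* Report a pending change exactly once. */
		x86_topology_update = false;
		return retval;
	}

	/* The new helper pairs raising the flag with requesting the rebuild. */
	void arch_rebuild_sched_domains(void)
	{
		x86_topology_update = true;
		rebuild_sched_domains();
	}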
---
 arch/x86/include/asm/topology.h |    6 ++++--
 arch/x86/kernel/itmt.c          |   12 ++++--------
 arch/x86/kernel/smpboot.c       |   10 +++++++++-
 3 files changed, 17 insertions(+), 11 deletions(-)

Index: linux-pm/arch/x86/include/asm/topology.h
===================================================================
--- linux-pm.orig/arch/x86/include/asm/topology.h
+++ linux-pm/arch/x86/include/asm/topology.h
@@ -235,8 +235,6 @@ struct pci_bus;
 int x86_pci_root_bus_node(int bus);
 void x86_pci_root_bus_resources(int bus, struct list_head *resources);
 
-extern bool x86_topology_update;
-
 #ifdef CONFIG_SCHED_MC_PRIO
 #include <asm/percpu.h>
 
@@ -284,9 +282,13 @@ static inline long arch_scale_freq_capac
 
 extern void arch_set_max_freq_ratio(bool turbo_disabled);
 extern void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled);
+
+void arch_rebuild_sched_domains(void);
 #else
 static inline void arch_set_max_freq_ratio(bool turbo_disabled) { }
 static inline void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled) { }
+
+static inline void arch_rebuild_sched_domains(void) { }
 #endif
 
 extern void arch_scale_freq_tick(void);

Index: linux-pm/arch/x86/kernel/itmt.c
===================================================================
--- linux-pm.orig/arch/x86/kernel/itmt.c
+++ linux-pm/arch/x86/kernel/itmt.c
@@ -54,10 +54,8 @@ static int sched_itmt_update_handler(str
 	old_sysctl = sysctl_sched_itmt_enabled;
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
-	if (!ret && write && old_sysctl != sysctl_sched_itmt_enabled) {
-		x86_topology_update = true;
-		rebuild_sched_domains();
-	}
+	if (!ret && write && old_sysctl != sysctl_sched_itmt_enabled)
+		arch_rebuild_sched_domains();
 
 	mutex_unlock(&itmt_update_mutex);
@@ -114,8 +112,7 @@ int sched_set_itmt_support(void)
 
 	sysctl_sched_itmt_enabled = 1;
 
-	x86_topology_update = true;
-	rebuild_sched_domains();
+	arch_rebuild_sched_domains();
 
 	mutex_unlock(&itmt_update_mutex);
@@ -150,8 +147,7 @@ void sched_clear_itmt_support(void)
 	if (sysctl_sched_itmt_enabled) {
 		/* disable sched_itmt if we are no longer ITMT capable */
 		sysctl_sched_itmt_enabled = 0;
-		x86_topology_update = true;
-		rebuild_sched_domains();
+		arch_rebuild_sched_domains();
 	}
 
 	mutex_unlock(&itmt_update_mutex);

Index: linux-pm/arch/x86/kernel/smpboot.c
===================================================================
--- linux-pm.orig/arch/x86/kernel/smpboot.c
+++ linux-pm/arch/x86/kernel/smpboot.c
@@ -39,6 +39,7 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/cpuset.h>
 #include <linux/init.h>
 #include <linux/smp.h>
 #include <linux/export.h>
@@ -125,7 +126,7 @@ static DEFINE_PER_CPU_ALIGNED(struct mwa
 int __read_mostly __max_smt_threads = 1;
 
 /* Flag to indicate if a complete sched domain rebuild is required */
-bool x86_topology_update;
+static bool x86_topology_update;
 
 int arch_update_cpu_topology(void)
 {
@@ -135,6 +136,13 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
+#ifdef CONFIG_X86_64
+void arch_rebuild_sched_domains(void) {
+	x86_topology_update = true;
+	rebuild_sched_domains();
+}
+#endif
+
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
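One usability note on the header change above: because the #else branch
provides an empty inline stub, code that wants a rebuild does not have to be
compiled conditionally. A hypothetical call site (illustrative only, not part
of this series):

	#include <asm/topology.h>

	static void capacity_info_changed(void)
	{
		/* Full rebuild where supported; a no-op stub otherwise. */
		arch_rebuild_sched_domains();
	}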
From patchwork Fri Aug 2 18:18:57 2024
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 13751857
From: "Rafael J. Wysocki"
To: x86 Maintainers
Cc: LKML, Linux PM, Thomas Gleixner, Peter Zijlstra, Srinivas Pandruvada,
 "Rafael J. Wysocki", Dietmar Eggemann, Ricardo Neri, Tim Chen
Subject: [PATCH v1 2/3] x86/sched: Add basic support for CPU capacity scaling
Date: Fri, 02 Aug 2024 20:18:57 +0200
Message-ID: <3300400.aeNJFYEL58@rjwysocki.net>
In-Reply-To: <4908113.GXAFRqVoOG@rjwysocki.net>
References: <4908113.GXAFRqVoOG@rjwysocki.net>

From: Rafael J. Wysocki

In order to be able to compute the sizes of tasks consistently across all
CPUs in a hybrid system, it is necessary to provide CPU capacity scaling
information to the scheduler via arch_scale_cpu_capacity().  Moreover, the
value returned by arch_scale_freq_capacity() for a given CPU must correspond
to the arch_scale_cpu_capacity() return value for it, or utilization
computations will be inaccurate.

Add support for this through per-CPU variables holding the capacity and the
maximum-to-base frequency ratio (times SCHED_CAPACITY_SCALE) that will be
returned by arch_scale_cpu_capacity() and used by scale_freq_tick() to
compute arch_freq_scale for the current CPU, respectively.

In order to avoid adding measurable overhead for non-hybrid x86 systems,
which are the vast majority in the field, whether or not the new hybrid CPU
capacity scaling will be in effect is controlled by a static key.  This
static key is set by calling arch_enable_hybrid_capacity_scale(), which also
allocates memory for the per-CPU data and initializes it.  Next,
arch_set_cpu_capacity() is used to set the per-CPU variables mentioned above
for each CPU, and arch_rebuild_sched_domains() needs to be called for the
scheduler to realize that capacity-aware scheduling can be used going
forward.

Signed-off-by: Rafael J. Wysocki
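The enable → set → rebuild sequence described above, as a hedged sketch of a
driver-side caller; get_cpu_perf(), system_max_perf() and get_base_perf()
are placeholder helpers, not part of this series (patch 3 contains the real
intel_pstate implementation):

	static void enable_asym_capacity(void)
	{
		int cpu;

		/* Fails if freq invariance is off or scaling is already enabled. */
		if (!arch_enable_hybrid_capacity_scale())
			return;

		for_each_online_cpu(cpu) {
			/*
			 * Only the cap/base_cap and max_freq/base_freq ratios
			 * matter, so any consistent units will do.
			 */
			arch_set_cpu_capacity(cpu, get_cpu_perf(cpu),
					      system_max_perf(),
					      get_cpu_perf(cpu),
					      get_base_perf(cpu));
		}

		arch_rebuild_sched_domains();
	}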
---
 arch/x86/include/asm/topology.h  |   13 +++++
 arch/x86/kernel/cpu/aperfmperf.c |   91 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 103 insertions(+), 1 deletion(-)

Index: linux-pm/arch/x86/include/asm/topology.h
===================================================================
--- linux-pm.orig/arch/x86/include/asm/topology.h
+++ linux-pm/arch/x86/include/asm/topology.h
@@ -280,11 +280,24 @@ static inline long arch_scale_freq_capac
 }
 #define arch_scale_freq_capacity arch_scale_freq_capacity
 
+bool arch_enable_hybrid_capacity_scale(void);
+void arch_set_cpu_capacity(int cpu, unsigned long cap, unsigned long base_cap,
+			   unsigned long max_freq, unsigned long base_freq);
+
+unsigned long arch_scale_cpu_capacity(int cpu);
+#define arch_scale_cpu_capacity arch_scale_cpu_capacity
+
 extern void arch_set_max_freq_ratio(bool turbo_disabled);
 extern void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled);
 
 void arch_rebuild_sched_domains(void);
 #else
+static inline bool arch_enable_hybrid_capacity_scale(void) { return false; }
+static inline void arch_set_cpu_capacity(int cpu, unsigned long cap,
+					 unsigned long base_cap,
+					 unsigned long max_freq,
+					 unsigned long base_freq) { }
+
 static inline void arch_set_max_freq_ratio(bool turbo_disabled) { }
 static inline void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled) { }

Index: linux-pm/arch/x86/kernel/cpu/aperfmperf.c
===================================================================
--- linux-pm.orig/arch/x86/kernel/cpu/aperfmperf.c
+++ linux-pm/arch/x86/kernel/cpu/aperfmperf.c
@@ -300,6 +300,8 @@ static void register_freq_invariance_sys
 static inline void register_freq_invariance_syscore_ops(void) {}
 #endif
 
+static DEFINE_STATIC_KEY_FALSE(arch_hybrid_cap_scale_key);
+
 static void freq_invariance_enable(void)
 {
 	if (static_branch_unlikely(&arch_scale_freq_key)) {
@@ -332,6 +334,7 @@ static void disable_freq_invariance_work
 	int cpu;
 
 	static_branch_disable(&arch_scale_freq_key);
+	static_branch_disable(&arch_hybrid_cap_scale_key);
 
 	/*
 	 * Set arch_freq_scale to a default value on all cpus
@@ -347,9 +350,90 @@ static DECLARE_WORK(disable_freq_invaria
 DEFINE_PER_CPU(unsigned long, arch_freq_scale) = SCHED_CAPACITY_SCALE;
 EXPORT_PER_CPU_SYMBOL_GPL(arch_freq_scale);
 
+struct arch_hybrid_cpu_scale {
+	unsigned long capacity;
+	unsigned long freq_ratio;
+};
+
+static struct arch_hybrid_cpu_scale __percpu *arch_cpu_scale;
+
+/**
+ * arch_enable_hybrid_capacity_scale - Enable hybrid CPU capacity scaling
+ *
+ * Allocate memory for per-CPU data used by hybrid CPU capacity scaling,
+ * initialize it and set the static key controlling its code paths.
+ *
+ * Must be called before arch_set_cpu_capacity().
+ */
+bool arch_enable_hybrid_capacity_scale(void)
+{
+	int cpu;
+
+	if (!arch_scale_freq_invariant())
+		return false;
+
+	if (static_branch_unlikely(&arch_hybrid_cap_scale_key)) {
+		WARN_ON_ONCE(1);
+		return false;
+	}
+
+	arch_cpu_scale = alloc_percpu(struct arch_hybrid_cpu_scale);
+	if (!arch_cpu_scale)
+		return false;
+
+	for_each_possible_cpu(cpu) {
+		per_cpu_ptr(arch_cpu_scale, cpu)->capacity = SCHED_CAPACITY_SCALE;
+		per_cpu_ptr(arch_cpu_scale, cpu)->freq_ratio = arch_max_freq_ratio;
+	}
+
+	static_branch_enable(&arch_hybrid_cap_scale_key);
+
+	pr_info("Hybrid CPU capacity scaling enabled\n");
+
+	return true;
+}
+
+/**
+ * arch_set_cpu_capacity - Set scale-invariance parameters for a CPU
+ * @cpu: Target CPU.
+ * @cap: Capacity of @cpu, relative to @base_cap, at its maximum frequency.
+ * @base_cap: System-wide maximum CPU capacity.
+ * @max_freq: Frequency of @cpu corresponding to @cap.
+ * @base_freq: Frequency of @cpu at which APERF and MPERF count.
+ *
+ * The units in which @cap and @base_cap are expressed do not matter, so long
+ * as they are consistent, because the former is effectively divided by the
+ * latter.  Analogously for @max_freq and @base_freq.
+ *
+ * After calling this function for all CPUs, call arch_rebuild_sched_domains()
+ * to let the scheduler know that capacity-aware scheduling can be used going
+ * forward.
+ */
+void arch_set_cpu_capacity(int cpu, unsigned long cap, unsigned long base_cap,
+			   unsigned long max_freq, unsigned long base_freq)
+{
+	if (static_branch_likely(&arch_hybrid_cap_scale_key)) {
+		WRITE_ONCE(per_cpu_ptr(arch_cpu_scale, cpu)->capacity,
+			   div_u64(cap << SCHED_CAPACITY_SHIFT, base_cap));
+		WRITE_ONCE(per_cpu_ptr(arch_cpu_scale, cpu)->freq_ratio,
+			   div_u64(max_freq << SCHED_CAPACITY_SHIFT, base_freq));
+	} else {
+		WARN_ON_ONCE(1);
+	}
+}
+
+unsigned long arch_scale_cpu_capacity(int cpu)
+{
+	if (static_branch_unlikely(&arch_hybrid_cap_scale_key))
+		return READ_ONCE(per_cpu_ptr(arch_cpu_scale, cpu)->capacity);
+
+	return SCHED_CAPACITY_SCALE;
+}
+EXPORT_SYMBOL_GPL(arch_scale_cpu_capacity);
+
 static void scale_freq_tick(u64 acnt, u64 mcnt)
 {
 	u64 freq_scale;
+	u64 freq_ratio;
 
 	if (!arch_scale_freq_invariant())
 		return;
@@ -357,7 +441,12 @@ static void scale_freq_tick(u64 acnt, u6
 	if (check_shl_overflow(acnt, 2*SCHED_CAPACITY_SHIFT, &acnt))
 		goto error;
 
-	if (check_mul_overflow(mcnt, arch_max_freq_ratio, &mcnt) || !mcnt)
+	if (static_branch_unlikely(&arch_hybrid_cap_scale_key))
+		freq_ratio = READ_ONCE(this_cpu_ptr(arch_cpu_scale)->freq_ratio);
+	else
+		freq_ratio = arch_max_freq_ratio;
+
+	if (check_mul_overflow(mcnt, freq_ratio, &mcnt) || !mcnt)
 		goto error;
 
 	freq_scale = div64_u64(acnt, mcnt);
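To make the fixed-point arithmetic in scale_freq_tick() concrete, here is a
worked example with invented numbers (SCHED_CAPACITY_SHIFT is 10, so all
ratios are in Q10 form):

	/*
	 * Assume a CPU with base (guaranteed) frequency 2.0 GHz and maximum
	 * frequency 4.0 GHz, so its freq_ratio is (4.0 / 2.0) << 10 = 2048.
	 * Over one tick it ran at 3.0 GHz, i.e. APERF advanced 1.5x as fast
	 * as MPERF:
	 */
	u64 acnt = 3000000, mcnt = 2000000, freq_ratio = 2048;

	acnt <<= 2 * SCHED_CAPACITY_SHIFT;	/* acnt * 2^20 */
	mcnt *= freq_ratio;			/* mcnt * ratio (already Q10) */

	/* (1.5 << 20) / (1 << 11) = 768 = (3.0 GHz / 4.0 GHz) << 10 */
	u64 freq_scale = div64_u64(acnt, mcnt);

In other words, arch_freq_scale ends up expressing the current frequency as
a fraction of the per-CPU maximum, which is exactly what pairs with the
per-CPU capacity returned by arch_scale_cpu_capacity().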
From patchwork Fri Aug 2 18:21:44 2024
X-Patchwork-Submitter: "Rafael J. Wysocki"
X-Patchwork-Id: 13751856
From: "Rafael J. Wysocki"
To: x86 Maintainers
Cc: LKML, Linux PM, Thomas Gleixner, Peter Zijlstra, Srinivas Pandruvada,
 "Rafael J. Wysocki", Dietmar Eggemann, Ricardo Neri, Tim Chen
Subject: [PATCH v1 3/3] cpufreq: intel_pstate: Set asymmetric CPU capacity
 on hybrid systems
Date: Fri, 02 Aug 2024 20:21:44 +0200
Message-ID: <10504155.nUPlyArG6x@rjwysocki.net>
In-Reply-To: <4908113.GXAFRqVoOG@rjwysocki.net>
References: <4908113.GXAFRqVoOG@rjwysocki.net>

From: Rafael J. Wysocki

Make intel_pstate use the HWP_HIGHEST_PERF values from
MSR_HWP_CAPABILITIES to set asymmetric CPU capacity information via the
previously introduced arch_set_cpu_capacity() on hybrid systems without
SMT.

Setting asymmetric CPU capacity is generally necessary to allow the
scheduler to compute task sizes in a consistent way across all CPUs in a
system where they differ by capacity.  That, in turn, should help to
improve scheduling decisions.  It is also necessary for the schedutil
cpufreq governor to operate as expected on hybrid systems where tasks
migrate between CPUs of different capacities.

The underlying observation is that intel_pstate already uses
MSR_HWP_CAPABILITIES to get the CPU performance information that it
exposes via sysfs and bases CPU performance scaling on, so using this
information for setting asymmetric CPU capacity is consistent with what
the driver has been doing already.

Moreover, HWP_HIGHEST_PERF reflects the maximum capacity of a given CPU,
including both the instructions-per-cycle (IPC) factor and the maximum
turbo frequency, and the units in which that value is expressed are the
same for all CPUs in the system, so the maximum capacity ratio between two
CPUs can be obtained by computing the ratio of their HWP_HIGHEST_PERF
values.  Of course, in principle that capacity ratio need not be directly
applicable at lower frequencies, so using it for providing the asymmetric
CPU capacity information to the scheduler is a rough approximation, but it
is as good as it gets.  Also, measurements indicate that this
approximation is not too bad in practice.
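As a worked example of the ratio described above (the perf values are
invented, not measured): suppose the fastest core reports
HWP_HIGHEST_PERF = 46 and an E-core reports 28. Then:

	/* capacity = HWP_HIGHEST_PERF * SCHED_CAPACITY_SCALE / max perf */
	unsigned int p_perf = 46, e_perf = 28;	/* invented readings */

	unsigned long p_cap = p_perf * 1024UL / p_perf;	/* 1024: full capacity */
	unsigned long e_cap = e_perf * 1024UL / p_perf;	/* 28 * 1024 / 46 = 623 */

so the scheduler would see the E-core as roughly 61% of the P-core's
capacity.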
If the given system is hybrid and non-SMT, the new code disables ITMT
support in the scheduler (because it may get in the way of the asymmetric
CPU capacity code in the scheduler that automatically gets enabled by
setting asymmetric CPU capacity) after initializing all online CPUs and
finding the one with the maximum HWP_HIGHEST_PERF value.  Next, it computes
the capacity number for each (online) CPU by dividing the product of its
HWP_HIGHEST_PERF and SCHED_CAPACITY_SCALE by the maximum HWP_HIGHEST_PERF.

When a CPU goes offline, its capacity is reset to SCHED_CAPACITY_SCALE
and, if it is the one with the maximum HWP_HIGHEST_PERF value, the capacity
numbers for all of the other online CPUs are recomputed.  This also takes
care of cleanup during driver operation mode changes.

Analogously, when a new CPU goes online, its capacity number is updated
and, if its HWP_HIGHEST_PERF value is greater than the current maximum
one, the capacity numbers for all of the other online CPUs are recomputed.

The case when the driver is notified of a CPU capacity change, either
through the HWP interrupt or through an ACPI notification, is handled
similarly to the CPU online case above, except that if the target CPU is
the current highest-capacity one and its capacity is reduced, the capacity
numbers for all of the other online CPUs need to be recomputed as well.

If the driver's "no_turbo" sysfs attribute is updated, all of the CPU
capacity information is computed from scratch to reflect the new turbo
status.

Signed-off-by: Rafael J. Wysocki
---
 drivers/cpufreq/intel_pstate.c |  210 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 206 insertions(+), 4 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -16,6 +16,7 @@
 #include <linux/tick.h>
 #include <linux/slab.h>
 #include <linux/sched/cpufreq.h>
+#include <linux/sched/smt.h>
 #include <linux/list.h>
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
@@ -215,6 +216,7 @@ struct global_params {
  * @hwp_req_cached:	Cached value of the last HWP Request MSR
  * @hwp_cap_cached:	Cached value of the last HWP Capabilities MSR
  * @last_io_update:	Last time when IO wake flag was set
+ * @capacity_perf:	Highest perf used for scale invariance
  * @sched_flags:	Store scheduler flags for possible cross CPU update
  * @hwp_boost_min:	Last HWP boosted min performance
  * @suspended:		Whether or not the driver has been suspended.
@@ -253,6 +255,7 @@ struct cpudata {
 	u64 hwp_req_cached;
 	u64 hwp_cap_cached;
 	u64 last_io_update;
+	unsigned int capacity_perf;
 	unsigned int sched_flags;
 	u32 hwp_boost_min;
 	bool suspended;
@@ -295,6 +298,7 @@ static int hwp_mode_bdw __ro_after_init;
 static bool per_cpu_limits __ro_after_init;
 static bool hwp_forced __ro_after_init;
 static bool hwp_boost __read_mostly;
+static bool hwp_is_hybrid;
 
 static struct cpufreq_driver *intel_pstate_driver __read_mostly;
 
@@ -934,6 +938,106 @@ static struct freq_attr *hwp_cpufreq_att
 	NULL,
 };
 
+static struct cpudata *hybrid_max_perf_cpu __read_mostly;
+/*
+ * Protects hybrid_max_perf_cpu, the capacity_perf fields in struct cpudata,
+ * and the x86 arch scale-invariance information from concurrent updates.
+ */
+static DEFINE_MUTEX(hybrid_capacity_lock);
+
+static void hybrid_set_cpu_capacity(struct cpudata *cpu)
+{
+	arch_set_cpu_capacity(cpu->cpu, cpu->capacity_perf,
+			      hybrid_max_perf_cpu->capacity_perf,
+			      cpu->capacity_perf,
+			      cpu->pstate.max_pstate_physical);
+
+	pr_debug("CPU%d: perf = %u, max. perf = %u, base perf = %d\n", cpu->cpu,
+		 cpu->capacity_perf, hybrid_max_perf_cpu->capacity_perf,
+		 cpu->pstate.max_pstate_physical);
+}
+
+static void hybrid_clear_cpu_capacity(unsigned int cpunum)
+{
+	arch_set_cpu_capacity(cpunum, 1, 1, 1, 1);
+}
+
+static void hybrid_get_capacity_perf(struct cpudata *cpu)
+{
+	if (READ_ONCE(global.no_turbo)) {
+		cpu->capacity_perf = cpu->pstate.max_pstate_physical;
+		return;
+	}
+
+	cpu->capacity_perf = HWP_HIGHEST_PERF(READ_ONCE(cpu->hwp_cap_cached));
+}
+
+static void hybrid_set_capacity_of_cpus(void)
+{
+	int cpunum;
+
+	for_each_online_cpu(cpunum) {
+		struct cpudata *cpu = all_cpu_data[cpunum];
+
+		if (cpu)
+			hybrid_set_cpu_capacity(cpu);
+	}
+}
+
+static void hybrid_update_cpu_scaling(void)
+{
+	struct cpudata *max_perf_cpu = NULL;
+	unsigned int max_cap_perf = 0;
+	int cpunum;
+
+	for_each_online_cpu(cpunum) {
+		struct cpudata *cpu = all_cpu_data[cpunum];
+
+		if (!cpu)
+			continue;
+
+		/*
+		 * During initialization, CPU performance at full capacity needs
+		 * to be determined.
+		 */
+		if (!hybrid_max_perf_cpu)
+			hybrid_get_capacity_perf(cpu);
+
+		/*
+		 * If hybrid_max_perf_cpu is not NULL at this point, it is
+		 * being replaced, so don't take it into account when looking
+		 * for the new one.
+		 */
+		if (cpu == hybrid_max_perf_cpu)
+			continue;
+
+		if (cpu->capacity_perf > max_cap_perf) {
+			max_cap_perf = cpu->capacity_perf;
+			max_perf_cpu = cpu;
+		}
+	}
+
+	if (max_perf_cpu) {
+		hybrid_max_perf_cpu = max_perf_cpu;
+		hybrid_set_capacity_of_cpus();
+	} else {
+		pr_info("Found no CPUs with nonzero maximum performance\n");
+		/* Revert to the flat CPU capacity structure. */
+		for_each_online_cpu(cpunum)
+			hybrid_clear_cpu_capacity(cpunum);
+	}
+}
+
+static void hybrid_init_cpu_scaling(void)
+{
+	mutex_lock(&hybrid_capacity_lock);
+
+	hybrid_max_perf_cpu = NULL;
+	hybrid_update_cpu_scaling();
+
+	mutex_unlock(&hybrid_capacity_lock);
+}
+
 static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
 {
 	u64 cap;
@@ -962,6 +1066,43 @@ static void intel_pstate_get_hwp_cap(str
 	}
 }
 
+static void hybrid_update_capacity(struct cpudata *cpu)
+{
+	unsigned int max_cap_perf;
+
+	mutex_lock(&hybrid_capacity_lock);
+
+	if (!hybrid_max_perf_cpu)
+		goto unlock;
+
+	/*
+	 * The maximum performance of the CPU may have changed, but assume
+	 * that the performance of the other CPUs has not changed.
+	 */
+	max_cap_perf = hybrid_max_perf_cpu->capacity_perf;
+
+	intel_pstate_get_hwp_cap(cpu);
+
+	hybrid_get_capacity_perf(cpu);
+	/* Should hybrid_max_perf_cpu be replaced by this CPU? */
+	if (cpu->capacity_perf > max_cap_perf) {
+		hybrid_max_perf_cpu = cpu;
+		hybrid_set_capacity_of_cpus();
+		goto unlock;
+	}
+
+	/* If this CPU is hybrid_max_perf_cpu, should it be replaced? */
+	if (cpu == hybrid_max_perf_cpu && cpu->capacity_perf < max_cap_perf) {
+		hybrid_update_cpu_scaling();
+		goto unlock;
+	}
+
+	hybrid_set_cpu_capacity(cpu);
+
+unlock:
+	mutex_unlock(&hybrid_capacity_lock);
+}
+
 static void intel_pstate_hwp_set(unsigned int cpu)
 {
 	struct cpudata *cpu_data = all_cpu_data[cpu];
@@ -1070,6 +1211,22 @@ static void intel_pstate_hwp_offline(str
 		value |= HWP_ENERGY_PERF_PREFERENCE(HWP_EPP_POWERSAVE);
 
 	wrmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
+
+	mutex_lock(&hybrid_capacity_lock);
+
+	if (!hybrid_max_perf_cpu) {
+		mutex_unlock(&hybrid_capacity_lock);
+
+		return;
+	}
+
+	if (hybrid_max_perf_cpu == cpu)
+		hybrid_update_cpu_scaling();
+
+	mutex_unlock(&hybrid_capacity_lock);
+
+	/* Reset the capacity of the CPU going offline to the initial value. */
+	hybrid_clear_cpu_capacity(cpu->cpu);
 }
 
 #define POWER_CTL_EE_ENABLE	1
@@ -1165,21 +1322,41 @@ static void __intel_pstate_update_max_fr
 static void intel_pstate_update_limits(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
+	struct cpudata *cpudata;
 
 	if (!policy)
 		return;
 
-	__intel_pstate_update_max_freq(all_cpu_data[cpu], policy);
+	cpudata = all_cpu_data[cpu];
+
+	__intel_pstate_update_max_freq(cpudata, policy);
+
+	/* Prevent the driver from being unregistered now. */
+	mutex_lock(&intel_pstate_driver_lock);
 
 	cpufreq_cpu_release(policy);
+
+	hybrid_update_capacity(cpudata);
+
+	mutex_unlock(&intel_pstate_driver_lock);
 }
 
 static void intel_pstate_update_limits_for_all(void)
 {
 	int cpu;
 
-	for_each_possible_cpu(cpu)
-		intel_pstate_update_limits(cpu);
+	for_each_possible_cpu(cpu) {
+		struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
+
+		if (!policy)
+			continue;
+
+		__intel_pstate_update_max_freq(all_cpu_data[cpu], policy);
+
+		cpufreq_cpu_release(policy);
+	}
+
+	hybrid_init_cpu_scaling();
 }
 
 /************************** sysfs begin ************************/
@@ -1618,6 +1795,13 @@ static void intel_pstate_notify_work(str
 		__intel_pstate_update_max_freq(cpudata, policy);
 
 		cpufreq_cpu_release(policy);
+
+		/*
+		 * The driver will not be unregistered while this function is
+		 * running, so update the capacity without acquiring the driver
+		 * lock.
+		 */
+		hybrid_update_capacity(cpudata);
 	}
 
 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
@@ -2034,8 +2218,10 @@ static void intel_pstate_get_cpu_pstates
 	if (pstate_funcs.get_cpu_scaling) {
 		cpu->pstate.scaling = pstate_funcs.get_cpu_scaling(cpu->cpu);
-		if (cpu->pstate.scaling != perf_ctl_scaling)
+		if (cpu->pstate.scaling != perf_ctl_scaling) {
 			intel_pstate_hybrid_hwp_adjust(cpu);
+			hwp_is_hybrid = true;
+		}
 	} else {
 		cpu->pstate.scaling = perf_ctl_scaling;
 	}
@@ -2703,6 +2889,8 @@ static int intel_pstate_cpu_online(struc
 		 */
 		intel_pstate_hwp_reenable(cpu);
 		cpu->suspended = false;
+
+		hybrid_update_capacity(cpu);
 	}
 
 	return 0;
@@ -3143,6 +3331,20 @@ static int intel_pstate_register_driver(
 
 	global.min_perf_pct = min_perf_pct_min();
 
+	/*
+	 * On hybrid systems, use asym capacity instead of ITMT, but because
+	 * the capacity of SMT threads is not deterministic even approximately,
+	 * do not do that when SMT is in use.
+	 */
+	if (hwp_is_hybrid && !sched_smt_active() &&
+	    arch_enable_hybrid_capacity_scale()) {
+		sched_clear_itmt_support();
+
+		hybrid_init_cpu_scaling();
+
+		arch_rebuild_sched_domains();
+	}
+
 	return 0;
 }
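Putting the three patches together, the driver-registration path ends up
driving the arch API in this order (a paraphrase of the code above, not
literal kernel code):

	/*
	 * intel_pstate_register_driver()
	 *   if (hwp_is_hybrid && !sched_smt_active() &&
	 *       arch_enable_hybrid_capacity_scale())    // patch 2: percpu data + static key
	 *     sched_clear_itmt_support();               // asym capacity replaces ITMT
	 *     hybrid_init_cpu_scaling();                // patch 3: per-CPU capacity_perf
	 *       -> hybrid_set_cpu_capacity() per CPU
	 *            -> arch_set_cpu_capacity()         // patch 2
	 *     arch_rebuild_sched_domains();             // patch 1
	 */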