[28/37] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}

Message ID 20230919092850.1940729-29-mark.rutland@arm.com (mailing list archive)
State New, archived
Series arm64: Remove cpus_have_const_cap()

Commit Message

Mark Rutland Sept. 19, 2023, 9:28 a.m. UTC
In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to
check for the relevant cpucaps, but this is only necessary so that
sve_setup() and sme_setup() can run prior to alternatives being patched,
and otherwise alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
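
As a simplified sketch of the logic described above (not the verbatim
definition from asm/cpufeature.h), cpus_have_const_cap() at this point
in the series behaves roughly as follows:

static __always_inline bool cpus_have_const_cap(int num)
{
	/* Once cpucaps are finalized, use a patched alternative branch. */
	if (system_capabilities_finalized())
		return alternative_has_cap_unlikely(num);

	/* Before finalization, fall back to a system_cpucaps bitmap test. */
	return cpus_have_cap(num);
}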

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
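
For illustration, a sketch of the three candidates (the callsite below
is hypothetical; the exact definitions live in asm/cpufeature.h and
asm/alternative-macros.h):

static bool example_callsite(void)
{
	/* Bitmap test; usable at any point after the cap is detected. */
	if (cpus_have_cap(ARM64_SVE))
		return true;

	/* Patched branch; reads as false until alternatives are applied. */
	if (alternative_has_cap_unlikely(ARM64_SVE))
		return true;

	/* Only valid once system cpucaps have been finalized. */
	return cpus_have_final_cap(ARM64_SVE);
}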

All of system_supports_{sve,sme,sme2,fa64}() will return false prior to
system cpucaps being detected. In the window between system cpucaps being
detected and patching alternatives, sve_setup() and sme_setup() need
system_supports_sve() and system_supports_sme() to report the detected
cpucaps so that the SVE and SME properties can be initialized, but all
other users of system_supports_{sve,sme,sme2,fa64}() don't depend on
the relevant cpucap reading as true until alternatives are patched:

* No KVM code runs until after alternatives are patched, and so this can
  safely use cpus_have_final_cap() or alternative_has_cap_*().

* The cpuid_cpu_online() callback in arch/arm64/kernel/cpuinfo.c is
  registered later from cpuinfo_regs_init() as a device_initcall, and so
  this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The entry, signal, and ptrace code isn't reachable until userspace has
  run, and so this can safely use cpus_have_final_cap() or
  alternative_has_cap_*().

* Currently perf_reg_validate() will un-reserve the PERF_REG_ARM64_VG
  pseudo-register before alternatives are patched, and before
  sve_setup() has run. If a sampling event is created early enough, this
  would allow perf_reg_value() to sample (the as-yet uninitialized)
  thread_struct::vl[] prior to alternatives being patched.

  It would be preferable to defer this until alternatives are patched,
  and this can safely use alternative_has_cap_*().

* The context-switch code will run during this window as part of the
  stop_machine() used during apply_alternatives_all(), and potentially
  for other work if other kernel threads are created early. No threads
  require the use of SVE/SME/SME2/FA64 prior to alternatives being
  patched, and it would be preferable for the related context-switch
  logic to take effect after alternatives are patched so that it is
  guaranteed to see a consistent system-wide state (e.g. anything
  initialized by sve_setup() and sme_setup()); see the sketch after
  this list.

  This can safely use alternative_has_cap_*().
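
As a sketch of the context-switch case above (the callsite shape is
illustrative only; the real logic lives in arch/arm64/kernel/fpsimd.c):

static void example_ctxsw_step(void)
{
	/*
	 * Before alternatives are patched, system_supports_sve() reads
	 * as false, so the SVE-specific handling is skipped; once
	 * patched, it reflects the finalized system-wide state.
	 */
	if (system_supports_sve() && test_thread_flag(TIF_SVE))
		sve_user_disable();
}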

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The sve_setup() and sme_setup() functions are modified to
use cpus_have_cap() directly so that they can observe the cpucaps being
set prior to alternatives being patched.
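
For reference, alternative_has_cap_unlikely() expands to roughly the
following (simplified from asm/alternative-macros.h): a single NOP that
is patched into a branch when the cap is set, so no load from the
system_cpucaps bitmap is ever generated:

static __always_inline bool
alternative_has_cap_unlikely(const unsigned long cpucap)
{
	asm goto(
	ALTERNATIVE("nop", "b %l[l_yes]", %[cpucap])
	:
	: [cpucap] "i" (cpucap)
	:
	: l_yes);

	/* Default path: the NOP falls through and reports the cap absent. */
	return false;
l_yes:
	return true;
}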

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 8 ++++----
 arch/arm64/kernel/fpsimd.c          | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

Comments

Mark Brown Sept. 19, 2023, 11:27 a.m. UTC | #1
On Tue, Sep 19, 2023 at 10:28:41AM +0100, Mark Rutland wrote:
> In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to
> check for the relevant cpucaps, but this is only necessary so that
> sve_setup() and sme_setup() can run prior to alternatives being patched,
> and otherwise alternative_has_cap_*() would be preferable.

Reviewed-by: Mark Brown <broonie@kernel.org>

Patch

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7c218828c06d5..4deaa94de36e8 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -776,22 +776,22 @@  static inline bool system_uses_ttbr0_pan(void)
 
 static __always_inline bool system_supports_sve(void)
 {
-	return cpus_have_const_cap(ARM64_SVE);
+	return alternative_has_cap_unlikely(ARM64_SVE);
 }
 
 static __always_inline bool system_supports_sme(void)
 {
-	return cpus_have_const_cap(ARM64_SME);
+	return alternative_has_cap_unlikely(ARM64_SME);
 }
 
 static __always_inline bool system_supports_sme2(void)
 {
-	return cpus_have_const_cap(ARM64_SME2);
+	return alternative_has_cap_unlikely(ARM64_SME2);
 }
 
 static __always_inline bool system_supports_fa64(void)
 {
-	return cpus_have_const_cap(ARM64_SME_FA64);
+	return alternative_has_cap_unlikely(ARM64_SME_FA64);
 }
 
 static __always_inline bool system_supports_tpidr2(void)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 6b7960c445daf..85bba83dda996 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1199,7 +1199,7 @@  void __init sve_setup(void)
 	DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
 	unsigned long b;
 
-	if (!system_supports_sve())
+	if (!cpus_have_cap(ARM64_SVE))
 		return;
 
 	/*
@@ -1358,7 +1358,7 @@  void __init sme_setup(void)
 	u64 smcr;
 	int min_bit;
 
-	if (!system_supports_sme())
+	if (!cpus_have_cap(ARM64_SME))
 		return;
 
 	/*