From patchwork Mon Oct 16 10:24:33 2023
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 13422943
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: ardb@kernel.org, bertrand.marquis@arm.com, boris.ostrovsky@oracle.com,
 broonie@kernel.org, catalin.marinas@arm.com, daniel.lezcano@linaro.org,
 james.morse@arm.com, jgross@suse.com, kristina.martsenko@arm.com,
 mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev,
 pcc@google.com, sstabellini@kernel.org, suzuki.poulose@arm.com,
 tglx@linutronix.de, vladimir.murzin@arm.com, will@kernel.org
Subject: [PATCH v4 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME
Date: Mon, 16 Oct 2023 11:24:33 +0100
Message-Id: <20231016102501.3643901-11-mark.rutland@arm.com>
In-Reply-To: <20231016102501.3643901-1-mark.rutland@arm.com>
References: <20231016102501.3643901-1-mark.rutland@arm.com>

When a CPU is onlined, we first probe for supported features and
properties, and then we subsequently enable features that have been
detected. This is a little problematic for SVE and SME, as some
properties (e.g. vector lengths) cannot be probed while they are
disabled. Due to this, the code probing for SVE properties has to enable
SVE for EL1 prior to probing, and the code probing for SME properties
has to enable SME for EL1 prior to probing. We never disable SVE or SME
for EL1 after probing.

It would be a little nicer to transiently enable SVE and SME during
probing, leaving them both disabled unless explicitly enabled, as this
would make it much easier to catch unintentional usage (e.g. when they
are not present system-wide).

This patch reworks the SVE and SME feature probing code to only
transiently enable support at EL1, disabling it again after probing is
complete.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
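(Editor's note, not part of the commit message: a minimal usage sketch
of the new helpers, mirroring the hunks below. probe_sve_properties()
is a hypothetical stand-in for the real probing calls, e.g.
read_zcr_features(), init_cpu_ftr_reg() and vec_init_vq_map().)

	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
		/* Save CPACR_EL1, enabling FP/SIMD and SVE for EL1 */
		unsigned long cpacr = cpacr_save_enable_kernel_sve();

		/* SVE registers (e.g. ZCR_EL1) can now be accessed without trapping */
		probe_sve_properties();

		/* Put CPACR_EL1 back, leaving SVE disabled at EL1 again */
		cpacr_restore(cpacr);
	}

The isb() in each helper provides the context synchronization required
for the CPACR_EL1 write to take effect on subsequent instructions; the
SME path is identical but uses cpacr_save_enable_kernel_sme().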
 arch/arm64/include/asm/fpsimd.h | 26 ++++++++++++++++++++++++++
 arch/arm64/kernel/cpufeature.c  | 24 ++++++++++++++++++++++--
 arch/arm64/kernel/fpsimd.c      |  7 ++-----
 3 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 8df46f186c64b..bd7bea92dae07 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -32,6 +32,32 @@
 #define VFP_STATE_SIZE		((32 * 8) + 4)
 #endif
 
+static inline unsigned long cpacr_save_enable_kernel_sve(void)
+{
+	unsigned long old = read_sysreg(cpacr_el1);
+	unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_ZEN_EL1EN;
+
+	write_sysreg(old | set, cpacr_el1);
+	isb();
+	return old;
+}
+
+static inline unsigned long cpacr_save_enable_kernel_sme(void)
+{
+	unsigned long old = read_sysreg(cpacr_el1);
+	unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_SMEN_EL1EN;
+
+	write_sysreg(old | set, cpacr_el1);
+	isb();
+	return old;
+}
+
+static inline void cpacr_restore(unsigned long cpacr)
+{
+	write_sysreg(cpacr, cpacr_el1);
+	isb();
+}
+
 /*
  * When we defined the maximum SVE vector length we defined the ABI so
  * that the maximum vector length included all the reserved for future
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5add7d06469d8..e3e0d3d3bd6b7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1040,13 +1040,19 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 
 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
 	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+		unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
 		info->reg_zcr = read_zcr_features();
 		init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
 		vec_init_vq_map(ARM64_VEC_SVE);
+
+		cpacr_restore(cpacr);
 	}
 
 	if (IS_ENABLED(CONFIG_ARM64_SME) &&
 	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+		unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
 		info->reg_smcr = read_smcr_features();
 		/*
 		 * We mask out SMPS since even if the hardware
@@ -1056,6 +1062,8 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 		info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
 		init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
 		vec_init_vq_map(ARM64_VEC_SME);
+
+		cpacr_restore(cpacr);
 	}
 
 	if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
@@ -1291,6 +1299,8 @@ void update_cpu_features(int cpu,
 
 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
 	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+		unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
 		info->reg_zcr = read_zcr_features();
 		taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu,
 					info->reg_zcr, boot->reg_zcr);
@@ -1298,10 +1308,14 @@ void update_cpu_features(int cpu,
 		/* Probe vector lengths */
 		if (!system_capabilities_finalized())
 			vec_update_vq_map(ARM64_VEC_SVE);
+
+		cpacr_restore(cpacr);
 	}
 
 	if (IS_ENABLED(CONFIG_ARM64_SME) &&
 	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+		unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
 		info->reg_smcr = read_smcr_features();
 		/*
 		 * We mask out SMPS since even if the hardware
@@ -1315,6 +1329,8 @@ void update_cpu_features(int cpu,
 		/* Probe vector lengths */
 		if (!system_capabilities_finalized())
 			vec_update_vq_map(ARM64_VEC_SME);
+
+		cpacr_restore(cpacr);
 	}
 
 	/*
@@ -3174,6 +3190,8 @@ static void verify_local_elf_hwcaps(void)
 
 static void verify_sve_features(void)
 {
+	unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
 	u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1);
 	u64 zcr = read_zcr_features();
 
@@ -3186,11 +3204,13 @@ static void verify_sve_features(void)
 		cpu_die_early();
 	}
 
-	/* Add checks on other ZCR bits here if necessary */
+	cpacr_restore(cpacr);
 }
 
 static void verify_sme_features(void)
 {
+	unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
 	u64 safe_smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1);
 	u64 smcr = read_smcr_features();
 
@@ -3203,7 +3223,7 @@ static void verify_sme_features(void)
 		cpu_die_early();
 	}
 
-	/* Add checks on other SMCR bits here if necessary */
+	cpacr_restore(cpacr);
 }
 
 static void verify_hyp_capabilities(void)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 91e44ac7150f9..601b973f90ad2 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1174,7 +1174,7 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
  * Read the pseudo-ZCR used by cpufeatures to identify the supported SVE
  * vector length.
  *
- * Use only if SVE is present.
+ * Use only if SVE is present and enabled.
  * This function clobbers the SVE vector length.
  */
 u64 read_zcr_features(void)
@@ -1183,7 +1183,6 @@ u64 read_zcr_features(void)
 	 * Set the maximum possible VL, and write zeroes to all other
 	 * bits to see if they stick.
 	 */
-	sve_kernel_enable(NULL);
 	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1);
 
 	/* Return LEN value that would be written to get the maximum VL */
@@ -1337,13 +1336,11 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
  * Read the pseudo-SMCR used by cpufeatures to identify the supported
  * vector length.
  *
- * Use only if SME is present.
+ * Use only if SME is present and enabled.
  * This function clobbers the SME vector length.
  */
 u64 read_smcr_features(void)
 {
-	sme_kernel_enable(NULL);
-
 	/*
 	 * Set the maximum possible VL.
 	 */