From patchwork Fri Oct 11 07:50:48 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832212
From: Shameer Kolothum
Subject: [RFC PATCH 1/6] arm64: Modify callback matches() fn to take a target info
Date: Fri, 11 Oct 2024 08:50:48 +0100
Message-ID: <20241011075053.80540-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

In preparation for identifying the errata associated with a particular
target CPU, modify the matches() callback to take a target pointer. For
capabilities representing system features the new argument is unused and
is named __unused. In subsequent patches, the errata workaround matches()
functions will use the target pointer instead of read_cpuid_id() to check
the CPU model.

No functional changes intended.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/cpufeature.h |  7 ++-
 arch/arm64/include/asm/spectre.h    |  8 +--
 arch/arm64/kernel/cpu_errata.c      | 19 +++---
 arch/arm64/kernel/cpufeature.c      | 97 +++++++++++++++++------------
 arch/arm64/kernel/proton-pack.c     | 13 ++--
 5 files changed, 82 insertions(+), 62 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 3d261cc123c1..c7b1d3ae469e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -335,7 +335,8 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	u16 type;
-	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
+	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope,
+			void *target);
 	/*
 	 * Take the appropriate actions to configure this capability
 	 * for this CPU. If the capability is detected by the kernel
@@ -398,12 +399,12 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
  */
 static inline bool
 cpucap_multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry,
-			       int scope)
+			       int scope, void *target)
 {
 	const struct arm64_cpu_capabilities *caps;
 
 	for (caps = entry->match_list; caps->matches; caps++)
-		if (caps->matches(caps, scope))
+		if (caps->matches(caps, scope, target))
 			return true;
 
 	return false;
diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 0c4d9045c31f..295de6a08bc2 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -82,21 +82,21 @@ static __always_inline void arm64_apply_bp_hardening(void)
 }
 
 enum mitigation_state arm64_get_spectre_v2_state(void);
-bool has_spectre_v2(const struct arm64_cpu_capabilities *cap, int scope);
+bool has_spectre_v2(const struct arm64_cpu_capabilities *cap, int scope, void *target);
 void spectre_v2_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 
-bool has_spectre_v3a(const struct arm64_cpu_capabilities *cap, int scope);
+bool has_spectre_v3a(const struct arm64_cpu_capabilities *cap, int scope, void *target);
 void spectre_v3a_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 
 enum mitigation_state arm64_get_spectre_v4_state(void);
-bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope);
+bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope, void *target);
 void spectre_v4_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 void spectre_v4_enable_task_mitigation(struct task_struct *tsk);
 
 enum mitigation_state arm64_get_meltdown_state(void);
 
 enum mitigation_state arm64_get_spectre_bhb_state(void);
-bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope, void *target);
 u8 spectre_bhb_loop_affected(int scope);
 void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
 bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index dfefbdf4073a..37464f100a21 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -15,7 +15,8 @@
 #include
 
 static bool __maybe_unused
-is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
+is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope,
+		       void *target)
 {
 	const struct arm64_midr_revidr *fix;
 	u32 midr = read_cpuid_id(), revidr;
@@ -35,14 +36,14 @@ is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
 
 static bool __maybe_unused
 is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
-			    int scope)
+			    int scope, void *target)
 {
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 	return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
 }
 
 static bool __maybe_unused
-is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
+is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope, void *target)
 {
 	u32 model;
 
@@ -57,7 +58,7 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
 
 static bool
 has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
-			  int scope)
+			  int scope, void *__unused)
 {
 	u64 mask = arm64_ftr_reg_ctrel0.strict_mask;
 	u64 sys = arm64_ftr_reg_ctrel0.sys_val & mask;
@@ -109,9 +110,9 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *cap)
 #ifdef CONFIG_ARM64_ERRATUM_1463225
 static bool
 has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
-			       int scope)
+			       int scope, void *target)
 {
-	return is_affected_midr_range_list(entry, scope) && is_kernel_in_hyp_mode();
+	return is_affected_midr_range_list(entry, scope, target) && is_kernel_in_hyp_mode();
 }
 #endif
 
@@ -166,11 +167,11 @@ static const __maybe_unused struct midr_range tx2_family_cpus[] = {
 
 static bool __maybe_unused
 needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
-			 int scope)
+			 int scope, void *target)
 {
 	int i;
 
-	if (!is_affected_midr_range_list(entry, scope) ||
+	if (!is_affected_midr_range_list(entry, scope, target) ||
 	    !is_hyp_mode_available())
 		return false;
 
@@ -184,7 +185,7 @@ needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
 
 static bool __maybe_unused
 has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
-				int scope)
+				int scope, void *target)
 {
 	u32 midr = read_cpuid_id();
 	bool has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 718728a85430..ac0cff5ab09d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1538,7 +1538,7 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
 #include
 
 static bool
-has_always(const struct arm64_cpu_capabilities *entry, int scope)
+has_always(const struct arm64_cpu_capabilities *entry, int scope, void *__unused)
 {
 	return true;
 }
@@ -1581,7 +1581,8 @@ read_scoped_sysreg(const struct arm64_cpu_capabilities *entry, int scope)
 }
 
 static bool
-has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
+has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope,
+		       void *__unused)
 {
 	int mask;
 	struct arm64_ftr_reg *regp;
@@ -1601,7 +1602,8 @@ has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
 }
 
 static bool
-has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
+has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope,
+		  void *__unused)
 {
 	u64 val = read_scoped_sysreg(entry, scope);
 	return feature_matches(val, entry);
@@ -1651,9 +1653,10 @@ static int __init aarch32_el0_sysfs_init(void)
 }
 device_initcall(aarch32_el0_sysfs_init);
 
-static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope,
+			  void *__unused)
 {
-	if (!has_cpuid_feature(entry, scope))
+	if (!has_cpuid_feature(entry, scope, NULL))
 		return allow_mismatched_32bit_el0;
 
 	if (scope == SCOPE_SYSTEM)
@@ -1662,11 +1665,12 @@ static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
 	return true;
 }
 
-static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope,
+				    void *__unused)
 {
 	bool has_sre;
 
-	if (!has_cpuid_feature(entry, scope))
+	if (!has_cpuid_feature(entry, scope, NULL))
 		return false;
 
 	has_sre = gic_enable_sre();
@@ -1678,7 +1682,7 @@ static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry,
 }
 
 static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
-			  int scope)
+			  int scope, void *__unused)
 {
 	u64 ctr;
 
@@ -1703,7 +1707,7 @@ static void cpu_emulate_effective_ctr(const struct arm64_cpu_capabilities *__unu
 }
 
 static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
-			  int scope)
+			  int scope, void *__unused)
 {
 	u64 ctr;
 
@@ -1716,7 +1720,8 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
 }
 
 static bool __maybe_unused
-has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
+has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope,
+		void *__unused)
 {
 	/*
 	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
@@ -1729,14 +1734,14 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
 		return false;
 
-	return has_cpuid_feature(entry, scope);
+	return has_cpuid_feature(entry, scope, NULL);
 }
 
 static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
-				int scope)
+				int scope, void *__unused)
 {
 	/* List of CPUs that are not vulnerable and don't need KPTI */
 	static const struct midr_range kpti_safe_list[] = {
@@ -1763,7 +1768,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
 
 	/* Defer to CPU feature registers */
-	if (has_cpuid_feature(entry, scope))
+	if (has_cpuid_feature(entry, scope, NULL))
 		meltdown_safe = true;
 
 	if (!meltdown_safe)
@@ -1811,7 +1816,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	return !meltdown_safe;
 }
 
-static bool has_nv1(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_nv1(const struct arm64_cpu_capabilities *entry, int scope,
+		    void *__unused)
 {
 	/*
 	 * Although the Apple M2 family appears to support NV1, the
@@ -1829,7 +1835,7 @@ static bool has_nv1(const struct arm64_cpu_capabilities *entry, int scope)
 	};
 
 	return (__system_matches_cap(ARM64_HAS_NESTED_VIRT) &&
-		!(has_cpuid_feature(entry, scope) ||
+		!(has_cpuid_feature(entry, scope, NULL) ||
 		  is_midr_in_range_list(read_cpuid_id(), nv1_ni_list)));
 }
 
@@ -1852,7 +1858,8 @@ static bool has_lpa2_at_stage2(u64 mmfr0)
 	return tgran == ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2;
 }
 
-static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope,
+		     void *__unused)
 {
 	u64 mmfr0;
 
@@ -1860,7 +1867,8 @@ static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_lpa2_at_stage1(mmfr0) && has_lpa2_at_stage2(mmfr0);
 }
 #else
-static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope,
+		     void *__unused)
 {
 	return false;
 }
@@ -2018,7 +2026,7 @@ static bool cpu_has_broken_dbm(void)
 
 static bool cpu_can_use_dbm(const struct arm64_cpu_capabilities *cap)
 {
-	return has_cpuid_feature(cap, SCOPE_LOCAL_CPU) &&
+	return has_cpuid_feature(cap, SCOPE_LOCAL_CPU, NULL) &&
 	       !cpu_has_broken_dbm();
 }
 
@@ -2031,7 +2039,7 @@ static void cpu_enable_hw_dbm(struct arm64_cpu_capabilities const *cap)
 }
 
 static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap,
-		       int __unused)
+		       int __unused, void *__unused2)
 {
 	/*
 	 * DBM is a non-conflicting feature. i.e, the kernel can safely
@@ -2071,7 +2079,7 @@ int get_cpu_with_amu_feat(void)
 
 static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
 {
-	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
+	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU, NULL)) {
 		cpumask_set_cpu(smp_processor_id(), &amu_cpus);
 
 		/* 0 reference values signal broken/disabled counters */
@@ -2081,7 +2089,7 @@ static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
 }
 
 static bool has_amu(const struct arm64_cpu_capabilities *cap,
-		    int __unused)
+		    int __unused, void *__unused2)
 {
 	/*
 	 * The AMU extension is a non-conflicting feature: the kernel can
@@ -2105,7 +2113,8 @@ int get_cpu_with_amu_feat(void)
 }
 #endif
 
-static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
+static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused,
+			void *__unused2)
 {
 	return is_kernel_in_hyp_mode();
 }
@@ -2125,12 +2134,12 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 }
 
 static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
-				    int scope)
+				    int scope, void *__unused)
 {
 	if (kvm_get_mode() != KVM_MODE_NV)
 		return false;
 
-	if (!has_cpuid_feature(cap, scope)) {
+	if (!has_cpuid_feature(cap, scope, NULL)) {
 		pr_warn("unavailable: %s\n", cap->desc);
 		return false;
 	}
@@ -2139,7 +2148,7 @@ static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
 }
 
 static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
-			  int __unused)
+			  int __unused, void *__unused2)
 {
 	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
 }
@@ -2167,7 +2176,8 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
+static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope,
+				    void *__unused)
 {
 	int boot_val, sec_val;
 
@@ -2194,17 +2204,20 @@ static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry,
 }
 
 static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
-				     int scope)
+				     int scope, void *__unused)
 {
-	bool api = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
-	bool apa = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5], scope);
-	bool apa3 = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope);
+	bool api = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF],
+					   scope, NULL);
+	bool apa = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5],
+					   scope, NULL);
+	bool apa3 = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope,
+					    NULL);
 
 	return apa || apa3 || api;
 }
 
 static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
-			     int __unused)
+			     int __unused, void *__unused2)
 {
 	bool gpi = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
 	bool gpa = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5);
@@ -2224,7 +2237,7 @@ static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
 
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
-				   int scope)
+				   int scope, void *__unused)
 {
 	/*
 	 * ARM64_HAS_GIC_CPUIF_SYSREGS has a lower index, and is a boot CPU
@@ -2238,7 +2251,7 @@ static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
 }
 
 static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry,
-				      int scope)
+				      int scope, void *__unused)
 {
 	/*
	 * If we're not using priority masking then we won't be poking PMR_EL1,
@@ -2329,7 +2342,8 @@ static void elf_hwcap_fixup(void)
 }
 
 #ifdef CONFIG_KVM
-static bool is_kvm_protected_mode(const struct arm64_cpu_capabilities *entry, int __unused)
+static bool is_kvm_protected_mode(const struct arm64_cpu_capabilities *entry, int __unused,
+				  void *__unused2)
 {
 	return kvm_get_mode() == KVM_MODE_PROTECTED;
 }
@@ -3061,7 +3075,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 };
 
 #ifdef CONFIG_COMPAT
-static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope)
+static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope,
+			    void *__unused)
 {
 	/*
	 * Check that all of MVFR1_EL1.{SIMDSP, SIMDInt, SIMDLS} are available,
@@ -3156,7 +3171,7 @@ static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 	/* We support emulation of accesses to CPU ID feature registers */
 	cpu_set_named_feature(CPUID);
 	for (; hwcaps->matches; hwcaps++)
-		if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
+		if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps), NULL))
 			cap_set_elf_hwcap(hwcaps);
 }
 
@@ -3170,7 +3185,7 @@ static void update_cpu_capabilities(u16 scope_mask)
 		caps = cpucap_ptrs[i];
 		if (!caps || !(caps->type & scope_mask) ||
 		    cpus_have_cap(caps->capability) ||
-		    !caps->matches(caps, cpucap_default_scope(caps)))
+		    !caps->matches(caps, cpucap_default_scope(caps), NULL))
 			continue;
 
 		if (caps->desc && !caps->cpus)
@@ -3268,7 +3283,7 @@ static void verify_local_cpu_caps(u16 scope_mask)
 		if (!caps || !(caps->type & scope_mask))
 			continue;
 
-		cpu_has_cap = caps->matches(caps, SCOPE_LOCAL_CPU);
+		cpu_has_cap = caps->matches(caps, SCOPE_LOCAL_CPU, NULL);
 		system_has_cap = cpus_have_cap(caps->capability);
 
 		if (system_has_cap) {
@@ -3324,7 +3339,7 @@ __verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
 {
 	for (; caps->matches; caps++)
-		if (cpus_have_elf_hwcap(caps) && !caps->matches(caps, SCOPE_LOCAL_CPU)) {
+		if (cpus_have_elf_hwcap(caps) && !caps->matches(caps, SCOPE_LOCAL_CPU, NULL)) {
 			pr_crit("CPU%d: missing HWCAP: %s\n",
 				smp_processor_id(), caps->desc);
 			cpu_die_early();
@@ -3450,7 +3465,7 @@ bool this_cpu_has_cap(unsigned int n)
 		const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n];
 
 		if (cap)
-			return cap->matches(cap, SCOPE_LOCAL_CPU);
+			return cap->matches(cap, SCOPE_LOCAL_CPU, NULL);
 	}
 
 	return false;
@@ -3468,7 +3483,7 @@ static bool __maybe_unused __system_matches_cap(unsigned int n)
 		const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n];
 
 		if (cap)
-			return cap->matches(cap, SCOPE_SYSTEM);
+			return cap->matches(cap, SCOPE_SYSTEM, NULL);
 	}
 	return false;
 }
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index da53722f95d4..671b412b0634 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -199,7 +199,8 @@ static enum mitigation_state spectre_v2_get_cpu_fw_mitigation_state(void)
 	}
 }
 
-bool has_spectre_v2(const struct arm64_cpu_capabilities *entry, int scope)
+bool has_spectre_v2(const struct arm64_cpu_capabilities *entry, int scope,
+		    void *__unused)
 {
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
@@ -322,7 +323,8 @@ void spectre_v2_enable_mitigation(const struct arm64_cpu_capabilities *__unused)
  * an indirect trampoline for the hyp vectors so that guests can't read
  * VBAR_EL2 to defeat randomisation of the hypervisor VA layout.
  */
-bool has_spectre_v3a(const struct arm64_cpu_capabilities *entry, int scope)
+bool has_spectre_v3a(const struct arm64_cpu_capabilities *entry, int scope,
+		     void *target)
 {
 	static const struct midr_range spectre_v3a_unsafe_list[] = {
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
@@ -508,7 +510,8 @@ static enum mitigation_state spectre_v4_get_cpu_fw_mitigation_state(void)
 	}
 }
 
-bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope)
+bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope,
+		    void *__unused)
 {
 	enum mitigation_state state;
 
@@ -955,7 +958,7 @@ static bool supports_ecbhb(int scope)
 }
 
 bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
-			     int scope)
+			     int scope, void *__unused)
 {
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 
@@ -1005,7 +1008,7 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
 	enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
 	struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
 
-	if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
+	if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU, NULL))
 		return;
 
 	if (arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE) {
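For reference, a minimal sketch of the two ways the reworked callback ends
up being invoked: existing callers pass NULL and keep today's behaviour
(probe the running CPU), while later patches pass a description of a remote
CPU. This assumes the migrn_target_cpu structure introduced in the next
patch, and the MIDR value is a placeholder:

	/* Sketch only: not part of the patch. */
	struct migrn_target_cpu target = {
		.midr = 0x410fd0c0,	/* placeholder MIDR */
	};

	/* Feature/cap checks: probe the CPU we are running on. */
	bool here  = cap->matches(cap, SCOPE_LOCAL_CPU, NULL);

	/* Errata checks: ask about a hypothetical migration target. */
	bool there = cap->matches(cap, SCOPE_LOCAL_CPU, &target);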
From patchwork Fri Oct 11 07:50:49 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832213
From: Shameer Kolothum
Subject: [RFC PATCH 2/6] KVM: arm64: Add support for VMM to set migration target
Date: Fri, 11 Oct 2024 08:50:49 +0100
Message-ID: <20241011075053.80540-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

Add a VM ioctl that allows userspace (the VMM) to set a number of
migration-target CPUs. This can be used to tell KVM the list of targets
this VM may encounter in its lifetime. In subsequent patches, KVM will use
this information to enable the errata associated with all the target CPUs
for this VM.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/include/uapi/asm/kvm.h | 12 ++++++++++
 arch/arm64/kvm/arm.c              | 38 +++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |  2 ++
 4 files changed, 55 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 329619c6fa96..952af38729bf 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -375,6 +375,9 @@ struct kvm_arch {
 	 * the associated pKVM instance in the hypervisor.
 	 */
 	struct kvm_protected_vm pkvm;
+
+	u32 num_migrn_cpus;
+	struct migrn_target_cpu *migrn_cpu;
 };
 
 struct kvm_vcpu_fault_info {
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 964df31da975..321f86d747c8 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -540,6 +540,18 @@ struct reg_mask_range {
 	__u32 reserved[13];
 };
 
+struct migrn_target_cpu {
+	__u32 midr;
+	__u32 revidr;
+	__u32 aidr;
+	__u32 reserved;
+};
+
+struct kvm_arm_migrn_cpus {
+	__u32 ncpus;
+	struct migrn_target_cpu entries[] __counted_by(ncpus);
+};
+
 #endif
 
 #endif /* __ARM_KVM_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a0d01c46e408..a7eab4095683 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -48,6 +48,9 @@
 
 #include "sys_regs.h"
 
+/* For now set to 4 */
+#define MAX_MIGRN_TARGET_CPUS	4
+
 static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
 
 enum kvm_wfx_trap_policy {
@@ -267,6 +270,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_destroy_mpidr_data(kvm);
 
 	kfree(kvm->arch.sysreg_masks);
+	kfree(kvm->arch.migrn_cpu);
 	kvm_destroy_vcpus(kvm);
 
 	kvm_unshare_hyp(kvm, kvm + 1);
@@ -339,6 +343,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_SYSTEM_SUSPEND:
 	case KVM_CAP_IRQFD_RESAMPLE:
 	case KVM_CAP_COUNTER_OFFSET:
+	case KVM_CAP_ARM_MIGRN_TARGET_CPUS:
 		r = 1;
 		break;
 	case KVM_CAP_SET_GUEST_DEBUG2:
@@ -1904,6 +1909,39 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 			return -EFAULT;
 		return kvm_vm_ioctl_get_reg_writable_masks(kvm, &range);
 	}
+	case KVM_ARM_SET_MIGRN_TARGET_CPUS: {
+		struct kvm_arm_migrn_cpus __user *user_cpus = argp;
+		struct kvm_arm_migrn_cpus cpus;
+		struct migrn_target_cpu *entries;
+		u32 size;
+		int ret;
+
+		mutex_lock(&kvm->lock);
+		if (kvm->arch.num_migrn_cpus) {
+			ret = -EINVAL;
+			goto migrn_target_cpus_unlock;
+		}
+		if (copy_from_user(&cpus, user_cpus, sizeof(*user_cpus))) {
+			ret = -EFAULT;
+			goto migrn_target_cpus_unlock;
+		}
+		if (cpus.ncpus > MAX_MIGRN_TARGET_CPUS) {
+			ret = -E2BIG;
+			goto migrn_target_cpus_unlock;
+		}
+		size = sizeof(struct migrn_target_cpu) * cpus.ncpus;
+		entries = memdup_user(user_cpus->entries, size);
+		if (IS_ERR(entries)) {
+			ret = PTR_ERR(entries);
+			goto migrn_target_cpus_unlock;
+		}
+		kvm->arch.num_migrn_cpus = cpus.ncpus;
+		kvm->arch.migrn_cpu = entries;
+		ret = 0;
+migrn_target_cpus_unlock:
+		mutex_unlock(&kvm->lock);
+		return ret;
+	}
 	default:
 		return -EINVAL;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..5ec035a7092a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -933,6 +933,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_MIGRN_TARGET_CPUS 239
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1250,6 +1251,7 @@ struct kvm_vfio_spapr_tce {
 /* Available with KVM_CAP_COUNTER_OFFSET */
 #define KVM_ARM_SET_COUNTER_OFFSET	_IOW(KVMIO, 0xb5, struct kvm_arm_counter_offset)
 #define KVM_ARM_GET_REG_WRITABLE_MASKS	_IOR(KVMIO, 0xb6, struct reg_mask_range)
+#define KVM_ARM_SET_MIGRN_TARGET_CPUS	_IOW(KVMIO, 0xb7, struct kvm_arm_migrn_cpus)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO, 0xe0, struct kvm_create_device)
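For reference, a minimal userspace sketch of how a VMM might drive the new
ioctl, assuming headers installed from a kernel with this series applied;
the MIDR values below are placeholders, not vetted part numbers:

	/* Hypothetical VMM-side usage of KVM_ARM_SET_MIGRN_TARGET_CPUS. */
	#include <linux/kvm.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>

	static int set_migration_targets(int vm_fd)
	{
		struct kvm_arm_migrn_cpus *cpus;
		int ret;

		/* Flexible-array header plus two target entries. */
		cpus = calloc(1, sizeof(*cpus) + 2 * sizeof(cpus->entries[0]));
		if (!cpus)
			return -1;

		cpus->ncpus = 2;
		cpus->entries[0].midr   = 0x410fd0c0;	/* placeholder MIDR */
		cpus->entries[0].revidr = 0;
		cpus->entries[1].midr   = 0x410fd490;	/* placeholder MIDR */
		cpus->entries[1].revidr = 0;

		/* Can only be set once, before the VM starts running. */
		ret = ioctl(vm_fd, KVM_ARM_SET_MIGRN_TARGET_CPUS, cpus);
		free(cpus);
		return ret;
	}

A VMM would presumably probe KVM_CAP_ARM_MIGRN_TARGET_CPUS via
KVM_CHECK_EXTENSION first, mirroring the check_extension case above.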
From patchwork Fri Oct 11 07:50:50 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832214
From: Shameer Kolothum
Subject: [RFC PATCH 3/6] KVM: arm64: Introduce a helper to retrieve errata
Date: Fri, 11 Oct 2024 08:50:50 +0100
Message-ID: <20241011075053.80540-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

Update the errata matches() functions to use the target CPU values when a
target is set. Also introduce a "migration_safe_cap" field in the
capabilities structure. This must be a statically allocated constant for
any migration-safe erratum, because the existing "capability" value is
generated and may be renumbered and reordered, and hence cannot be used to
set bits in the migration errata bitmap.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/cpufeature.h |  8 ++++
 arch/arm64/kernel/cpu_errata.c      | 60 ++++++++++++++++++++++++-----
 arch/arm64/kernel/cpufeature.c      | 14 +++++++
 3 files changed, 72 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c7b1d3ae469e..eada7b9ac4ff 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -335,6 +335,13 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	u16 type;
+	/*
+	 * For Erratum only. This should be a static enum value separate from the
+	 * above generated capability value for this erratum. A non-zero value
+	 * here indicates whether this can be safely enabled for migration purposes
+	 * for a specified target CPU.
+	 */
+	u16 migration_safe_cap;
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope,
 			void *target);
 	/*
@@ -625,6 +632,7 @@ void __init setup_system_features(void);
 void __init setup_user_features(void);
 
 void check_local_cpu_capabilities(void);
+void arm_get_migrn_errata_map(void *migrn, unsigned long *errata_map);
 
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 37464f100a21..e0acb473312d 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -19,14 +20,26 @@ is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope,
 		       void *target)
 {
 	const struct arm64_midr_revidr *fix;
-	u32 midr = read_cpuid_id(), revidr;
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr, revidr;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+	}
 
-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 	if (!is_midr_in_range(midr, &entry->midr_range))
 		return false;
 
 	midr &= MIDR_REVISION_MASK | MIDR_VARIANT_MASK;
-	revidr = read_cpuid(REVIDR_EL1);
+
+	if (t_cpu)
+		revidr = t_cpu->revidr;
+	else
+		revidr = read_cpuid(REVIDR_EL1);
+
 	for (fix = entry->fixed_revs; fix && fix->revidr_mask; fix++)
 		if (midr == fix->midr_rv && (revidr & fix->revidr_mask))
 			return false;
@@ -38,18 +51,31 @@ static bool __maybe_unused
 is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
 			    int scope, void *target)
 {
-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
-	return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+	}
+	return is_midr_in_range_list(midr, entry->midr_range_list);
 }
 
 static bool __maybe_unused
 is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope, void *target)
 {
+	struct migrn_target_cpu *t_cpu = target;
 	u32 model;
 
-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+	if (t_cpu) {
+		model = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		model = read_cpuid_id();
+	}
 
-	model = read_cpuid_id();
 	model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
 		 MIDR_ARCHITECTURE_MASK;
@@ -187,11 +213,25 @@ static bool __maybe_unused
 has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
 				int scope, void *target)
 {
-	u32 midr = read_cpuid_id();
-	bool has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr;
+	bool has_dic;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+		/*
+		 * TBD: Should we pass CTR_EL0 as well? or treat this
+		 * as not safe for migration? For now set this as false.
+		 */
+		has_dic = false;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+		has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
+	}
+
 	const struct midr_range range = MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1);
 
-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 	return is_midr_in_range(midr, &range) && has_dic;
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ac0cff5ab09d..7b39b0a4aadd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3175,6 +3175,20 @@ static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 			cap_set_elf_hwcap(hwcaps);
 }
 
+void arm_get_migrn_errata_map(void *target, unsigned long *errata_map)
+{
+	int i;
+	const struct arm64_cpu_capabilities *caps;
+
+	for (i = 0; i < ARM64_NCAPS; i++) {
+		caps = cpucap_ptrs[i];
+		if (!caps || !caps->migration_safe_cap ||
+		    !caps->matches(caps, cpucap_default_scope(caps), target))
+			continue;
+		__set_bit(caps->migration_safe_cap, errata_map);
+	}
+}
+
 static void update_cpu_capabilities(u16 scope_mask)
 {
 	int i;
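A short kernel-side sketch of how a caller accumulates the map over several
targets; this mirrors the loop added in the next patch, the point being
that errata matching any target are OR-ed into one bitmap (the union, not
the intersection):

	/* Sketch: union of migration-safe errata over all targets. */
	DECLARE_BITMAP(errata_map, ARM64_NCAPS);
	struct migrn_target_cpu *t = kvm->arch.migrn_cpu;
	int i;

	bitmap_zero(errata_map, ARM64_NCAPS);
	for (i = 0; i < kvm->arch.num_migrn_cpus; i++, t++)
		arm_get_migrn_errata_map(t, errata_map);
	/* errata_map now has a bit set for every erratum any target needs. */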
From patchwork Fri Oct 11 07:50:51 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832215
From: Shameer Kolothum
Subject: [RFC PATCH 4/6] KVM: arm64: Add hypercall support for retrieving migration errata bitmap
Date: Fri, 11 Oct 2024 08:50:51 +0100
Message-ID: <20241011075053.80540-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

If the VM requires migration and has set the list of target CPUs, retrieve
the errata bitmap for those targets. The guest can use this hypercall to
get the errata bitmap. The bitmaps are returned in registers R0-R3.

ToDo: For now this reuses one of the reserved pKVM hypercall values, as the
vendor_hyp_bmap size needs to be adjusted before any bit > 63 can be set.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/uapi/asm/kvm.h |  1 +
 arch/arm64/kvm/hypercalls.c       | 20 ++++++++++++++++++++
 include/linux/arm-smccc.h         |  8 +++++++-
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 321f86d747c8..a518cb3a744b 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -379,6 +379,7 @@ enum {
 enum {
 	KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
 	KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
+	KVM_REG_ARM_VENDOR_HYP_BIT_MIGRN_ERRATA = 5,
 #ifdef __KERNEL__
 	KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
 #endif
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 5763d979d8ca..4433cd56cbc2 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -116,6 +116,9 @@ static bool kvm_smccc_test_fw_bmap(struct kvm_vcpu *vcpu, u32 func_id)
 	case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
 		return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_PTP,
 				&smccc_feat->vendor_hyp_bmap);
+	case ARM_SMCCC_VENDOR_HYP_KVM_MIGRN_ERRATA:
+		return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_MIGRN_ERRATA,
+				&smccc_feat->vendor_hyp_bmap);
 	default:
 		return false;
 	}
@@ -364,6 +367,23 @@ int kvm_smccc_call_handler(struct kvm_vcpu *vcpu)
 	case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
 		kvm_ptp_get_time(vcpu, val);
 		break;
+	case ARM_SMCCC_VENDOR_HYP_KVM_MIGRN_ERRATA: {
+		struct migrn_target_cpu *migrn_cpu = vcpu->kvm->arch.migrn_cpu;
+		unsigned long *errata_map;
+
+		if (!migrn_cpu || (ARM64_NCAPS > (BITS_PER_LONG * 4)))
+			goto out;
+
+		errata_map = bitmap_zalloc(ARM64_NCAPS, GFP_KERNEL);
+		if (!errata_map)
+			goto out;
+		for (int i = 0; i < vcpu->kvm->arch.num_migrn_cpus; i++, migrn_cpu++)
+			arm_get_migrn_errata_map(migrn_cpu, errata_map);
+
+		bitmap_to_arr64(val, errata_map, ARM64_NCAPS);
+		bitmap_free(errata_map);
+		break;
+	}
 	case ARM_SMCCC_TRNG_VERSION:
 	case ARM_SMCCC_TRNG_FEATURES:
 	case ARM_SMCCC_TRNG_GET_UUID:
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index f59099a213d0..839b3dfad590 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -119,7 +119,7 @@
 #define ARM_SMCCC_KVM_FUNC_HYP_MEMINFO		2
 #define ARM_SMCCC_KVM_FUNC_MEM_SHARE		3
 #define ARM_SMCCC_KVM_FUNC_MEM_UNSHARE		4
-#define ARM_SMCCC_KVM_FUNC_PKVM_RESV_5		5
+#define ARM_SMCCC_KVM_FUNC_MIGRN_ERRATA		5
 #define ARM_SMCCC_KVM_FUNC_PKVM_RESV_6		6
 #define ARM_SMCCC_KVM_FUNC_MMIO_GUARD		7
 #define ARM_SMCCC_KVM_FUNC_PKVM_RESV_8		8
@@ -225,6 +225,12 @@
 			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
 			   ARM_SMCCC_KVM_FUNC_MMIO_GUARD)
 
+#define ARM_SMCCC_VENDOR_HYP_KVM_MIGRN_ERRATA				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_64,				\
+			   ARM_SMCCC_OWNER_VENDOR_HYP,			\
+			   ARM_SMCCC_KVM_FUNC_MIGRN_ERRATA)
+
 /* ptp_kvm counter type ID */
 #define KVM_PTP_VIRT_COUNTER			0
 #define KVM_PTP_PHYS_COUNTER			1
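The four SMCCC result registers give 4 x 64 = 256 bits, which is why the
handler above bails out when ARM64_NCAPS exceeds BITS_PER_LONG * 4. A
self-contained sketch of the packing convention this relies on (plain
userspace C; NCAPS is a hypothetical stand-in for ARM64_NCAPS, and this
matches what bitmap_to_arr64()/bitmap_from_arr64() do on a little-endian
64-bit kernel, bits 0-63 in R0, 64-127 in R1, and so on):

	#include <stdint.h>
	#include <stdio.h>

	#define NCAPS 256		/* stand-in for ARM64_NCAPS */
	#define NREGS 4			/* bitmap travels in R0..R3 */

	/* Set bit 'cap', LSB-first within each 64-bit register. */
	static void set_cap(uint64_t regs[NREGS], unsigned int cap)
	{
		regs[cap / 64] |= 1ULL << (cap % 64);
	}

	static int test_cap(const uint64_t regs[NREGS], unsigned int cap)
	{
		return (regs[cap / 64] >> (cap % 64)) & 1;
	}

	int main(void)
	{
		uint64_t regs[NREGS] = { 0 };

		set_cap(regs, 7);	/* erratum bit 7  -> R0 */
		set_cap(regs, 70);	/* erratum bit 70 -> R1 */
		printf("R0=%#llx R1=%#llx bit70=%d\n",
		       (unsigned long long)regs[0],
		       (unsigned long long)regs[1], test_cap(regs, 70));
		return 0;
	}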
From patchwork Fri Oct 11 07:50:52 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832219
From: Shameer Kolothum
Subject: [RFC PATCH 5/6] arm64: Use hypercall to check for any migration related errata
Date: Fri, 11 Oct 2024 08:50:52 +0100
Message-ID: <20241011075053.80540-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

We now have a hypercall that lets guest kernels retrieve any
migration-related errata. Use it, and update system_cpucaps based on the
errata bits that are set.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/paravirt.h |  4 ++++
 arch/arm64/kernel/cpufeature.c    | 27 +++++++++++++++++++++++++++
 arch/arm64/kernel/paravirt.c      | 18 ++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..f2cbf2c51acc 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -19,11 +19,15 @@ static inline u64 paravirt_steal_clock(int cpu)
 }
 
 int __init pv_time_init(void);
+int __init pv_update_migrn_errata(unsigned long *errata_map);
 
 #else
 
 #define pv_time_init() do {} while (0)
 
+static inline int pv_update_migrn_errata(unsigned long *errata_map)
+{ return -EINVAL; }
+
 #endif // CONFIG_PARAVIRT
 
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7b39b0a4aadd..829b9eccec09 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -85,6 +85,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -111,6 +112,8 @@ static struct arm64_cpu_capabilities const __ro_after_init *cpucap_ptrs[ARM64_NC
 
 DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS);
 
+DECLARE_BITMAP(migrn_safe_errata_map, ARM64_NCAPS);
+
 bool arm64_use_ng_mappings = false;
 EXPORT_SYMBOL(arm64_use_ng_mappings);
 
@@ -3189,6 +3192,29 @@ void arm_get_migrn_errata_map(void *target, unsigned long *errata_map)
 	}
 }
 
+static void update_migrn_errata(void)
+{
+	int i, j;
+	const struct arm64_cpu_capabilities *caps;
+	u16 scope = ARM64_CPUCAP_LOCAL_CPU_ERRATUM;
+
+	if (pv_update_migrn_errata(migrn_safe_errata_map))
+		return;
+
+	for_each_set_bit(i, migrn_safe_errata_map, ARM64_NCAPS) {
+		for (j = 0; j < ARM64_NCAPS; j++) {
+			caps = cpucap_ptrs[j];
+			if (!caps || !(caps->type & scope) ||
+			    caps->migration_safe_cap != i ||
+			    cpus_have_cap(caps->capability))
+				continue;
+			if (caps->desc && !caps->cpus)
+				pr_info("Updated with migration errata: %s\n", caps->desc);
+			__set_bit(caps->capability, system_cpucaps);
+		}
+	}
+}
+
 static void update_cpu_capabilities(u16 scope_mask)
 {
 	int i;
@@ -3566,6 +3592,7 @@ static void __init setup_system_capabilities(void)
 	 * cpucaps.
 	 */
 	update_cpu_capabilities(SCOPE_SYSTEM);
+	update_migrn_errata();
 	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
 	apply_alternatives_all();
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index aa718d6a9274..04b1930d0e9a 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
@@ -153,6 +154,23 @@ static bool __init has_pv_steal_clock(void)
 	return (res.a0 == SMCCC_RET_SUCCESS);
 }
 
+int __init pv_update_migrn_errata(unsigned long *errata_map)
+{
+	int ret;
+	struct arm_smccc_res res;
+
+	ret = kvm_arm_hyp_service_available(ARM_SMCCC_KVM_FUNC_MIGRN_ERRATA);
+	if (ret <= 0)
+		return -EOPNOTSUPP;
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_MIGRN_ERRATA, &res);
+	if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
+		return -EINVAL;
+
+	bitmap_from_arr64(errata_map, &res, ARM64_NCAPS);
+	return 0;
+}
+
 int __init pv_time_init(void)
 {
 	int ret;
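One consequence of setting the capability bits before
enable_cpu_capabilities() and apply_alternatives_all() run is that the
workarounds are then enabled and patched in roughly as if the boot CPU had
matched them itself. A guest-side check is then just the usual capability
query; a minimal sketch, assuming one of the workaround caps annotated in
the next patch:

	/* Sketch: migration-supplied workarounds look like normal caps. */
	if (cpus_have_cap(ARM64_WORKAROUND_1418040))
		pr_info("erratum 1418040 workaround active (possibly via migration list)\n");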
PATCH 6/6] arm64: errata: Set migration_safe_cap for MIDR based errata Date: Fri, 11 Oct 2024 08:50:53 +0100 Message-ID: <20241011075053.80540-7-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com> References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.203.177.241] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To frapeml500008.china.huawei.com (7.182.85.71) X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241011_005250_871517_77B5DDEB X-CRM114-Status: GOOD ( 14.62 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org These errata can be enabled for a Guest if it has set migration Target CPUs. ToDo: This needs careful vetting. Signed-off-by: Shameer Kolothum --- arch/arm64/include/asm/cpu_migrn_errata.h | 42 +++++++++++++++++++++++ arch/arm64/kernel/cpu_errata.c | 34 +++++++++++++++++- 2 files changed, 75 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/cpu_migrn_errata.h diff --git a/arch/arm64/include/asm/cpu_migrn_errata.h b/arch/arm64/include/asm/cpu_migrn_errata.h new file mode 100644 index 000000000000..83d9f3f16e74 --- /dev/null +++ b/arch/arm64/include/asm/cpu_migrn_errata.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef __ASM_CPU_MIGRN_ERRATA_H +#define __ASM_CPU_MIGRN_ERRATA_H + +/* Add an enum for any migration safe eraratum here*/ +enum { + ARM64_MIGRN_NOT_SUPPORTED = 0, + ARM64_MIGRN_WORKAROUND_CLEAN_CACHE, + ARM64_MIGRN_WORKAROUND_DEVICE_LOAD_ACQUIRE, + ARM64_MIGRN_WORKAROUND_834220, + ARM64_MIGRN_WORKAROUND_843419, + ARM64_MIGRN_WORKAROUND_845719, + ARM64_MIGRN_WORKAROUND_CAVIUM_23154, + ARM64_MIGRN_WORKAROUND_CAVIUM_27456, + ARM64_MIGRN_WORKAROUND_CAVIUM_30115, + ARM64_MIGRN_WORKAROUND_QCOM_FALKOR_E1003, + ARM64_MIGRN_WORKAROUND_REPEAT_TLBI, + ARM64_MIGRN_WORKAROUND_858921, + ARM64_MIGRN_WORKAROUND_1418040, + ARM64_MIGRN_WORKAROUND_SPECULATIVE_AT, + ARM64_MIGRN_WORKAROUND_1463225, + ARM64_MIGRN_WORKAROUND_CAVIUM_TX2_219_TVM, + ARM64_MIGRN_WORKAROUND_CAVIUM_TX2_219_PRFM, + ARM64_MIGRN_WORKAROUND_1508412, + ARM64_MIGRN_WORKAROUND_NVIDIA_CARMEL_CNP, + ARM64_MIGRN_WORKAROUND_TRBE_OVERWRITE_FILL_MODE, + ARM64_MIGRN_WORKAROUND_TSB_FLUSH_FAILURE, + ARM64_MIGRN_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE, + ARM64_MIGRN_WORKAROUND_2645198, + ARM64_MIGRN_WORKAROUND_2077057, + ARM64_MIGRN_WORKAROUND_2064142, + ARM64_MIGRN_WORKAROUND_2457168, + ARM64_MIGRN_WORKAROUND_2038923, + ARM64_MIGRN_WORKAROUND_1902691, + ARM64_MIGRN_CPUCAP_LOCAL_CPU_ERRATUM, + ARM64_MIGRN_WORKAROUND_2658417, + ARM64_MIGRN_WORKAROUND_SPECULATIVE_SSBS, + ARM64_MIGRN_WORKAROUND_SPECULATIVE_UNPRIV_LOAD, + ARM64_MIGRN_WORKAROUND_AMPERE_AC03_CPU_38, +}; +#endif diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index e0acb473312d..b36377c156b8 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -512,6 +513,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = { .capability = ARM64_WORKAROUND_CLEAN_CACHE, ERRATA_MIDR_RANGE_LIST(workaround_clean_cache), .cpu_enable = 
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index e0acb473312d..b36377c156b8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -512,6 +513,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		ERRATA_MIDR_RANGE_LIST(workaround_clean_cache),
 		.cpu_enable = cpu_enable_cache_maint_trap,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CLEAN_CACHE,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_832075
@@ -522,6 +524,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A57,
 				  0, 0,
 				  1, 2),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_DEVICE_LOAD_ACQUIRE,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_834220
@@ -532,6 +535,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A57,
 				  0, 0,
 				  1, 2),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_834220,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_843419
@@ -541,6 +545,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = cpucap_multi_entry_cap_matches,
 		.match_list = erratum_843419_list,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_843419,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_845719
@@ -548,6 +553,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM erratum 845719",
 		.capability = ARM64_WORKAROUND_845719,
 		ERRATA_MIDR_RANGE_LIST(erratum_845719_list),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_845719,
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_23154
@@ -556,6 +562,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_WORKAROUND_CAVIUM_23154,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		ERRATA_MIDR_RANGE_LIST(cavium_erratum_23154_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CAVIUM_23154,
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_27456
@@ -563,6 +570,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "Cavium erratum 27456",
 		.capability = ARM64_WORKAROUND_CAVIUM_27456,
 		ERRATA_MIDR_RANGE_LIST(cavium_erratum_27456_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CAVIUM_27456,
 	},
 #endif
 #ifdef CONFIG_CAVIUM_ERRATUM_30115
@@ -570,6 +578,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "Cavium erratum 30115",
 		.capability = ARM64_WORKAROUND_CAVIUM_30115,
 		ERRATA_MIDR_RANGE_LIST(cavium_erratum_30115_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CAVIUM_30115,
 	},
 #endif
 	{
@@ -586,6 +595,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = cpucap_multi_entry_cap_matches,
 		.match_list = qcom_erratum_1003_list,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_QCOM_FALKOR_E1003,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
@@ -595,6 +605,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = cpucap_multi_entry_cap_matches,
 		.match_list = arm64_repeat_tlbi_list,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_REPEAT_TLBI,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_858921
@@ -603,6 +614,7 @@
 	{
 		.desc = "ARM erratum 858921",
 		.capability = ARM64_WORKAROUND_858921,
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_858921,
 	},
 #endif
 	{
@@ -647,6 +659,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		 * in at any point in time. Wonderful.
 		 */
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_1418040,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
@@ -654,6 +667,7 @@
 		.desc = "ARM errata 1165522, 1319367, or 1530923",
 		.capability = ARM64_WORKAROUND_SPECULATIVE_AT,
 		ERRATA_MIDR_RANGE_LIST(erratum_speculative_at_list),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_SPECULATIVE_AT,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_1463225
@@ -663,6 +677,7 @@
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_cortex_a76_erratum_1463225,
 		.midr_range_list = erratum_1463225,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_1463225,
 	},
 #endif
 #ifdef CONFIG_CAVIUM_TX2_ERRATUM_219
@@ -671,11 +686,13 @@
 		.capability = ARM64_WORKAROUND_CAVIUM_TX2_219_TVM,
 		ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
 		.matches = needs_tx2_tvm_workaround,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CAVIUM_TX2_219_TVM,
 	},
 	{
 		.desc = "Cavium ThunderX2 erratum 219 (PRFM removal)",
 		.capability = ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM,
 		ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_CAVIUM_TX2_219_PRFM,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_1542419
@@ -696,6 +713,7 @@
 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A77,
 				  0, 0,
 				  1, 0),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_1508412,
 	},
 #endif
 #ifdef CONFIG_NVIDIA_CARMEL_CNP_ERRATUM
@@ -704,6 +722,7 @@
 		.desc = "NVIDIA Carmel CNP erratum",
 		.capability = ARM64_WORKAROUND_NVIDIA_CARMEL_CNP,
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_NVIDIA_CARMEL_CNP,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
@@ -717,6 +736,7 @@
 		.capability = ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE,
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		CAP_MIDR_RANGE_LIST(trbe_overwrite_fill_mode_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_TRBE_OVERWRITE_FILL_MODE,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE
@@ -724,6 +744,7 @@
 		.desc = "ARM erratum 2067961 or 2054223",
 		.capability = ARM64_WORKAROUND_TSB_FLUSH_FAILURE,
 		ERRATA_MIDR_RANGE_LIST(tsb_flush_fail_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_TSB_FLUSH_FAILURE,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
@@ -732,12 +753,14 @@
 		.capability = ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE,
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 		CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_2645198
 	{
 		.desc = "ARM erratum 2645198",
 		.capability = ARM64_WORKAROUND_2645198,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2645198,
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A715)
 	},
 #endif
@@ -746,12 +769,14 @@
 	{
 		.desc = "ARM erratum 2077057",
 		.capability = ARM64_WORKAROUND_2077057,
 		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2077057,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_2064142
 	{
 		.desc = "ARM erratum 2064142",
 		.capability = ARM64_WORKAROUND_2064142,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2064142,

 		/* Cortex-A510 r0p0 - r0p2 */
 		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
@@ -762,6 +787,7 @@
 		.desc = "ARM erratum 2457168",
 		.capability = ARM64_WORKAROUND_2457168,
 		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2457168,

 		/* Cortex-A510 r0p0-r1p1 */
 		CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
@@ -771,7 +797,7 @@
 	{
 		.desc = "ARM erratum 2038923",
 		.capability = ARM64_WORKAROUND_2038923,
-
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2038923,
 		/* Cortex-A510 r0p0 - r0p2 */
 		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
 	},
@@ -780,6 +806,7 @@
 	{
 		.desc = "ARM erratum 1902691",
 		.capability = ARM64_WORKAROUND_1902691,
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_1902691,

 		/* Cortex-A510 r0p0 - r0p1 */
 		ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 1)
@@ -791,6 +818,7 @@
 		.capability = ARM64_WORKAROUND_1742098,
 		CAP_MIDR_RANGE_LIST(broken_aarch32_aes),
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.migration_safe_cap = ARM64_MIGRN_CPUCAP_LOCAL_CPU_ERRATUM,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_2658417
@@ -800,6 +828,7 @@
 		/* Cortex-A510 r0p0 - r1p1 */
 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1),
 		MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_2658417,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_3194386
@@ -807,6 +836,7 @@
 		.desc = "SSBS not fully self-synchronizing",
 		.capability = ARM64_WORKAROUND_SPECULATIVE_SSBS,
 		ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_SPECULATIVE_SSBS,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
@@ -815,6 +845,7 @@
 		.capability = ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD,
 		/* Cortex-A520 r0p0 - r0p1 */
 		ERRATA_MIDR_RANGE_LIST(erratum_spec_unpriv_load_list),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_SPECULATIVE_UNPRIV_LOAD,
 	},
 #endif
 #ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
@@ -822,6 +853,7 @@
 		.desc = "AmpereOne erratum AC03_CPU_38",
 		.capability = ARM64_WORKAROUND_AMPERE_AC03_CPU_38,
 		ERRATA_MIDR_RANGE_LIST(erratum_ac03_cpu_38_list),
+		.migration_safe_cap = ARM64_MIGRN_WORKAROUND_AMPERE_AC03_CPU_38,
 	},
 #endif
 	{