From patchwork Fri Oct 11 07:50:50 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13832214
From: Shameer Kolothum
To: , ,
CC: , , , , , , , , , , ,
Subject: [RFC PATCH 3/6] KVM: arm64: Introduce a helper to retrieve errata
Date: Fri, 11 Oct 2024 08:50:50 +0100
Message-ID: <20241011075053.80540-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>
References: <20241011075053.80540-1-shameerali.kolothum.thodi@huawei.com>

Update the errata matches() functions to use the target CPU values when a
target is set.

Also introduce a "migration_safe_cap" field in the capabilities structure.
This should be a statically allocated constant for any migration-safe
erratum, because the existing "capability" value is generated and may be
renumbered or reordered, and hence cannot be used to set bits in the
migration errata bitmap.
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/cpufeature.h |  8 ++++
 arch/arm64/kernel/cpu_errata.c      | 60 ++++++++++++++++++++++++-----
 arch/arm64/kernel/cpufeature.c      | 14 +++++++
 3 files changed, 72 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c7b1d3ae469e..eada7b9ac4ff 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -335,6 +335,13 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	u16 type;
+	/*
+	 * For errata only. This should be a static enum value, separate from
+	 * the generated capability value above. A non-zero value indicates
+	 * that this erratum can be safely enabled for migration purposes for
+	 * a specified target CPU.
+	 */
+	u16 migration_safe_cap;
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope,
 			void *target);
 	/*
@@ -625,6 +632,7 @@ void __init setup_system_features(void);
 void __init setup_user_features(void);
 void check_local_cpu_capabilities(void);

+void arm_get_migrn_errata_map(void *migrn, unsigned long *errata_map);
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 37464f100a21..e0acb473312d 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -19,14 +20,26 @@
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope,
		       void *target)
 {
 	const struct arm64_midr_revidr *fix;
-	u32 midr = read_cpuid_id(), revidr;
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr, revidr;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+	}

-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 	if (!is_midr_in_range(midr, &entry->midr_range))
 		return false;

 	midr &= MIDR_REVISION_MASK | MIDR_VARIANT_MASK;
-	revidr = read_cpuid(REVIDR_EL1);
+
+	if (t_cpu)
+		revidr = t_cpu->revidr;
+	else
+		revidr = read_cpuid(REVIDR_EL1);
+
 	for (fix = entry->fixed_revs; fix && fix->revidr_mask; fix++)
 		if (midr == fix->midr_rv && (revidr & fix->revidr_mask))
 			return false;
@@ -38,18 +51,31 @@ static bool __maybe_unused
 is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
			    int scope, void *target)
 {
-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
-	return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+	}
+	return is_midr_in_range_list(midr, entry->midr_range_list);
 }

 static bool __maybe_unused
 is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope,
	     void *target)
 {
+	struct migrn_target_cpu *t_cpu = target;
 	u32 model;

-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+	if (t_cpu) {
+		model = t_cpu->midr;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		model = read_cpuid_id();
+	}

-	model = read_cpuid_id();
 	model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
		 MIDR_ARCHITECTURE_MASK;
@@ -187,11 +213,25 @@ static bool __maybe_unused
 has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
				int scope, void *target)
 {
-	u32 midr = read_cpuid_id();
-	bool has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
+	struct migrn_target_cpu *t_cpu = target;
+	u32 midr;
+	bool has_dic;
+
+	if (t_cpu) {
+		midr = t_cpu->midr;
+		/*
+		 * TBD: Should we pass CTR_EL0 as well, or treat this as not
+		 * safe for migration? For now, set this to false.
+		 */
+		has_dic = false;
+	} else {
+		WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+		midr = read_cpuid_id();
+		has_dic = read_cpuid_cachetype() & BIT(CTR_EL0_DIC_SHIFT);
+	}
+
 	const struct midr_range range = MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1);

-	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
 	return is_midr_in_range(midr, &range) && has_dic;
 }

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ac0cff5ab09d..7b39b0a4aadd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3175,6 +3175,20 @@ static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 	cap_set_elf_hwcap(hwcaps);
 }

+void arm_get_migrn_errata_map(void *target, unsigned long *errata_map)
+{
+	int i;
+	const struct arm64_cpu_capabilities *caps;
+
+	for (i = 0; i < ARM64_NCAPS; i++) {
+		caps = cpucap_ptrs[i];
+		if (!caps || !caps->migration_safe_cap ||
+		    !caps->matches(caps, cpucap_default_scope(caps), target))
+			continue;
+		__set_bit(caps->migration_safe_cap, errata_map);
+	}
+}
+
 static void update_cpu_capabilities(u16 scope_mask)
 {
 	int i;