From patchwork Wed Feb 1 02:50:43 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13123674 Date: Wed, 1 Feb 2023 02:50:43 +0000 In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com> References: <20230201025048.205820-1-jingzhangos@google.com> Message-ID: <20230201025048.205820-2-jingzhangos@google.com> Subject: [PATCH v1 1/6] KVM: arm64: Move CPU ID feature registers emulation into a separate file From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose ,
Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang Create a new file, id_regs.c, to hold the CPU ID feature register emulation code moved out of sys_regs.c, and tweak the sys_regs code accordingly. No functional change intended. Signed-off-by: Jing Zhang --- arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/id_regs.c | 482 ++++++++++++++++++++++++++++++++++++++ arch/arm64/kvm/sys_regs.c | 460 ++---------------------------------- arch/arm64/kvm/sys_regs.h | 28 +++ 4 files changed, 537 insertions(+), 435 deletions(-) create mode 100644 arch/arm64/kvm/id_regs.c diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 5e33c2d4645a..374c77eda9f9 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ inject_fault.o va_layout.o handle_exit.o \ guest.o debug.o reset.o sys_regs.o stacktrace.o \ - vgic-sys-reg-v3.o fpsimd.o pkvm.o \ + vgic-sys-reg-v3.o fpsimd.o pkvm.o id_regs.o \ arch_timer.o trng.o vmid.o \ vgic/vgic.o vgic/vgic-init.o \ vgic/vgic-irqfd.o vgic/vgic-v2.o \ diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c new file mode 100644 index 000000000000..386b3fbdee9f --- /dev/null +++ b/arch/arm64/kvm/id_regs.c @@ -0,0 +1,482 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 - Google LLC + * Author: Jing Zhang + * + * Moved from arch/arm64/kvm/sys_regs.c + * Copyright (C) 2012,2013 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include +#include + +#include "sys_regs.h" + +static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) +{ + if (kvm_vcpu_has_pmu(vcpu)) + return vcpu->kvm->arch.dfr0_pmuver.imp; + + return vcpu->kvm->arch.dfr0_pmuver.unimp; +} + +static u8 perfmon_to_pmuver(u8 perfmon) +{ + switch (perfmon) { + case ID_DFR0_EL1_PerfMon_PMUv3: + return ID_AA64DFR0_EL1_PMUVer_IMP; + case ID_DFR0_EL1_PerfMon_IMPDEF: + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; + default: + /* Anything ARMv8.1+ and NI have the same value. For now. */ + return perfmon; + } +} + +static u8 pmuver_to_perfmon(u8 pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return ID_DFR0_EL1_PerfMon_PMUv3; + case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: + return ID_DFR0_EL1_PerfMon_IMPDEF; + default: + /* Anything ARMv8.1+ and NI have the same value. For now.
*/ + return pmuver; + } +} + +/* Read a sanitised cpufeature ID register by sys_reg_desc */ +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +{ + u32 id = reg_to_encoding(r); + u64 val; + + if (sysreg_visible_as_raz(vcpu, r)) + return 0; + + val = read_sanitised_ftr_reg(id); + + switch (id) { + case SYS_ID_AA64PFR0_EL1: + if (!vcpu_has_sve(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), + (u64)vcpu->kvm->arch.pfr0_csv2); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), + (u64)vcpu->kvm->arch.pfr0_csv3); + if (kvm_vgic_global_state.type == VGIC_V3) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); + } + break; + case SYS_ID_AA64PFR1_EL1: + if (!kvm_has_mte(vcpu->kvm)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + break; + case SYS_ID_AA64ISAR1_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); + break; + case SYS_ID_AA64ISAR2_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); + if (!cpus_have_final_cap(ARM64_HAS_WFXT)) + val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); + break; + case SYS_ID_AA64DFR0_EL1: + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* Set PMUver to the required version */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + vcpu_pmuver(vcpu)); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + break; + case SYS_ID_DFR0_EL1: + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), + pmuver_to_perfmon(vcpu_pmuver(vcpu))); + break; + case SYS_ID_AA64MMFR2_EL1: + val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; + break; + case SYS_ID_MMFR4_EL1: + val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); + break; + } + + return val; +} + +/* cpufeature ID register access trap handlers */ + +static bool access_id_reg(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = read_id_reg(vcpu, r); + return true; +} + +/* + * cpufeature ID register user accessors + * + * For now, these registers are immutable for userspace, so no values + * are stored, and for set_id_reg() we don't allow the effective value + * to be changed. + */ +static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 *val) +{ + *val = read_id_reg(vcpu, rd); + return 0; +} + +static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + /* This is what we mean by invariant: you can't change it. 
*/ + if (val != read_id_reg(vcpu, rd)) + return -EINVAL; + + return 0; +} + +static unsigned int id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + u32 id = reg_to_encoding(r); + + switch (id) { + case SYS_ID_AA64ZFR0_EL1: + if (!vcpu_has_sve(vcpu)) + return REG_RAZ; + break; + } + + return 0; +} + +static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + /* + * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any + * EL. Promote to RAZ/WI in order to guarantee consistency between + * systems. + */ + if (!kvm_supports_32bit_el0()) + return REG_RAZ | REG_USER_WI; + + return id_visibility(vcpu, r); +} + +static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 csv2, csv3; + + /* + * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as + * it doesn't promise more than what is actually provided (the + * guest could otherwise be covered in ectoplasmic residue). + */ + csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); + if (csv2 > 1 || + (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* Same thing for CSV3 */ + csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); + if (csv3 > 1 || + (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* We can only differ with CSV[23], and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); + if (val) + return -EINVAL; + + vcpu->kvm->arch.pfr0_csv2 = csv2; + vcpu->kvm->arch.pfr0_csv3 = csv3; + + return 0; +} + +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 pmuver, host_pmuver; + bool valid_pmu; + + host_pmuver = kvm_arm_pmu_get_pmuver_limit(); + + /* + * Allow AA64DFR0_EL1.PMUver to be set from userspace as long + * as it doesn't promise more than what the HW gives us. We + * allow an IMPDEF PMU though, only if no PMU is supported + * (KVM backward compatibility handling). + */ + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); + if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) + return -EINVAL; + + valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PMUver, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; + else + vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + + return 0; +} + +static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 perfmon, host_perfmon; + bool valid_pmu; + + host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); + + /* + * Allow DFR0_EL1.PerfMon to be set from userspace as long as + * it doesn't promise more than what the HW gives us on the + * AArch64 side (as everything is emulated with that), and + * that this is a PMUv3. 
+ */ + perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); + if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || + (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) + return -EINVAL; + + valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PerfMon, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); + else + vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + + return 0; +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define AA32_ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ +} + +/* + * sys_reg_desc initialiser for architecturally unallocated cpufeature ID + * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 + * (1 <= crm < 8, 0 <= Op2 < 8). + */ +#define ID_UNALLOCATED(crm, op2) { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ +} + +/* + * sys_reg_desc initialiser for known ID registers that we hide from guests. + * For now, these are exposed just like unallocated ID regs: they appear + * RAZ for the guest. + */ +#define ID_HIDDEN(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ +} + +static const struct sys_reg_desc id_reg_descs[] = { + /* + * ID regs: all ID_SANITISED() entries here must have corresponding + * entries in arm64_ftr_regs[]. 
+ */ + + /* AArch64 mappings of the AArch32 ID registers */ + /* CRm=1 */ + AA32_ID_SANITISED(ID_PFR0_EL1), + AA32_ID_SANITISED(ID_PFR1_EL1), + { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + ID_HIDDEN(ID_AFR0_EL1), + AA32_ID_SANITISED(ID_MMFR0_EL1), + AA32_ID_SANITISED(ID_MMFR1_EL1), + AA32_ID_SANITISED(ID_MMFR2_EL1), + AA32_ID_SANITISED(ID_MMFR3_EL1), + + /* CRm=2 */ + AA32_ID_SANITISED(ID_ISAR0_EL1), + AA32_ID_SANITISED(ID_ISAR1_EL1), + AA32_ID_SANITISED(ID_ISAR2_EL1), + AA32_ID_SANITISED(ID_ISAR3_EL1), + AA32_ID_SANITISED(ID_ISAR4_EL1), + AA32_ID_SANITISED(ID_ISAR5_EL1), + AA32_ID_SANITISED(ID_MMFR4_EL1), + AA32_ID_SANITISED(ID_ISAR6_EL1), + + /* CRm=3 */ + AA32_ID_SANITISED(MVFR0_EL1), + AA32_ID_SANITISED(MVFR1_EL1), + AA32_ID_SANITISED(MVFR2_EL1), + ID_UNALLOCATED(3, 3), + AA32_ID_SANITISED(ID_PFR2_EL1), + ID_HIDDEN(ID_DFR1_EL1), + AA32_ID_SANITISED(ID_MMFR5_EL1), + ID_UNALLOCATED(3, 7), + + /* AArch64 ID registers */ + /* CRm=4 */ + { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + ID_SANITISED(ID_AA64PFR1_EL1), + ID_UNALLOCATED(4, 2), + ID_UNALLOCATED(4, 3), + ID_SANITISED(ID_AA64ZFR0_EL1), + ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_UNALLOCATED(4, 6), + ID_UNALLOCATED(4, 7), + + /* CRm=5 */ + { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + ID_SANITISED(ID_AA64DFR1_EL1), + ID_UNALLOCATED(5, 2), + ID_UNALLOCATED(5, 3), + ID_HIDDEN(ID_AA64AFR0_EL1), + ID_HIDDEN(ID_AA64AFR1_EL1), + ID_UNALLOCATED(5, 6), + ID_UNALLOCATED(5, 7), + + /* CRm=6 */ + ID_SANITISED(ID_AA64ISAR0_EL1), + ID_SANITISED(ID_AA64ISAR1_EL1), + ID_SANITISED(ID_AA64ISAR2_EL1), + ID_UNALLOCATED(6, 3), + ID_UNALLOCATED(6, 4), + ID_UNALLOCATED(6, 5), + ID_UNALLOCATED(6, 6), + ID_UNALLOCATED(6, 7), + + /* CRm=7 */ + ID_SANITISED(ID_AA64MMFR0_EL1), + ID_SANITISED(ID_AA64MMFR1_EL1), + ID_SANITISED(ID_AA64MMFR2_EL1), + ID_UNALLOCATED(7, 3), + ID_UNALLOCATED(7, 4), + ID_UNALLOCATED(7, 5), + ID_UNALLOCATED(7, 6), + ID_UNALLOCATED(7, 7), +}; + +const struct sys_reg_desc *kvm_arm_find_id_reg(const struct sys_reg_params *params) +{ + return find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); +} + +void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu) +{ + unsigned long i; + + for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) + if (id_reg_descs[i].reset) + id_reg_descs[i].reset(vcpu, &id_reg_descs[i]); +} + +int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + return kvm_sys_reg_get_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); +} + +int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + return kvm_sys_reg_set_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); +} + +bool kvm_arm_check_idreg_table(void) +{ + return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false); +} + +/* Assumed ordered tables, see kvm_sys_reg_table_init. 
*/ +int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) +{ + const struct sys_reg_desc *i2, *end2; + unsigned int total = 0; + int err; + + i2 = id_reg_descs; + end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs); + + while (i2 != end2) { + err = walk_one_sys_reg(vcpu, i2++, &uind, &total); + if (err) + return err; + } + return total; +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 8d2747ad50ba..56b0c596ddf6 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -52,16 +52,6 @@ static bool read_from_write_only(struct kvm_vcpu *vcpu, return false; } -static bool write_to_read_only(struct kvm_vcpu *vcpu, - struct sys_reg_params *params, - const struct sys_reg_desc *r) -{ - WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n"); - print_sys_reg_instr(params); - kvm_inject_undefined(vcpu); - return false; -} - u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg) { u64 val = 0x8badf00d8badf00d; @@ -1135,160 +1125,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, return true; } -static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) -{ - if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; - - return vcpu->kvm->arch.dfr0_pmuver.unimp; -} - -static u8 perfmon_to_pmuver(u8 perfmon) -{ - switch (perfmon) { - case ID_DFR0_EL1_PerfMon_PMUv3: - return ID_AA64DFR0_EL1_PMUVer_IMP; - case ID_DFR0_EL1_PerfMon_IMPDEF: - return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. */ - return perfmon; - } -} - -static u8 pmuver_to_perfmon(u8 pmuver) -{ - switch (pmuver) { - case ID_AA64DFR0_EL1_PMUVer_IMP: - return ID_DFR0_EL1_PerfMon_PMUv3; - case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: - return ID_DFR0_EL1_PerfMon_IMPDEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. 
*/ - return pmuver; - } -} - -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) -{ - u32 id = reg_to_encoding(r); - u64 val; - - if (sysreg_visible_as_raz(vcpu, r)) - return 0; - - val = read_sanitised_ftr_reg(id); - - switch (id) { - case SYS_ID_AA64PFR0_EL1: - if (!vcpu_has_sve(vcpu)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); - if (kvm_vgic_global_state.type == VGIC_V3) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); - } - break; - case SYS_ID_AA64PFR1_EL1: - if (!kvm_has_mte(vcpu->kvm)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); - break; - case SYS_ID_AA64ISAR1_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); - break; - case SYS_ID_AA64ISAR2_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); - if (!cpus_have_final_cap(ARM64_HAS_WFXT)) - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); - break; - case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); - /* Set PMUver to the required version */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); - break; - case SYS_ID_DFR0_EL1: - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), - pmuver_to_perfmon(vcpu_pmuver(vcpu))); - break; - case SYS_ID_AA64MMFR2_EL1: - val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; - break; - case SYS_ID_MMFR4_EL1: - val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); - break; - } - - return val; -} - -static unsigned int id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - u32 id = reg_to_encoding(r); - - switch (id) { - case SYS_ID_AA64ZFR0_EL1: - if (!vcpu_has_sve(vcpu)) - return REG_RAZ; - break; - } - - return 0; -} - -static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - /* - * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any - * EL. Promote to RAZ/WI in order to guarantee consistency between - * systems. 
- */ - if (!kvm_supports_32bit_el0()) - return REG_RAZ | REG_USER_WI; - - return id_visibility(vcpu, r); -} - -static unsigned int raz_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - return REG_RAZ; -} - -/* cpufeature ID register access trap handlers */ - -static bool access_id_reg(struct kvm_vcpu *vcpu, - struct sys_reg_params *p, - const struct sys_reg_desc *r) -{ - if (p->is_write) - return write_to_read_only(vcpu, p, r); - - p->regval = read_id_reg(vcpu, r); - return true; -} - /* Visibility overrides for SVE-specific control registers */ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) @@ -1299,144 +1135,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 csv2, csv3; - - /* - * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as - * it doesn't promise more than what is actually provided (the - * guest could otherwise be covered in ectoplasmic residue). - */ - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* Same thing for CSV3 */ - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; - - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; - - return 0; -} - -static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 pmuver, host_pmuver; - bool valid_pmu; - - host_pmuver = kvm_arm_pmu_get_pmuver_limit(); - - /* - * Allow AA64DFR0_EL1.PMUver to be set from userspace as long - * as it doesn't promise more than what the HW gives us. We - * allow an IMPDEF PMU though, only if no PMU is supported - * (KVM backward compatibility handling). - */ - pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); - if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) - return -EINVAL; - - valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; - - return 0; -} - -static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 perfmon, host_perfmon; - bool valid_pmu; - - host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); - - /* - * Allow DFR0_EL1.PerfMon to be set from userspace as long as - * it doesn't promise more than what the HW gives us on the - * AArch64 side (as everything is emulated with that), and - * that this is a PMUv3. 
- */ - perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); - if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || - (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) - return -EINVAL; - - valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); - - return 0; -} - -/* - * cpufeature ID register user accessors - * - * For now, these registers are immutable for userspace, so no values - * are stored, and for set_id_reg() we don't allow the effective value - * to be changed. - */ -static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 *val) -{ - *val = read_id_reg(vcpu, rd); - return 0; -} - -static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 val) -{ - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; - - return 0; -} - static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 *val) { @@ -1583,50 +1281,6 @@ static unsigned int mte_visibility(const struct kvm_vcpu *vcpu, .visibility = mte_visibility, \ } -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ -} - -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ -} - -/* - * sys_reg_desc initialiser for architecturally unallocated cpufeature ID - * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 - * (1 <= crm < 8, 0 <= Op2 < 8). - */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ -} - -/* - * sys_reg_desc initialiser for known ID registers that we hide from guests. - * For now, these are exposed just like unallocated ID regs: they appear - * RAZ for the guest. - */ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ -} - /* * Architected system registers. * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 @@ -1681,87 +1335,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 }, - /* - * ID regs: all ID_SANITISED() entries here must have corresponding - * entries in arm64_ftr_regs[]. 
- */ - - /* AArch64 mappings of the AArch32 ID registers */ - /* CRm=1 */ - AA32_ID_SANITISED(ID_PFR0_EL1), - AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, - ID_HIDDEN(ID_AFR0_EL1), - AA32_ID_SANITISED(ID_MMFR0_EL1), - AA32_ID_SANITISED(ID_MMFR1_EL1), - AA32_ID_SANITISED(ID_MMFR2_EL1), - AA32_ID_SANITISED(ID_MMFR3_EL1), - - /* CRm=2 */ - AA32_ID_SANITISED(ID_ISAR0_EL1), - AA32_ID_SANITISED(ID_ISAR1_EL1), - AA32_ID_SANITISED(ID_ISAR2_EL1), - AA32_ID_SANITISED(ID_ISAR3_EL1), - AA32_ID_SANITISED(ID_ISAR4_EL1), - AA32_ID_SANITISED(ID_ISAR5_EL1), - AA32_ID_SANITISED(ID_MMFR4_EL1), - AA32_ID_SANITISED(ID_ISAR6_EL1), - - /* CRm=3 */ - AA32_ID_SANITISED(MVFR0_EL1), - AA32_ID_SANITISED(MVFR1_EL1), - AA32_ID_SANITISED(MVFR2_EL1), - ID_UNALLOCATED(3,3), - AA32_ID_SANITISED(ID_PFR2_EL1), - ID_HIDDEN(ID_DFR1_EL1), - AA32_ID_SANITISED(ID_MMFR5_EL1), - ID_UNALLOCATED(3,7), - - /* AArch64 ID registers */ - /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, - ID_SANITISED(ID_AA64PFR1_EL1), - ID_UNALLOCATED(4,2), - ID_UNALLOCATED(4,3), - ID_SANITISED(ID_AA64ZFR0_EL1), - ID_HIDDEN(ID_AA64SMFR0_EL1), - ID_UNALLOCATED(4,6), - ID_UNALLOCATED(4,7), - - /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, - ID_SANITISED(ID_AA64DFR1_EL1), - ID_UNALLOCATED(5,2), - ID_UNALLOCATED(5,3), - ID_HIDDEN(ID_AA64AFR0_EL1), - ID_HIDDEN(ID_AA64AFR1_EL1), - ID_UNALLOCATED(5,6), - ID_UNALLOCATED(5,7), - - /* CRm=6 */ - ID_SANITISED(ID_AA64ISAR0_EL1), - ID_SANITISED(ID_AA64ISAR1_EL1), - ID_SANITISED(ID_AA64ISAR2_EL1), - ID_UNALLOCATED(6,3), - ID_UNALLOCATED(6,4), - ID_UNALLOCATED(6,5), - ID_UNALLOCATED(6,6), - ID_UNALLOCATED(6,7), - - /* CRm=7 */ - ID_SANITISED(ID_AA64MMFR0_EL1), - ID_SANITISED(ID_AA64MMFR1_EL1), - ID_SANITISED(ID_AA64MMFR2_EL1), - ID_UNALLOCATED(7,3), - ID_UNALLOCATED(7,4), - ID_UNALLOCATED(7,5), - ID_UNALLOCATED(7,6), - ID_UNALLOCATED(7,7), - { SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 }, { SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 }, { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 }, @@ -2375,8 +1948,8 @@ static const struct sys_reg_desc cp15_64_regs[] = { { SYS_DESC(SYS_AARCH32_CNTP_CVAL), access_arch_timer }, }; -static bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n, - bool is_32) +bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n, + bool is_32) { unsigned int i; @@ -2727,7 +2300,9 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu, { const struct sys_reg_desc *r; - r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); + r = kvm_arm_find_id_reg(params); + if (!r) + r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); if (likely(r)) { perform_access(vcpu, params, r); @@ -2756,6 +2331,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu) { unsigned long i; + kvm_arm_reset_id_regs(vcpu); + for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) if (sys_reg_descs[i].reset) sys_reg_descs[i].reset(vcpu, &sys_reg_descs[i]); @@ -3004,6 +2581,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + err = kvm_arm_get_id_reg(vcpu, reg); + if (err != -ENOENT) + return err; + return kvm_sys_reg_get_user(vcpu, reg, sys_reg_descs, 
ARRAY_SIZE(sys_reg_descs)); } @@ -3048,6 +2629,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + err = kvm_arm_set_id_reg(vcpu, reg); + if (err != -ENOENT) + return err; + return kvm_sys_reg_set_user(vcpu, reg, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } @@ -3094,10 +2679,10 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) return true; } -static int walk_one_sys_reg(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 __user **uind, - unsigned int *total) +int walk_one_sys_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 __user **uind, + unsigned int *total) { /* * Ignore registers we trap but don't save, @@ -3138,6 +2723,7 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu) { return ARRAY_SIZE(invariant_sys_regs) + num_demux_regs() + + kvm_arm_walk_id_regs(vcpu, (u64 __user *)NULL) + walk_sys_regs(vcpu, (u64 __user *)NULL); } @@ -3153,6 +2739,11 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) uindices++; } + err = kvm_arm_walk_id_regs(vcpu, uindices); + if (err < 0) + return err; + uindices += err; + err = walk_sys_regs(vcpu, uindices); if (err < 0) return err; @@ -3167,6 +2758,7 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. */ + valid &= kvm_arm_check_idreg_table(); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index e4ebb3a379fd..3a7792679481 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -200,6 +200,22 @@ find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[], return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg); } +static inline unsigned int raz_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + return REG_RAZ; +} + +static inline bool write_to_read_only(struct kvm_vcpu *vcpu, + struct sys_reg_params *params, + const struct sys_reg_desc *r) +{ + WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n"); + print_sys_reg_instr(params); + kvm_inject_undefined(vcpu); + return false; +} + const struct sys_reg_desc *get_reg_by_id(u64 id, const struct sys_reg_desc table[], unsigned int num); @@ -210,6 +226,18 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, const struct sys_reg_desc table[], unsigned int num); int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, const struct sys_reg_desc table[], unsigned int num); +bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n, + bool is_32); +int walk_one_sys_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 __user **uind, + unsigned int *total); +const struct sys_reg_desc *kvm_arm_find_id_reg(const struct sys_reg_params *params); +void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu); +int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); +int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); +bool kvm_arm_check_idreg_table(void); +int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind); #define AA32(_x) .aarch32_map = AA32_##_x #define Op0(_x) .Op0 = _x
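[Editor's note: the sketch below is an illustration added for this archive, not part of the posted series. Patch 1 splits the descriptors across two sorted tables and resolves a trapped encoding by searching the new ID-register table first, falling back to the main sys-reg table (see emulate_sys_reg() above). The encodings and names in this standalone C sketch are made up for the example; only the lookup pattern mirrors the patch.]

#include <stdio.h>
#include <stdlib.h>

struct desc { unsigned int encoding; const char *name; };

/* Both tables must stay sorted by encoding; check_sysreg_table() enforces this. */
static const struct desc id_table[]  = { { 0x10, "ID_REG_A" }, { 0x11, "ID_REG_B" } };
static const struct desc sys_table[] = { { 0x20, "SYS_REG_X" }, { 0x30, "SYS_REG_Y" } };

static int cmp(const void *key, const void *elt)
{
	unsigned int k = *(const unsigned int *)key;
	const struct desc *d = elt;
	return (k > d->encoding) - (k < d->encoding);
}

static const struct desc *find(unsigned int enc, const struct desc *t, size_t n)
{
	return bsearch(&enc, t, n, sizeof(*t), cmp); /* binary search, like find_reg() */
}

int main(void)
{
	unsigned int enc = 0x11;
	/* Mirrors the new emulate_sys_reg(): ID table first, then the main table. */
	const struct desc *d = find(enc, id_table, 2);
	if (!d)
		d = find(enc, sys_table, 2);
	printf("%s\n", d ? d->name : "unknown -> inject UNDEF");
	return 0;
}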
From patchwork Wed Feb 1 02:50:44 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13123675 Date: Wed, 1 Feb 2023 02:50:44 +0000 In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com> References: <20230201025048.205820-1-jingzhangos@google.com> Message-ID: <20230201025048.205820-3-jingzhangos@google.com> Subject: [PATCH v1 2/6] KVM: arm64: Save ID registers' sanitized value per guest From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller ,
Raghavendra Rao Ananta , Jing Zhang From: Reiji Watanabe Introduce id_regs[] in kvm_arch as storage for the guest's ID registers, and save each ID register's sanitized value in the array at KVM_CREATE_VM time. Use the saved values when ID registers are read by the guest or by userspace (via KVM_GET_ONE_REG). No functional change intended. Signed-off-by: Reiji Watanabe Co-developed-by: Jing Zhang Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 12 +++++++ arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/id_regs.c | 53 ++++++++++++++++++++++++++----- arch/arm64/kvm/sys_regs.c | 2 +- arch/arm64/kvm/sys_regs.h | 1 + 5 files changed, 60 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index b14a0199eba4..b1beef93465c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -240,6 +240,16 @@ struct kvm_arch { * the associated pKVM instance in the hypervisor. */ struct kvm_protected_vm pkvm; + + /* + * Save ID registers for the guest in id_regs[]. + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8. + */ +#define KVM_ARM_ID_REG_NUM 56 +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id)) +#define IDREG(kvm, id) kvm->arch.id_regs[IDREG_IDX(id)] + u64 id_regs[KVM_ARM_ID_REG_NUM]; }; struct kvm_vcpu_fault_info { @@ -967,6 +977,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); +void kvm_arm_set_default_id_regs(struct kvm *kvm); + /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 698787ed87e9..d525b71d0523 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -153,6 +153,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) set_default_spectre(kvm); kvm_arm_init_hypercalls(kvm); + kvm_arm_set_default_id_regs(kvm); /* * Initialise the default PMUver before there is a chance to diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 386b3fbdee9f..f53ce00ab14d 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -51,16 +51,20 @@ static u8 pmuver_to_perfmon(u8 pmuver) } } -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +/* + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */ +static bool is_id_reg(u32 id) { - u32 id = reg_to_encoding(r); - u64 val; - - if (sysreg_visible_as_raz(vcpu, r)) - return 0; + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && + sys_reg_CRm(id) < 8); +} - val = read_sanitised_ftr_reg(id); +u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id) +{ + u64 val = IDREG(vcpu->kvm, id); switch (id) { case SYS_ID_AA64PFR0_EL1: @@ -125,6 +129,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r return val; } +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +{ + if (sysreg_visible_as_raz(vcpu, r)) + return 0; + + return kvm_arm_read_id_reg_with_encoding(vcpu, reg_to_encoding(r)); +} + /* cpufeature ID register access trap handlers */ static bool access_id_reg(struct kvm_vcpu *vcpu, @@ -480,3 +492,28 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) } return total; } + +/* + * Set the guest's ID registers that are defined in id_reg_descs[] + * with ID_SANITISED() to the host's sanitized value. + */ +void kvm_arm_set_default_id_regs(struct kvm *kvm) +{ + int i; + u32 id; + u64 val; + + for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { + id = reg_to_encoding(&id_reg_descs[i]); + if (WARN_ON_ONCE(!is_id_reg(id))) + /* Shouldn't happen */ + continue; + + if (id_reg_descs[i].visibility == raz_visibility) + /* Hidden or reserved ID register */ + continue; + + val = read_sanitised_ftr_reg(id); + IDREG(kvm, id) = val; + } +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 56b0c596ddf6..4dec5b038d77 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -333,7 +333,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + u64 val = kvm_arm_read_id_reg_with_encoding(vcpu, SYS_ID_AA64MMFR1_EL1); u32 sr = reg_to_encoding(r); if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) { diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 3a7792679481..3b59cc940148 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -238,6 +238,7 @@ int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); bool kvm_arm_check_idreg_table(void); int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind); +u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id); #define AA32(_x) .aarch32_map = AA32_##_x #define Op0(_x) .Op0 = _x
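[Editor's note: an illustration added for this archive, not part of the posted series. It shows, as standalone C, how the IDREG_IDX()/IDREG() macros introduced in this patch pack the ID-register block (Op0=3, Op1=0, CRn=0, CRm=1..7, Op2=0..7) into the 7 * 8 = 56 slots of id_regs[], which is where KVM_ARM_ID_REG_NUM comes from. The CRm/Op2 parameters below stand in for the kernel's sys_reg_CRm()/sys_reg_Op2() extractors.]

#include <stdio.h>

/* Same arithmetic as IDREG_IDX(): ((CRm - 1) << 3) | Op2 */
#define IDREG_IDX(crm, op2) ((((crm) - 1) << 3) | (op2))

int main(void)
{
	/* ID_AA64PFR0_EL1 is (3, 0, 0, 4, 0): CRm=4, Op2=0 -> id_regs[24] */
	printf("ID_AA64PFR0_EL1 -> id_regs[%d]\n", IDREG_IDX(4, 0));
	/* ID_AA64DFR0_EL1 is (3, 0, 0, 5, 0): CRm=5, Op2=0 -> id_regs[32] */
	printf("ID_AA64DFR0_EL1 -> id_regs[%d]\n", IDREG_IDX(5, 0));
	/* Highest slot is CRm=7, Op2=7 -> 55, hence KVM_ARM_ID_REG_NUM == 56 */
	printf("max index = %d\n", IDREG_IDX(7, 7));
	return 0;
}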
From patchwork Wed Feb 1 02:50:45 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13123676 Date: Wed, 1 Feb 2023 02:50:45 +0000 In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com> References: <20230201025048.205820-1-jingzhangos@google.com> Message-ID: <20230201025048.205820-4-jingzhangos@google.com> Subject: [PATCH v1 3/6] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3] From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang With per guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings from userspace can be stored in the guest's copy of the corresponding ID register. No functional change intended.
Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 3 +-- arch/arm64/kvm/arm.c | 19 +------------------ arch/arm64/kvm/hyp/nvhe/sys_regs.c | 7 +++---- arch/arm64/kvm/id_regs.c | 30 ++++++++++++++++++++++-------- 4 files changed, 27 insertions(+), 32 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index b1beef93465c..fabb30185a4a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -225,8 +225,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - u8 pfr0_csv2; - u8 pfr0_csv3; struct { u8 imp:4; u8 unimp:4; @@ -249,6 +247,7 @@ struct kvm_arch { #define KVM_ARM_ID_REG_NUM 56 #define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id)) #define IDREG(kvm, id) kvm->arch.id_regs[IDREG_IDX(id)] +#define IDREG_RD(kvm, rd) IDREG(kvm, reg_to_encoding(rd)) u64 id_regs[KVM_ARM_ID_REG_NUM]; }; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index d525b71d0523..d8ba5106bf51 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -104,22 +104,6 @@ static int kvm_arm_default_max_vcpus(void) return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS; } -static void set_default_spectre(struct kvm *kvm) -{ - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible. - */ - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv2 = 1; - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv3 = 1; -} - /** * kvm_arch_init_vm - initializes a VM data structure * @kvm: pointer to the KVM struct @@ -151,9 +135,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - set_default_spectre(kvm); - kvm_arm_init_hypercalls(kvm); kvm_arm_set_default_id_regs(kvm); + kvm_arm_init_hypercalls(kvm); /* * Initialise the default PMUver before there is a chance to diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 0f9ac25afdf4..03919d342136 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -92,10 +92,9 @@ static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu) PVM_ID_AA64PFR0_RESTRICT_UNSIGNED); /* Spectre and Meltdown mitigation in KVM */ - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), - (u64)kvm->arch.pfr0_csv2); - set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), - (u64)kvm->arch.pfr0_csv3); + set_mask |= IDREG(kvm, SYS_ID_AA64PFR0_EL1) & + (ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask; } diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index f53ce00ab14d..bc5d9bc84eb1 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -71,12 +71,6 @@ u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id) if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), - (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= 
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), - (u64)vcpu->kvm->arch.pfr0_csv3); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -208,6 +202,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, u64 val) { u8 csv2, csv3; + u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -232,8 +227,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; + IDREG_RD(vcpu->kvm, rd) = sval; return 0; } @@ -516,4 +510,24 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm) val = read_sanitised_ftr_reg(id); IDREG(kvm, id) = val; } + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. + */ + val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); + + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; }
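[Editor's note: an illustration added for this archive, not part of the posted series. It reproduces, in standalone C, the clear-then-FIELD_PREP idiom used above to fold the CSV2/CSV3 defaults into the saved ID_AA64PFR0_EL1 value. The field positions (CSV2 at bits [59:56], CSV3 at bits [63:60]) follow the Arm ARM; the helper below is a simplified stand-in for FIELD_PREP() with ARM64_FEATURE_MASK().]

#include <stdio.h>
#include <stdint.h>

#define CSV2_SHIFT 56
#define CSV3_SHIFT 60
#define CSV2_MASK (0xfULL << CSV2_SHIFT)
#define CSV3_MASK (0xfULL << CSV3_SHIFT)

/* Clear the 4-bit field, then insert the new value (FIELD_PREP-style). */
static uint64_t set_field(uint64_t reg, uint64_t mask, unsigned int shift, uint64_t v)
{
	reg &= ~mask;
	reg |= (v << shift) & mask;
	return reg;
}

int main(void)
{
	uint64_t pfr0 = 0; /* pretend sanitised value with CSV2/CSV3 == 0 */

	/* HW unaffected by Spectre-v2/Meltdown: advertise CSV2 == CSV3 == 1 */
	pfr0 = set_field(pfr0, CSV2_MASK, CSV2_SHIFT, 1);
	pfr0 = set_field(pfr0, CSV3_MASK, CSV3_SHIFT, 1);
	printf("ID_AA64PFR0_EL1 = %#llx\n", (unsigned long long)pfr0);
	return 0;
}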
From patchwork Wed Feb 1 02:50:46 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13123677 Date: Wed, 1 Feb 2023 02:50:46 +0000 In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com> References: <20230201025048.205820-1-jingzhangos@google.com> Message-ID: <20230201025048.205820-5-jingzhangos@google.com> Subject: [PATCH v1 4/6] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang With per guest ID registers, the PMUVer setting from userspace can be stored in the guest's copy of the corresponding ID register. No functional change intended. Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 5 ----- arch/arm64/kvm/arm.c | 6 ------ arch/arm64/kvm/id_regs.c | 33 ++++++++++++++++++++----------- include/kvm/arm_pmu.h | 6 ++++-- 4 files changed, 25 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fabb30185a4a..1ab443b52c46 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -225,11 +225,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - struct { - u8 imp:4; - u8 unimp:4; - } dfr0_pmuver; - /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index d8ba5106bf51..25bd95650223 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -138,12 +138,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_arm_set_default_id_regs(kvm); kvm_arm_init_hypercalls(kvm); - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU.
 arch/arm64/include/asm/kvm_host.h | 5 ----- arch/arm64/kvm/arm.c | 6 ------ arch/arm64/kvm/id_regs.c | 33 ++++++++++++++++++++----------- include/kvm/arm_pmu.h | 6 ++++-- 4 files changed, 25 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fabb30185a4a..1ab443b52c46 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -225,11 +225,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - struct { - u8 imp:4; - u8 unimp:4; - } dfr0_pmuver; - /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index d8ba5106bf51..25bd95650223 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -138,12 +138,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_arm_set_default_id_regs(kvm); kvm_arm_init_hypercalls(kvm); - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); - return 0; err_free_cpumask: diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index bc5d9bc84eb1..5eade7d380af 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -19,10 +19,12 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { - if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; - - return vcpu->kvm->arch.dfr0_pmuver.unimp; + u8 pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)); + if (kvm_vcpu_has_pmu(vcpu) || pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) + return pmuver; + else + return 0; } static u8 perfmon_to_pmuver(u8 perfmon) @@ -263,10 +265,9 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); return 0; } @@ -303,10 +304,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon_to_pmuver(perfmon)); return 0; } @@ -530,4 +530,13 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm) } IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val; + + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + IDREG(kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + IDREG(kvm, SYS_ID_AA64DFR0_EL1) |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); } diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 628775334d5e..eef67b7d9751 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -92,8 +92,10 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); /* * Evaluates as true when emulating PMUv3p5, and false otherwise. */ -#define kvm_pmu_is_3p5(vcpu) \ - (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5) +#define kvm_pmu_is_3p5(vcpu) \ + (kvm_vcpu_has_pmu(vcpu) && \ + FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), \ + IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5) u8 kvm_arm_pmu_get_pmuver_limit(void);
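One subtlety worth spelling out (illustration only): with the value held in the register copy, the PMUv3p5 test becomes a field extract plus an ordered comparison of version encodings, guarded by kvm_vcpu_has_pmu() so that an IMP_DEF (0xF) encoding on a PMU-less guest cannot satisfy the >= test. A boiled-down sketch with hypothetical names and illustrative field placement:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PMUVER_MASK  0xF00ULL   /* illustrative: PMUVer as a 4-bit field */
#define PMUVER_V3P5  6          /* version encodings order monotonically */

static bool pmu_is_3p5(uint64_t id_aa64dfr0, bool has_pmu)
{
        uint8_t pmuver = (id_aa64dfr0 & PMUVER_MASK) >> 8;

        /* No PMU at all means no PMUv3p5, whatever the field says */
        return has_pmu && pmuver >= PMUVER_V3P5;
}

int main(void)
{
        printf("%d\n", pmu_is_3p5(6ULL << 8, true));   /* 1: PMUv3p5 */
        printf("%d\n", pmu_is_3p5(1ULL << 8, true));   /* 0: base PMUv3 */
        return 0;
}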
From patchwork Wed Feb 1 02:50:47 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13123678
Date: Wed, 1 Feb 2023 02:50:47 +0000
In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com>
References: <20230201025048.205820-1-jingzhangos@google.com>
Message-ID: <20230201025048.205820-6-jingzhangos@google.com>
Subject: [PATCH v1 5/6] KVM: arm64: Introduce ID register specific descriptor
From: Jing Zhang
To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton
Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

Introduce an ID-register-specific descriptor that embeds the corresponding general system register descriptor and carries the ID-register-specific fields and callbacks. New fields will be added to the descriptor later, as they become necessary to support writable ID registers. No functional change intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/id_regs.c | 187 +++++++++++++++++++++++++++++--------- arch/arm64/kvm/sys_regs.c | 2 +- arch/arm64/kvm/sys_regs.h | 1 + 3 files changed, 144 insertions(+), 46 deletions(-) diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 5eade7d380af..435f38e3ceb3 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -17,6 +17,10 @@ #include "sys_regs.h" +struct id_reg_desc { + const struct sys_reg_desc reg_desc; +}; + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { u8 pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), @@ -312,21 +316,25 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ +#define ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ + }, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ +#define AA32_ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ + }, \ } /* @@ -334,12 +342,14 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 * (1 <= crm < 8, 0 <= Op2 < 8). */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ +#define ID_UNALLOCATED(crm, op2) { \ + .reg_desc = { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ + }, \ } /* @@ -347,15 +357,17 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * For now, these are exposed just like unallocated ID regs: they appear * RAZ for the guest.
*/ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ +#define ID_HIDDEN(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ + }, \ } -static const struct sys_reg_desc id_reg_descs[] = { +static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[]. @@ -365,9 +377,13 @@ static const struct sys_reg_desc id_reg_descs[] = { /* CRm=1 */ AA32_ID_SANITISED(ID_PFR0_EL1), AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, + { .reg_desc = { + SYS_DESC(SYS_ID_DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), AA32_ID_SANITISED(ID_MMFR1_EL1), @@ -396,8 +412,12 @@ static const struct sys_reg_desc id_reg_descs[] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64PFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64pfr0_el1, }, + }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), ID_UNALLOCATED(4, 3), @@ -407,8 +427,12 @@ static const struct sys_reg_desc id_reg_descs[] = { ID_UNALLOCATED(4, 7), /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64dfr0_el1, }, + }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), ID_UNALLOCATED(5, 3), @@ -440,7 +464,13 @@ static const struct sys_reg_desc id_reg_descs[] = { const struct sys_reg_desc *kvm_arm_find_id_reg(const struct sys_reg_params *params) { - return find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u32 id; + + id = reg_to_encoding(params); + if (!is_id_reg(id)) + return NULL; + + return &id_reg_descs[IDREG_IDX(id)].reg_desc; } void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu) @@ -448,39 +478,106 @@ void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu) unsigned long i; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) - if (id_reg_descs[i].reset) - id_reg_descs[i].reset(vcpu, &id_reg_descs[i]); + if (id_reg_descs[i].reg_desc.reset) + id_reg_descs[i].reg_desc.reset(vcpu, &id_reg_descs[i].reg_desc); } int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return kvm_sys_reg_get_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + u32 id; + + if (!index_to_params(reg->id, &params)) + return -ENOENT; + id = reg_to_encoding(&params); + + if (!is_id_reg(id)) + return -ENOENT; + + r = &id_reg_descs[IDREG_IDX(id)].reg_desc; + if (r->get_user) { + ret = (r->get_user)(vcpu, r, &val); + } else { + ret = 0; + val = 0; + } + + if (!ret) + ret = put_user(val, uaddr); + + return ret; } int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return
kvm_sys_reg_set_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + u32 id; + + if (!index_to_params(reg->id, &params)) + return -ENOENT; + id = reg_to_encoding(&params); + + if (!is_id_reg(id)) + return -ENOENT; + + if (get_user(val, uaddr)) + return -EFAULT; + + r = &id_reg_descs[IDREG_IDX(id)].reg_desc; + + if (sysreg_user_write_ignore(vcpu, r)) + return 0; + + if (r->set_user) + ret = (r->set_user)(vcpu, r, val); + else + ret = 0; + + return ret; } bool kvm_arm_check_idreg_table(void) { - return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false); + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { + const struct sys_reg_desc *r = &id_reg_descs[i].reg_desc; + + if (r->reg && !r->reset) { + kvm_err("sys_reg table %pS entry %d lacks reset\n", r, i); + return false; + } + + if (i && cmp_sys_reg(&id_reg_descs[i-1].reg_desc, r) >= 0) { + kvm_err("sys_reg table %pS entry %d out of order\n", + &id_reg_descs[i - 1].reg_desc, i - 1); + return false; + } + } + + return true; } /* Assumed ordered tables, see kvm_sys_reg_table_init. */ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) { - const struct sys_reg_desc *i2, *end2; + const struct id_reg_desc *i2, *end2; unsigned int total = 0; int err; i2 = id_reg_descs; end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs); - while (i2 != end2) { - err = walk_one_sys_reg(vcpu, i2++, &uind, &total); + for (; i2 != end2; i2++) { + err = walk_one_sys_reg(vcpu, &(i2->reg_desc), &uind, &total); if (err) return err; } @@ -498,12 +595,12 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm) u64 val; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { - id = reg_to_encoding(&id_reg_descs[i]); + id = reg_to_encoding(&id_reg_descs[i].reg_desc); if (WARN_ON_ONCE(!is_id_reg(id))) /* Shouldn't happen */ continue; - if (id_reg_descs[i].visibility == raz_visibility) + if (id_reg_descs[i].reg_desc.visibility == raz_visibility) /* Hidden or reserved ID register */ continue; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 4dec5b038d77..11d302a75090 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2365,7 +2365,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu) * Userspace API *****************************************************************************/ -static bool index_to_params(u64 id, struct sys_reg_params *params) +bool index_to_params(u64 id, struct sys_reg_params *params) { switch (id & KVM_REG_SIZE_MASK) { case KVM_REG_SIZE_U64: diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 3b59cc940148..3646be75b920 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -216,6 +216,7 @@ static inline bool write_to_read_only(struct kvm_vcpu *vcpu, return false; } +bool index_to_params(u64 id, struct sys_reg_params *params); const struct sys_reg_desc *get_reg_by_id(u64 id, const struct sys_reg_desc table[], unsigned int num);
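Before the final patch builds on it, it may help to see the shape of the wrapping introduced here: the generic sys_reg_desc is embedded in an outer descriptor, and the table is indexed by a dense encoding-derived index rather than searched. A stand-alone sketch under stated assumptions (the structs are simplified, and the idreg_idx() mapping is a hypothetical stand-in for the series' IDREG_IDX()):

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins: the real structs live in sys_regs.h */
struct sys_reg_desc {
        uint8_t crm, op2;
        const char *name;
};

/* The wrapping pattern from the patch: embed the generic descriptor */
struct id_reg_desc {
        struct sys_reg_desc reg_desc;
        /* ID-register-specific fields get added here in later patches */
};

/* ID regs occupy Op0=3, Op1=0, CRn=0, CRm=1..7, Op2=0..7: 56 slots.
 * Hypothetical encoding-to-index mapping in the spirit of IDREG_IDX(): */
static unsigned int idreg_idx(uint8_t crm, uint8_t op2)
{
        return ((crm - 1) << 3) | op2;
}

static const struct id_reg_desc descs[56] = {
        [0] = { .reg_desc = { .crm = 1, .op2 = 0, .name = "ID_PFR0_EL1" } },
        /* ... */
};

int main(void)
{
        const struct id_reg_desc *d = &descs[idreg_idx(1, 0)];

        printf("%s\n", d->reg_desc.name ? d->reg_desc.name : "(unallocated)");
        return 0;
}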
From patchwork Wed Feb 1 02:50:48 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13123679
Date: Wed, 1 Feb 2023 02:50:48 +0000
In-Reply-To: <20230201025048.205820-1-jingzhangos@google.com>
References: <20230201025048.205820-1-jingzhangos@google.com>
Message-ID: <20230201025048.205820-7-jingzhangos@google.com>
Subject: [PATCH v1 6/6] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
From: Jing Zhang
To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton
Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

Save the KVM-sanitised ID register value in the ID descriptor (kvm_sys_val). Add an init callback for every ID register to set up kvm_sys_val. All per-vCPU sanitisation is still handled on the fly during ID register reads and writes from userspace. An arm64_ftr_bits array is used to indicate which feature fields are writable. Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon to use the utilities introduced by the ID register descriptor. No functional change intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
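A note on the validation model for reviewers (illustration, not part of the patch): arm64_check_features() accepts a userspace value only if every field listed in ftr_bits is safe against the limit (for an unsigned FTR_LOWER_SAFE field, lower or equal), and every bit not covered by a listed field matches the limit exactly. A boiled-down sketch that ignores signed fields and FTR_EXACT:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One writable field: lower values are the safe direction */
struct ftr_field {
        unsigned int shift;
        uint64_t     mask;      /* mask already shifted into place */
};

static bool check_features(const struct ftr_field *f, unsigned int n,
                           uint64_t val, uint64_t limit)
{
        uint64_t writable = 0;

        for (unsigned int i = 0; i < n; i++) {
                uint64_t v = (val   & f[i].mask) >> f[i].shift;
                uint64_t l = (limit & f[i].mask) >> f[i].shift;

                writable |= f[i].mask;
                if (v > l)              /* FTR_LOWER_SAFE, unsigned case */
                        return false;
        }

        /* Everything not explicitly writable must match the limit */
        return (val & ~writable) == (limit & ~writable);
}

int main(void)
{
        const struct ftr_field csv2 = { .shift = 56, .mask = 0xFULL << 56 };
        uint64_t limit = 1ULL << 56;                 /* CSV2 == 1 allowed */

        printf("%d\n", check_features(&csv2, 1, 0, limit));          /* 1 */
        printf("%d\n", check_features(&csv2, 1, 2ULL << 56, limit)); /* 0 */
        return 0;
}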
 arch/arm64/include/asm/cpufeature.h | 25 +++ arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kernel/cpufeature.c | 26 +-- arch/arm64/kvm/arm.c | 2 +- arch/arm64/kvm/id_regs.c | 305 ++++++++++++++++++---------- arch/arm64/kvm/sys_regs.c | 3 +- arch/arm64/kvm/sys_regs.h | 2 +- 7 files changed, 232 insertions(+), 133 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 03d1c9d7af82..a64c75bc75c8 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -64,6 +64,30 @@ struct arm64_ftr_bits { s64 safe_val; /* safe value for FTR_EXACT features */ }; +#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + { \ + .sign = SIGNED, \ + .visible = VISIBLE, \ + .strict = STRICT, \ + .type = TYPE, \ + .shift = SHIFT, \ + .width = WIDTH, \ + .safe_val = SAFE_VAL, \ + } + +/* Define a feature with unsigned values */ +#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + __ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) + +/* Define a feature with a signed value */ +#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + __ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) + +#define ARM64_FTR_END \ + { \ + .width = 0, \ + } + /* * Describe the early feature override to the core override code: * @@ -905,6 +929,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) return 8; } +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur); struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); extern struct arm64_ftr_override id_aa64mmfr1_override; diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 1ab443b52c46..d139aace06b1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -971,7 +971,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); -void kvm_arm_set_default_id_regs(struct kvm *kvm); +void kvm_arm_init_id_regs(struct kvm *kvm); /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index a77315b338e6..00ddaf04c649 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -139,30 +139,6 @@ void dump_cpu_features(void) pr_emerg("0x%*pb\n", ARM64_NCAPS, &cpu_hwcaps); } -#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ - { \ - .sign = SIGNED, \ - .visible = VISIBLE, \ - .strict = STRICT, \ - .type = TYPE, \ - .shift = SHIFT, \ - .width = WIDTH, \ - .safe_val = SAFE_VAL, \ - } - -/* Define a feature with unsigned values */ -#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ - __ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) - -/* Define a feature with a signed value */ -#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ - __ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) - -#define ARM64_FTR_END \ - { \ - .width = 0, \ - } - static void cpu_enable_cnp(struct
arm64_cpu_capabilities const *cap); static bool __system_matches_cap(unsigned int n); @@ -780,7 +756,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg, return reg; } -static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur) { s64 ret = 0; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 25bd95650223..f76edcf5e7a5 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -135,7 +135,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - kvm_arm_set_default_id_regs(kvm); + kvm_arm_init_id_regs(kvm); kvm_arm_init_hypercalls(kvm); return 0; diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 435f38e3ceb3..aed11cdd6fee 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -17,10 +17,88 @@ #include "sys_regs.h" +/* + * Number of entries in id_reg_desc's ftr_bits[] (number of 4-bit fields + * in a 64-bit register, plus one terminator entry). + */ +#define FTR_FIELDS_NUM 17 + struct id_reg_desc { const struct sys_reg_desc reg_desc; + /* + * KVM sanitised ID register value. + * It is the default value for the per-VM emulated ID register. + */ + u64 kvm_sys_val; + /* + * Used to validate the ID register values with arm64_check_features(). + * The array must be terminated by an item whose width field is + * zero, as that is expected by arm64_check_features(). + * Only feature bits defined in this array are writable. + */ + struct arm64_ftr_bits ftr_bits[FTR_FIELDS_NUM]; + + /* + * init() is used to set up the KVM sanitised value + * stored in kvm_sys_val. + */ + void (*init)(struct id_reg_desc *idr); }; + +static struct id_reg_desc id_reg_descs[]; + +/** + * arm64_check_features() - Check if a feature register value constitutes + * a subset of features indicated by @limit. + * + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by + * an item whose width field is zero. + * @val: The feature register value to check + * @limit: The limit value of the feature register + * + * This function will check if each feature field of @val is the "safe" value + * against @limit based on @ftrp[], each of which specifies the target field + * (shift, width), whether or not the field is for a signed value (sign), + * how the field is determined to be "safe" (type), and the safe value + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits + * won't be used by this function. If a field value in @val is the same + * as the one in @limit, it is always considered the safe value regardless + * of the type. For register fields that are not in @ftrp[], only the value + * in @limit is considered the safe value. + * + * Return: 0 if all the fields are safe. Otherwise, return negative errno.
+ */ +static int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit) +{ + u64 mask = 0; + + for (; ftrp->width; ftrp++) { + s64 f_val, f_lim, safe_val; + + f_val = arm64_ftr_value(ftrp, val); + f_lim = arm64_ftr_value(ftrp, limit); + mask |= arm64_ftr_mask(ftrp); + + if (f_val == f_lim) + safe_val = f_val; + else + safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim); + + if (safe_val != f_val) + return -E2BIG; + } + + /* + * For fields that are not indicated in ftrp, values in limit are the + * safe values. + */ + if ((val & ~mask) != (limit & ~mask)) + return -E2BIG; + + return 0; +} + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { u8 pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), @@ -31,19 +109,6 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) return 0; } -static u8 perfmon_to_pmuver(u8 perfmon) -{ - switch (perfmon) { - case ID_DFR0_EL1_PerfMon_PMUv3: - return ID_AA64DFR0_EL1_PMUVer_IMP; - case ID_DFR0_EL1_PerfMon_IMPDEF: - return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. */ - return perfmon; - } -} - static u8 pmuver_to_perfmon(u8 pmuver) { switch (pmuver) { @@ -76,7 +141,6 @@ u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id) case SYS_ID_AA64PFR0_EL1: if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -103,15 +167,10 @@ u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id) val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); break; case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); /* Set PMUver to the required version */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); break; case SYS_ID_DFR0_EL1: val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); @@ -167,9 +226,15 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; + int ret; + int id = reg_to_encoding(rd); + + ret = arm64_check_features(id_reg_descs[IDREG_IDX(id)].ftr_bits, val, + id_reg_descs[IDREG_IDX(id)].kvm_sys_val); + if (ret) + return ret; + + IDREG(vcpu->kvm, id) = val; return 0; } @@ -203,12 +268,47 @@ static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, return id_visibility(vcpu, r); } +static void init_id_reg(struct id_reg_desc *idr) +{ + idr->kvm_sys_val = read_sanitised_ftr_reg(reg_to_encoding(&idr->reg_desc)); +} + +static void init_id_aa64pfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); + + val = read_sanitised_ftr_reg(id); + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
+ */ + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); + + idr->kvm_sys_val = val; +} + static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 csv2, csv3; - u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -226,16 +326,29 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) return -EINVAL; - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; + return set_id_reg(vcpu, rd, val); +} - IDREG_RD(vcpu->kvm, rd) = sval; +static void init_id_aa64dfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); - return 0; + val = read_sanitised_ftr_reg(id); + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + + idr->kvm_sys_val = val; } static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, @@ -263,17 +376,24 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; + return set_id_reg(vcpu, rd, val); +} + +static void init_id_dfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); - IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); + val = read_sanitised_ftr_reg(id); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. 
+ */ + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), + pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit())); - return 0; + idr->kvm_sys_val = val; } static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, @@ -302,28 +422,22 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - - IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon_to_pmuver(perfmon)); - - return 0; + return set_id_reg(vcpu, rd, val); } /* sys_reg_desc initialiser for known cpufeature ID registers */ +#define SYS_DESC_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + #define ID_SANITISED(name) { \ - .reg_desc = { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ - }, \ + .reg_desc = SYS_DESC_SANITISED(name), \ + .ftr_bits = { ARM64_FTR_END, }, \ + .init = init_id_reg, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ @@ -335,6 +449,8 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = aa32_id_visibility, \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ + .init = init_id_reg, \ } /* @@ -350,6 +466,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = raz_visibility \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ } /* @@ -365,9 +482,10 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = raz_visibility, \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ } -static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { +static struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[].
@@ -383,6 +501,11 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .get_user = get_id_reg, .set_user = set_id_dfr0_el1, .visibility = aa32_id_visibility, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_DFR0_EL1_PerfMon_SHIFT, ID_DFR0_EL1_PerfMon_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_dfr0_el1, }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), @@ -417,6 +540,13 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64PFR0_EL1_CSV2_SHIFT, ID_AA64PFR0_EL1_CSV2_WIDTH, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64PFR0_EL1_CSV3_SHIFT, ID_AA64PFR0_EL1_CSV3_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_aa64pfr0_el1, }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), @@ -432,6 +562,11 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64DFR0_EL1_PMUVer_SHIFT, ID_AA64DFR0_EL1_PMUVer_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_aa64dfr0_el1, }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), @@ -544,7 +679,7 @@ int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return ret; } -bool kvm_arm_check_idreg_table(void) +bool kvm_arm_idreg_table_init(void) { unsigned int i; @@ -561,6 +696,9 @@ bool kvm_arm_check_idreg_table(void) &id_reg_descs[i - 1].reg_desc, i - 1); return false; } + + if (id_reg_descs[i].init) + id_reg_descs[i].init(&id_reg_descs[i]); } return true; @@ -585,55 +723,16 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) } /* - * Set the guest's ID registers that are defined in id_reg_descs[] - * with ID_SANITISED() to the host's sanitized value. + * Initialize the guest's ID registers with the KVM sanitised values that were + * set up during ID register descriptor initialization. */ -void kvm_arm_set_default_id_regs(struct kvm *kvm) +void kvm_arm_init_id_regs(struct kvm *kvm) { int i; u32 id; - u64 val; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { id = reg_to_encoding(&id_reg_descs[i].reg_desc); - if (WARN_ON_ONCE(!is_id_reg(id))) /* Shouldn't happen */ continue; - - if (id_reg_descs[i].reg_desc.visibility == raz_visibility) /* Hidden or reserved ID register */ continue; - - val = read_sanitised_ftr_reg(id); - IDREG(kvm, id) = val; - } - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible. - */ - val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); - - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); - } - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + IDREG(kvm, id) = id_reg_descs[i].kvm_sys_val; } - - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU.
- */ - IDREG(kvm, SYS_ID_AA64DFR0_EL1) &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - IDREG(kvm, SYS_ID_AA64DFR0_EL1) |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - kvm_arm_pmu_get_pmuver_limit()); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 11d302a75090..91bc80f40e72 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2758,14 +2758,13 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. */ - valid &= kvm_arm_check_idreg_table(); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true); valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true); valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false); - + valid &= kvm_arm_idreg_table_init(); if (!valid) return -EINVAL; diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 3646be75b920..e1e0cc0d0cde 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -237,7 +237,7 @@ const struct sys_reg_desc *kvm_arm_find_id_reg(const struct sys_reg_params *para void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu); int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -bool kvm_arm_check_idreg_table(void); +bool kvm_arm_idreg_table_init(void); int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind); u64 kvm_arm_read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
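Seen end to end, the series means a VMM can legitimately downgrade a writable feature field through the usual one-reg interface. A sketch of that flow (assumes an arm64 host with the KVM uapi headers; vcpu fd setup and error handling elided):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* ID_AA64PFR0_EL1 is Op0=3, Op1=0, CRn=0, CRm=4, Op2=0 */
#define AA64PFR0_ID ARM64_SYS_REG(3, 0, 0, 4, 0)

/* Clear CSV2/CSV3 (bits [59:56] and [63:60]): a "safe" downgrade that the
 * arm64_check_features()-based validation accepts, since lower values stay
 * within the per-VM limit. */
static int downgrade_csv(int vcpu_fd)
{
        uint64_t val;
        struct kvm_one_reg reg = {
                .id   = AA64PFR0_ID,
                .addr = (uint64_t)(unsigned long)&val,
        };

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                return -1;

        val &= ~(0xFFULL << 56);        /* CSV2 = CSV3 = 0 */

        /* Would be rejected if it tried to *raise* a field instead */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}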