From patchwork Mon Apr 24 23:46:59 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13222726
Date: Mon, 24 Apr 2023 23:46:59 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-2-jingzhangos@google.com>
Subject: [PATCH v7 1/6] KVM: arm64: Move CPU ID feature registers emulation into a separate file
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang

Create a new file, id_regs.c, for the CPU ID feature register emulation code that is moved out of sys_regs.c, and tweak the sys_regs code accordingly. No functional change intended.

Signed-off-by: Jing Zhang
--- arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/id_regs.c | 460 +++++++++++++++++++++++++++++++++++++ arch/arm64/kvm/sys_regs.c | 464 ++++---------------------------------- arch/arm64/kvm/sys_regs.h | 19 ++ 4 files changed, 519 insertions(+), 426 deletions(-) create mode 100644 arch/arm64/kvm/id_regs.c diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index c0c050e53157..a6a315fcd81e 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ inject_fault.o va_layout.o handle_exit.o \ guest.o debug.o reset.o sys_regs.o stacktrace.o \ - vgic-sys-reg-v3.o fpsimd.o pkvm.o \ + vgic-sys-reg-v3.o fpsimd.o pkvm.o id_regs.o \ arch_timer.o trng.o vmid.o emulate-nested.o nested.o \ vgic/vgic.o vgic/vgic-init.o \ vgic/vgic-irqfd.o vgic/vgic-v2.o \ diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c new file mode 100644 index 000000000000..96b4c43a5100 --- /dev/null +++ b/arch/arm64/kvm/id_regs.c @@ -0,0 +1,460 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 - Google LLC + * Author: Jing Zhang + * + * Moved from arch/arm64/kvm/sys_regs.c + * Copyright (C) 2012,2013 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "sys_regs.h" + +static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) +{ + if (kvm_vcpu_has_pmu(vcpu)) + return vcpu->kvm->arch.dfr0_pmuver.imp; + + return vcpu->kvm->arch.dfr0_pmuver.unimp; +} + +static u8 perfmon_to_pmuver(u8 perfmon) +{ + switch (perfmon) { + case ID_DFR0_EL1_PerfMon_PMUv3: + return ID_AA64DFR0_EL1_PMUVer_IMP; + case ID_DFR0_EL1_PerfMon_IMPDEF: + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; + default: + /* Anything ARMv8.1+ and NI have the same value. For now.
*/ + return perfmon; + } +} + +static u8 pmuver_to_perfmon(u8 pmuver) +{ + switch (pmuver) { + case ID_AA64DFR0_EL1_PMUVer_IMP: + return ID_DFR0_EL1_PerfMon_PMUv3; + case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: + return ID_DFR0_EL1_PerfMon_IMPDEF; + default: + /* Anything ARMv8.1+ and NI have the same value. For now. */ + return pmuver; + } +} + +/* Read a sanitised cpufeature ID register by sys_reg_desc */ +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +{ + u32 id = reg_to_encoding(r); + u64 val; + + if (sysreg_visible_as_raz(vcpu, r)) + return 0; + + val = read_sanitised_ftr_reg(id); + + switch (id) { + case SYS_ID_AA64PFR0_EL1: + if (!vcpu_has_sve(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), + (u64)vcpu->kvm->arch.pfr0_csv2); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), + (u64)vcpu->kvm->arch.pfr0_csv3); + if (kvm_vgic_global_state.type == VGIC_V3) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); + } + break; + case SYS_ID_AA64PFR1_EL1: + if (!kvm_has_mte(vcpu->kvm)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + break; + case SYS_ID_AA64ISAR1_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | + ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); + break; + case SYS_ID_AA64ISAR2_EL1: + if (!vcpu_has_ptrauth(vcpu)) + val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); + if (!cpus_have_final_cap(ARM64_HAS_WFXT)) + val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); + break; + case SYS_ID_AA64DFR0_EL1: + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* Set PMUver to the required version */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + vcpu_pmuver(vcpu)); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + break; + case SYS_ID_DFR0_EL1: + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), + pmuver_to_perfmon(vcpu_pmuver(vcpu))); + break; + case SYS_ID_AA64MMFR2_EL1: + val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; + break; + case SYS_ID_MMFR4_EL1: + val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); + break; + } + + return val; +} + +/* cpufeature ID register access trap handlers */ + +static bool access_id_reg(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = read_id_reg(vcpu, r); + if (vcpu_has_nv(vcpu)) + access_nested_id_reg(vcpu, p, r); + + return true; +} + +/* + * cpufeature ID register user accessors + * + * For now, these registers are immutable for userspace, so no values + * are stored, and for set_id_reg() we don't allow the effective value + * to be changed. 
+ */ +static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 *val) +{ + *val = read_id_reg(vcpu, rd); + return 0; +} + +static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + /* This is what we mean by invariant: you can't change it. */ + if (val != read_id_reg(vcpu, rd)) + return -EINVAL; + + return 0; +} + +static unsigned int id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + u32 id = reg_to_encoding(r); + + switch (id) { + case SYS_ID_AA64ZFR0_EL1: + if (!vcpu_has_sve(vcpu)) + return REG_RAZ; + break; + } + + return 0; +} + +static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + /* + * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any + * EL. Promote to RAZ/WI in order to guarantee consistency between + * systems. + */ + if (!kvm_supports_32bit_el0()) + return REG_RAZ | REG_USER_WI; + + return id_visibility(vcpu, r); +} + +static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 csv2, csv3; + + /* + * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as + * it doesn't promise more than what is actually provided (the + * guest could otherwise be covered in ectoplasmic residue). + */ + csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); + if (csv2 > 1 || (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* Same thing for CSV3 */ + csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); + if (csv3 > 1 || (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) + return -EINVAL; + + /* We can only differ with CSV[23], and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | + ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); + if (val) + return -EINVAL; + + vcpu->kvm->arch.pfr0_csv2 = csv2; + vcpu->kvm->arch.pfr0_csv3 = csv3; + + return 0; +} + +static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 pmuver, host_pmuver; + bool valid_pmu; + + host_pmuver = kvm_arm_pmu_get_pmuver_limit(); + + /* + * Allow AA64DFR0_EL1.PMUver to be set from userspace as long + * as it doesn't promise more than what the HW gives us. We + * allow an IMPDEF PMU though, only if no PMU is supported + * (KVM backward compatibility handling). 
+ */ + pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); + if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) + return -EINVAL; + + valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PMUver, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; + else + vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + + return 0; +} + +static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + u8 perfmon, host_perfmon; + bool valid_pmu; + + host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); + + /* + * Allow DFR0_EL1.PerfMon to be set from userspace as long as + * it doesn't promise more than what the HW gives us on the + * AArch64 side (as everything is emulated with that), and + * that this is a PMUv3. + */ + perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); + if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || + (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) + return -EINVAL; + + valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); + + /* Make sure view register and PMU support do match */ + if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) + return -EINVAL; + + /* We can only differ with PerfMon, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + if (val) + return -EINVAL; + + if (valid_pmu) + vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); + else + vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + + return 0; +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + +/* sys_reg_desc initialiser for known cpufeature ID registers */ +#define AA32_ID_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ +} + +/* + * sys_reg_desc initialiser for architecturally unallocated cpufeature ID + * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 + * (1 <= crm < 8, 0 <= Op2 < 8). + */ +#define ID_UNALLOCATED(crm, op2) { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ +} + +/* + * sys_reg_desc initialiser for known ID registers that we hide from guests. + * For now, these are exposed just like unallocated ID regs: they appear + * RAZ for the guest. + */ +#define ID_HIDDEN(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ +} + +const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { + /* + * ID regs: all ID_SANITISED() entries here must have corresponding + * entries in arm64_ftr_regs[]. 
+ */ + + /* AArch64 mappings of the AArch32 ID registers */ + /* CRm=1 */ + AA32_ID_SANITISED(ID_PFR0_EL1), + AA32_ID_SANITISED(ID_PFR1_EL1), + { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + ID_HIDDEN(ID_AFR0_EL1), + AA32_ID_SANITISED(ID_MMFR0_EL1), + AA32_ID_SANITISED(ID_MMFR1_EL1), + AA32_ID_SANITISED(ID_MMFR2_EL1), + AA32_ID_SANITISED(ID_MMFR3_EL1), + + /* CRm=2 */ + AA32_ID_SANITISED(ID_ISAR0_EL1), + AA32_ID_SANITISED(ID_ISAR1_EL1), + AA32_ID_SANITISED(ID_ISAR2_EL1), + AA32_ID_SANITISED(ID_ISAR3_EL1), + AA32_ID_SANITISED(ID_ISAR4_EL1), + AA32_ID_SANITISED(ID_ISAR5_EL1), + AA32_ID_SANITISED(ID_MMFR4_EL1), + AA32_ID_SANITISED(ID_ISAR6_EL1), + + /* CRm=3 */ + AA32_ID_SANITISED(MVFR0_EL1), + AA32_ID_SANITISED(MVFR1_EL1), + AA32_ID_SANITISED(MVFR2_EL1), + ID_UNALLOCATED(3, 3), + AA32_ID_SANITISED(ID_PFR2_EL1), + ID_HIDDEN(ID_DFR1_EL1), + AA32_ID_SANITISED(ID_MMFR5_EL1), + ID_UNALLOCATED(3, 7), + + /* AArch64 ID registers */ + /* CRm=4 */ + { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + ID_SANITISED(ID_AA64PFR1_EL1), + ID_UNALLOCATED(4, 2), + ID_UNALLOCATED(4, 3), + ID_SANITISED(ID_AA64ZFR0_EL1), + ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_UNALLOCATED(4, 6), + ID_UNALLOCATED(4, 7), + + /* CRm=5 */ + { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, + .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + ID_SANITISED(ID_AA64DFR1_EL1), + ID_UNALLOCATED(5, 2), + ID_UNALLOCATED(5, 3), + ID_HIDDEN(ID_AA64AFR0_EL1), + ID_HIDDEN(ID_AA64AFR1_EL1), + ID_UNALLOCATED(5, 6), + ID_UNALLOCATED(5, 7), + + /* CRm=6 */ + ID_SANITISED(ID_AA64ISAR0_EL1), + ID_SANITISED(ID_AA64ISAR1_EL1), + ID_SANITISED(ID_AA64ISAR2_EL1), + ID_UNALLOCATED(6, 3), + ID_UNALLOCATED(6, 4), + ID_UNALLOCATED(6, 5), + ID_UNALLOCATED(6, 6), + ID_UNALLOCATED(6, 7), + + /* CRm=7 */ + ID_SANITISED(ID_AA64MMFR0_EL1), + ID_SANITISED(ID_AA64MMFR1_EL1), + ID_SANITISED(ID_AA64MMFR2_EL1), + ID_UNALLOCATED(7, 3), + ID_UNALLOCATED(7, 4), + ID_UNALLOCATED(7, 5), + ID_UNALLOCATED(7, 6), + ID_UNALLOCATED(7, 7), +}; + +/** + * emulate_id_reg - Emulate a guest access to an AArch64 CPU ID feature register + * @vcpu: The VCPU pointer + * @params: Decoded system register parameters + * + * Return: true if the ID register access was successful, false otherwise. 
+ */ +int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) +{ + const struct sys_reg_desc *r; + + r = find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + + if (likely(r)) { + perform_access(vcpu, params, r); + } else { + print_sys_reg_msg(params, + "Unsupported guest id_reg access at: %lx [%08lx]\n", + *vcpu_pc(vcpu), *vcpu_cpsr(vcpu)); + kvm_inject_undefined(vcpu); + } + + return 1; +} diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 34688918c811..8dba10696dac 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -53,9 +53,9 @@ static bool read_from_write_only(struct kvm_vcpu *vcpu, return false; } -static bool write_to_read_only(struct kvm_vcpu *vcpu, - struct sys_reg_params *params, - const struct sys_reg_desc *r) +bool write_to_read_only(struct kvm_vcpu *vcpu, + struct sys_reg_params *params, + const struct sys_reg_desc *r) { WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n"); print_sys_reg_instr(params); @@ -1168,163 +1168,11 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, return true; } -static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) -{ - if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; - - return vcpu->kvm->arch.dfr0_pmuver.unimp; -} - -static u8 perfmon_to_pmuver(u8 perfmon) -{ - switch (perfmon) { - case ID_DFR0_EL1_PerfMon_PMUv3: - return ID_AA64DFR0_EL1_PMUVer_IMP; - case ID_DFR0_EL1_PerfMon_IMPDEF: - return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. */ - return perfmon; - } -} - -static u8 pmuver_to_perfmon(u8 pmuver) -{ - switch (pmuver) { - case ID_AA64DFR0_EL1_PMUVer_IMP: - return ID_DFR0_EL1_PerfMon_PMUv3; - case ID_AA64DFR0_EL1_PMUVer_IMP_DEF: - return ID_DFR0_EL1_PerfMon_IMPDEF; - default: - /* Anything ARMv8.1+ and NI have the same value. For now. 
*/ - return pmuver; - } -} - -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) -{ - u32 id = reg_to_encoding(r); - u64 val; - - if (sysreg_visible_as_raz(vcpu, r)) - return 0; - - val = read_sanitised_ftr_reg(id); - - switch (id) { - case SYS_ID_AA64PFR0_EL1: - if (!vcpu_has_sve(vcpu)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); - if (kvm_vgic_global_state.type == VGIC_V3) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); - } - break; - case SYS_ID_AA64PFR1_EL1: - if (!kvm_has_mte(vcpu->kvm)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE); - - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); - break; - case SYS_ID_AA64ISAR1_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI)); - break; - case SYS_ID_AA64ISAR2_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); - if (!cpus_have_final_cap(ARM64_HAS_WFXT)) - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); - break; - case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); - /* Set PMUver to the required version */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); - break; - case SYS_ID_DFR0_EL1: - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), - pmuver_to_perfmon(vcpu_pmuver(vcpu))); - break; - case SYS_ID_AA64MMFR2_EL1: - val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; - break; - case SYS_ID_MMFR4_EL1: - val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX); - break; - } - - return val; -} - -static unsigned int id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - u32 id = reg_to_encoding(r); - - switch (id) { - case SYS_ID_AA64ZFR0_EL1: - if (!vcpu_has_sve(vcpu)) - return REG_RAZ; - break; - } - - return 0; -} - -static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) -{ - /* - * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any - * EL. Promote to RAZ/WI in order to guarantee consistency between - * systems. 
- */ - if (!kvm_supports_32bit_el0()) - return REG_RAZ | REG_USER_WI; - - return id_visibility(vcpu, r); -} - -static unsigned int raz_visibility(const struct kvm_vcpu *vcpu, - const struct sys_reg_desc *r) +unsigned int raz_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { return REG_RAZ; } -/* cpufeature ID register access trap handlers */ - -static bool access_id_reg(struct kvm_vcpu *vcpu, - struct sys_reg_params *p, - const struct sys_reg_desc *r) -{ - if (p->is_write) - return write_to_read_only(vcpu, p, r); - - p->regval = read_id_reg(vcpu, r); - if (vcpu_has_nv(vcpu)) - access_nested_id_reg(vcpu, p, r); - - return true; -} - /* Visibility overrides for SVE-specific control registers */ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) @@ -1335,144 +1183,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 csv2, csv3; - - /* - * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as - * it doesn't promise more than what is actually provided (the - * guest could otherwise be covered in ectoplasmic residue). - */ - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* Same thing for CSV3 */ - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; - - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; - - return 0; -} - -static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 pmuver, host_pmuver; - bool valid_pmu; - - host_pmuver = kvm_arm_pmu_get_pmuver_limit(); - - /* - * Allow AA64DFR0_EL1.PMUver to be set from userspace as long - * as it doesn't promise more than what the HW gives us. We - * allow an IMPDEF PMU though, only if no PMU is supported - * (KVM backward compatibility handling). - */ - pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val); - if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver)) - return -EINVAL; - - valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; - - return 0; -} - -static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - u64 val) -{ - u8 perfmon, host_perfmon; - bool valid_pmu; - - host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); - - /* - * Allow DFR0_EL1.PerfMon to be set from userspace as long as - * it doesn't promise more than what the HW gives us on the - * AArch64 side (as everything is emulated with that), and - * that this is a PMUv3. 
- */ - perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val); - if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) || - (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)) - return -EINVAL; - - valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF); - - /* Make sure view register and PMU support do match */ - if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) - return -EINVAL; - - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); - - return 0; -} - -/* - * cpufeature ID register user accessors - * - * For now, these registers are immutable for userspace, so no values - * are stored, and for set_id_reg() we don't allow the effective value - * to be changed. - */ -static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 *val) -{ - *val = read_id_reg(vcpu, rd); - return 0; -} - -static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, - u64 val) -{ - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; - - return 0; -} - static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 *val) { @@ -1657,50 +1367,6 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, .visibility = elx2_visibility, \ } -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ -} - -/* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ -} - -/* - * sys_reg_desc initialiser for architecturally unallocated cpufeature ID - * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 - * (1 <= crm < 8, 0 <= Op2 < 8). - */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ -} - -/* - * sys_reg_desc initialiser for known ID registers that we hide from guests. - * For now, these are exposed just like unallocated ID regs: they appear - * RAZ for the guest. - */ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ -} - static bool access_sp_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) @@ -1737,6 +1403,8 @@ static bool access_spsr(struct kvm_vcpu *vcpu, return true; } +extern const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM]; + /* * Architected system registers. * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 @@ -1791,87 +1459,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 }, - /* - * ID regs: all ID_SANITISED() entries here must have corresponding - * entries in arm64_ftr_regs[]. 
- */ - - /* AArch64 mappings of the AArch32 ID registers */ - /* CRm=1 */ - AA32_ID_SANITISED(ID_PFR0_EL1), - AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, - ID_HIDDEN(ID_AFR0_EL1), - AA32_ID_SANITISED(ID_MMFR0_EL1), - AA32_ID_SANITISED(ID_MMFR1_EL1), - AA32_ID_SANITISED(ID_MMFR2_EL1), - AA32_ID_SANITISED(ID_MMFR3_EL1), - - /* CRm=2 */ - AA32_ID_SANITISED(ID_ISAR0_EL1), - AA32_ID_SANITISED(ID_ISAR1_EL1), - AA32_ID_SANITISED(ID_ISAR2_EL1), - AA32_ID_SANITISED(ID_ISAR3_EL1), - AA32_ID_SANITISED(ID_ISAR4_EL1), - AA32_ID_SANITISED(ID_ISAR5_EL1), - AA32_ID_SANITISED(ID_MMFR4_EL1), - AA32_ID_SANITISED(ID_ISAR6_EL1), - - /* CRm=3 */ - AA32_ID_SANITISED(MVFR0_EL1), - AA32_ID_SANITISED(MVFR1_EL1), - AA32_ID_SANITISED(MVFR2_EL1), - ID_UNALLOCATED(3,3), - AA32_ID_SANITISED(ID_PFR2_EL1), - ID_HIDDEN(ID_DFR1_EL1), - AA32_ID_SANITISED(ID_MMFR5_EL1), - ID_UNALLOCATED(3,7), - - /* AArch64 ID registers */ - /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, - ID_SANITISED(ID_AA64PFR1_EL1), - ID_UNALLOCATED(4,2), - ID_UNALLOCATED(4,3), - ID_SANITISED(ID_AA64ZFR0_EL1), - ID_HIDDEN(ID_AA64SMFR0_EL1), - ID_UNALLOCATED(4,6), - ID_UNALLOCATED(4,7), - - /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, - ID_SANITISED(ID_AA64DFR1_EL1), - ID_UNALLOCATED(5,2), - ID_UNALLOCATED(5,3), - ID_HIDDEN(ID_AA64AFR0_EL1), - ID_HIDDEN(ID_AA64AFR1_EL1), - ID_UNALLOCATED(5,6), - ID_UNALLOCATED(5,7), - - /* CRm=6 */ - ID_SANITISED(ID_AA64ISAR0_EL1), - ID_SANITISED(ID_AA64ISAR1_EL1), - ID_SANITISED(ID_AA64ISAR2_EL1), - ID_UNALLOCATED(6,3), - ID_UNALLOCATED(6,4), - ID_UNALLOCATED(6,5), - ID_UNALLOCATED(6,6), - ID_UNALLOCATED(6,7), - - /* CRm=7 */ - ID_SANITISED(ID_AA64MMFR0_EL1), - ID_SANITISED(ID_AA64MMFR1_EL1), - ID_SANITISED(ID_AA64MMFR2_EL1), - ID_UNALLOCATED(7,3), - ID_UNALLOCATED(7,4), - ID_UNALLOCATED(7,5), - ID_UNALLOCATED(7,6), - ID_UNALLOCATED(7,7), - { SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 }, { SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 }, { SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 }, @@ -2573,7 +2160,7 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu) return 1; } -static void perform_access(struct kvm_vcpu *vcpu, +void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params, const struct sys_reg_desc *r) { @@ -2948,6 +2535,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu) params = esr_sys64_to_params(esr); params.regval = vcpu_get_reg(vcpu, Rt); + if (is_id_reg(reg_to_encoding(¶ms))) + return emulate_id_reg(vcpu, ¶ms); + if (!emulate_sys_reg(vcpu, ¶ms)) return 1; @@ -3167,6 +2757,7 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { void __user *uaddr = (void __user *)(unsigned long)reg->addr; + struct sys_reg_params params; int err; if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX) @@ -3176,6 +2767,13 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + if (is_id_reg(reg_to_encoding(¶ms))) + return kvm_sys_reg_get_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); + return 
kvm_sys_reg_get_user(vcpu, reg, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } @@ -3211,6 +2809,7 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { void __user *uaddr = (void __user *)(unsigned long)reg->addr; + struct sys_reg_params params; int err; if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX) @@ -3220,6 +2819,13 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (err != -ENOENT) return err; + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + + if (is_id_reg(reg_to_encoding(¶ms))) + return kvm_sys_reg_set_user(vcpu, reg, + id_reg_descs, ARRAY_SIZE(id_reg_descs)); + return kvm_sys_reg_set_user(vcpu, reg, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } @@ -3289,14 +2895,15 @@ static int walk_one_sys_reg(const struct kvm_vcpu *vcpu, } /* Assumed ordered tables, see kvm_sys_reg_table_init. */ -static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind) +static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind, + const struct sys_reg_desc table[], unsigned int num) { const struct sys_reg_desc *i2, *end2; unsigned int total = 0; int err; - i2 = sys_reg_descs; - end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs); + i2 = table; + end2 = table + num; while (i2 != end2) { err = walk_one_sys_reg(vcpu, i2++, &uind, &total); @@ -3310,7 +2917,8 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu) { return ARRAY_SIZE(invariant_sys_regs) + num_demux_regs() - + walk_sys_regs(vcpu, (u64 __user *)NULL); + + walk_sys_regs(vcpu, (u64 __user *)NULL, id_reg_descs, ARRAY_SIZE(id_reg_descs)) + + walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); } int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) @@ -3325,7 +2933,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) uindices++; } - err = walk_sys_regs(vcpu, uindices); + err = walk_sys_regs(vcpu, uindices, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + if (err < 0) + return err; + uindices += err; + + err = walk_sys_regs(vcpu, uindices, sys_reg_descs, ARRAY_SIZE(sys_reg_descs)); if (err < 0) return err; uindices += err; @@ -3339,6 +2952,7 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. */ + valid &= check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 6b11f2cc7146..7ce546a8be60 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -210,6 +210,19 @@ find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[], return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg); } +/* + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8. 
+ */ +static inline bool is_id_reg(u32 id) +{ + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && + sys_reg_CRm(id) < 8); +} + +void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params, + const struct sys_reg_desc *r); const struct sys_reg_desc *get_reg_by_id(u64 id, const struct sys_reg_desc table[], unsigned int num); @@ -220,6 +233,10 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, const struct sys_reg_desc table[], unsigned int num); int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, const struct sys_reg_desc table[], unsigned int num); +bool write_to_read_only(struct kvm_vcpu *vcpu, + struct sys_reg_params *params, const struct sys_reg_desc *r); +unsigned int raz_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r); +int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params); #define AA32(_x) .aarch32_map = AA32_##_x #define Op0(_x) .Op0 = _x @@ -234,4 +251,6 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)), \ Op2(sys_reg_Op2(reg)) +#define KVM_ARM_ID_REG_NUM 56 + #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */

From patchwork Mon Apr 24 23:47:00 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13222724
Date: Mon, 24 Apr 2023 23:47:00 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-3-jingzhangos@google.com>
Subject: [PATCH v7 2/6] KVM: arm64: Save ID registers' sanitized value per guest
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang

Introduce id_regs[] in kvm_arch as storage for the guest's ID registers, and save each ID register's sanitized value in the array at KVM_CREATE_VM. Use the saved values when ID registers are read by the guest or by userspace (via KVM_GET_ONE_REG). No functional change intended.
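For illustration only, and not part of the patch: a minimal stand-alone sketch of the index math behind the 56-entry array introduced here. It assumes the IDREG_IDX layout quoted in the diff below, but takes the CRm and Op2 fields directly as parameters rather than decoding them from a full sysreg encoding.

#include <assert.h>
#include <stdio.h>

/* Mirrors the kernel's layout: CRm in 1..7 and Op2 in 0..7 give 7 * 8 = 56 slots. */
#define KVM_ARM_ID_REG_NUM	56
#define IDREG_IDX(crm, op2)	((((crm) - 1) << 3) | (op2))

int main(void)
{
	/* ID_AA64PFR0_EL1 is encoded as (Op0=3, Op1=0, CRn=0, CRm=4, Op2=0). */
	unsigned int crm = 4, op2 = 0;

	/* Same range check that is_id_reg() performs. */
	assert(crm >= 1 && crm < 8 && op2 < 8);
	printf("ID_AA64PFR0_EL1 -> idregs.regs[%u]\n", IDREG_IDX(crm, op2)); /* prints 24 */

	/* Highest slot: CRm=7, Op2=7 lands on index 55, hence the array size of 56. */
	assert(IDREG_IDX(7, 7) == KVM_ARM_ID_REG_NUM - 1);
	return 0;
}

The CRm=0 group (MPIDR_EL1 and friends) is deliberately excluded by is_id_reg() and stays in the ordinary sys_regs table.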
Co-developed-by: Reiji Watanabe Signed-off-by: Reiji Watanabe Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 47 +++++++++++++++++++++++++++++ arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/id_regs.c | 49 +++++++++++++++++++++++++------ arch/arm64/kvm/sys_regs.c | 2 +- arch/arm64/kvm/sys_regs.h | 3 +- 5 files changed, 90 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bcd774d74f34..2b1fe90a1790 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -177,6 +177,20 @@ struct kvm_smccc_features { unsigned long vendor_hyp_bmap; }; +/* + * Emulated CPU ID registers per VM + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8. + * + * These emulated idregs are VM-wide, but accessed from the context of a vCPU. + * Access to the id regs is guarded by kvm_arch.config_lock. + */ +#define KVM_ARM_ID_REG_NUM 56 +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id)) +struct kvm_idregs { + u64 regs[KVM_ARM_ID_REG_NUM]; +}; + typedef unsigned int pkvm_handle_t; struct kvm_protected_vm { @@ -243,6 +257,9 @@ struct kvm_arch { /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; + /* Emulated CPU ID registers */ + struct kvm_idregs idregs; + /* * For an untrusted host VM, 'pkvm.handle' is used to lookup * the associated pKVM instance in the hypervisor. @@ -1008,6 +1025,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); +void kvm_arm_init_id_regs(struct kvm *kvm); + /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); @@ -1073,4 +1092,32 @@ static inline void kvm_hyp_reserve(void) { } void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu); bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu); +static inline u64 _idreg_read(struct kvm_arch *arch, u32 id) +{ + return arch->idregs.regs[IDREG_IDX(id)]; +} + +static inline void _idreg_write(struct kvm_arch *arch, u32 id, u64 val) +{ + arch->idregs.regs[IDREG_IDX(id)] = val; +} + +static inline u64 idreg_read(struct kvm_arch *arch, u32 id) +{ + u64 val; + + mutex_lock(&arch->config_lock); + val = _idreg_read(arch, id); + mutex_unlock(&arch->config_lock); + + return val; +} + +static inline void idreg_write(struct kvm_arch *arch, u32 id, u64 val) +{ + mutex_lock(&arch->config_lock); + _idreg_write(arch, id, val); + mutex_unlock(&arch->config_lock); +} + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 4b2e16e696a8..e34744c36406 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -153,6 +153,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) set_default_spectre(kvm); kvm_arm_init_hypercalls(kvm); + kvm_arm_init_id_regs(kvm); /* * Initialise the default PMUver before there is a chance to diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 96b4c43a5100..d2fba2fde01c 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -52,16 +52,9 @@ static u8 pmuver_to_perfmon(u8 pmuver) } } -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) { - u32 id = reg_to_encoding(r); - u64 val; - - if
(sysreg_visible_as_raz(vcpu, r)) - return 0; - - val = read_sanitised_ftr_reg(id); + u64 val = idreg_read(&vcpu->kvm->arch, id); switch (id) { case SYS_ID_AA64PFR0_EL1: @@ -126,6 +119,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r return val; } +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r) +{ + if (sysreg_visible_as_raz(vcpu, r)) + return 0; + + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r)); +} + /* cpufeature ID register access trap handlers */ static bool access_id_reg(struct kvm_vcpu *vcpu, @@ -458,3 +459,33 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) return 1; } + +/* + * Set the guest's ID registers that are defined in id_reg_descs[] + * with ID_SANITISED() to the host's sanitized value. + */ +void kvm_arm_init_id_regs(struct kvm *kvm) +{ + int i; + u32 id; + u64 val; + + for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { + id = reg_to_encoding(&id_reg_descs[i]); + if (WARN_ON_ONCE(!is_id_reg(id))) + /* Shouldn't happen */ + continue; + + /* + * Some hidden ID registers which are not in arm64_ftr_regs[] + * would cause warnings from read_sanitised_ftr_reg(). + * Skip those ID registers to avoid the warnings. + */ + if (id_reg_descs[i].visibility == raz_visibility) + /* Hidden or reserved ID register */ + continue; + + val = read_sanitised_ftr_reg(id); + idreg_write(&kvm->arch, id, val); + } } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 8dba10696dac..6370a0db9d26 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -364,7 +364,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1); u32 sr = reg_to_encoding(r); if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) { diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 7ce546a8be60..e88fd77309b2 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -237,6 +237,7 @@ bool write_to_read_only(struct kvm_vcpu *vcpu, struct sys_reg_params *params, const struct sys_reg_desc *r); unsigned int raz_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r); int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params); +u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id); #define AA32(_x) .aarch32_map = AA32_##_x #define Op0(_x) .Op0 = _x @@ -251,6 +252,4 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params); CRn(sys_reg_CRn(reg)), CRm(sys_reg_CRm(reg)), \ Op2(sys_reg_Op2(reg)) -#define KVM_ARM_ID_REG_NUM 56 - #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */

From patchwork Mon Apr 24 23:47:01 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13222727
Date: Mon, 24 Apr 2023 23:47:01 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-4-jingzhangos@google.com>
Subject: [PATCH v7 3/6] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang

With per-guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings from userspace can be stored in the corresponding ID register. The setting of the CSV bits for protected VMs is removed, following the discussion with Fuad below:
https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com
Besides the removal of the CSV bits setting for protected VMs, no other functional change is intended.

Signed-off-by: Jing Zhang
--- arch/arm64/include/asm/kvm_host.h | 2 -- arch/arm64/kvm/arm.c | 17 --------------- arch/arm64/kvm/id_regs.c | 35 ++++++++++++++++++++++++------- 3 files changed, 27 insertions(+), 27 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 2b1fe90a1790..0c719c34f5b4 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -247,8 +247,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - u8 pfr0_csv2; - u8 pfr0_csv3; struct { u8 imp:4; u8 unimp:4; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index e34744c36406..0f71b10a2f05 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -104,22 +104,6 @@ static int kvm_arm_default_max_vcpus(void) return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS; } -static void set_default_spectre(struct kvm *kvm) -{ - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible.
- */ - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv2 = 1; - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) - kvm->arch.pfr0_csv3 = 1; -} - /** * kvm_arch_init_vm - initializes a VM data structure * @kvm: pointer to the KVM struct @@ -151,7 +135,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - set_default_spectre(kvm); kvm_arm_init_hypercalls(kvm); kvm_arm_init_id_regs(kvm); diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index d2fba2fde01c..18c39af3e319 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -61,12 +61,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), - (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), - (u64)vcpu->kvm->arch.pfr0_csv3); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -201,6 +195,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, u64 val) { u8 csv2, csv3; + u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -223,8 +218,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3; + idreg_write(&vcpu->kvm->arch, reg_to_encoding(rd), sval); return 0; } @@ -488,4 +482,29 @@ void kvm_arm_init_id_regs(struct kvm *kvm) val = read_sanitised_ftr_reg(id); idreg_write(&kvm->arch, id, val); } + + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
+ */ + mutex_lock(&kvm->arch.config_lock); + + val = _idreg_read(&kvm->arch, SYS_ID_AA64PFR0_EL1); + + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + _idreg_write(&kvm->arch, SYS_ID_AA64PFR0_EL1, val); + + mutex_unlock(&kvm->arch.config_lock); }
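The clear-then-set idiom above (ARM64_FEATURE_MASK to wipe a 4-bit ID field, FIELD_PREP to install the new value, all under config_lock) recurs throughout the series. The fragment below is a minimal user-space sketch of that arithmetic, assuming hand-rolled stand-ins for the kernel helpers and taking the CSV2/CSV3 positions (bits [59:56] and [63:60] of ID_AA64PFR0_EL1) from the Arm ARM; it is an illustration, not kernel code.

/* demo-csv-fields.c: user-space sketch of the clear-then-set idiom.
 * FIELD_MASK/FIELD_SET/FIELD_GET are hypothetical stand-ins for the
 * kernel's ARM64_FEATURE_MASK/FIELD_PREP/FIELD_GET macros. */
#include <stdint.h>
#include <stdio.h>

#define CSV2_SHIFT	56	/* ID_AA64PFR0_EL1.CSV2, bits [59:56] */
#define CSV3_SHIFT	60	/* ID_AA64PFR0_EL1.CSV3, bits [63:60] */
#define FIELD_MASK(shift)	(0xfULL << (shift))
#define FIELD_SET(shift, v)	(((uint64_t)(v) & 0xf) << (shift))
#define FIELD_GET(shift, reg)	(((reg) >> (shift)) & 0xf)

int main(void)
{
	uint64_t pfr0 = 0;	/* stands in for the guest's ID_AA64PFR0_EL1 copy */

	/* Default CSV2/CSV3 to 1, as done when the host is unaffected. */
	pfr0 &= ~FIELD_MASK(CSV2_SHIFT);
	pfr0 |= FIELD_SET(CSV2_SHIFT, 1);
	pfr0 &= ~FIELD_MASK(CSV3_SHIFT);
	pfr0 |= FIELD_SET(CSV3_SHIFT, 1);

	printf("CSV2=%llu CSV3=%llu pfr0=%#llx\n",
	       (unsigned long long)FIELD_GET(CSV2_SHIFT, pfr0),
	       (unsigned long long)FIELD_GET(CSV3_SHIFT, pfr0),
	       (unsigned long long)pfr0);
	return 0;
}

Built with any C compiler, this prints CSV2=1 CSV3=1 pfr0=0x1100000000000000, the same bit pattern the kernel code above would leave in the per-guest register.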
From patchwork Mon Apr 24 23:47:02 2023
Date: Mon, 24 Apr 2023 23:47:02 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-5-jingzhangos@google.com>
Subject: [PATCH v7 4/6] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
From: Jing Zhang
To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton
Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Raghavendra Rao Ananta , Jing Zhang

With per guest ID registers, the PMUver settings from userspace can be stored in the corresponding ID register. No functional change intended. Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 11 +++--- arch/arm64/kvm/arm.c | 6 --- arch/arm64/kvm/id_regs.c | 66 +++++++++++++++++++++++++------ include/kvm/arm_pmu.h | 5 ++- 4 files changed, 63 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0c719c34f5b4..3b583b055d07 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -235,6 +235,12 @@ struct kvm_arch { #define KVM_ARCH_FLAG_EL1_32BIT 4 /* PSCI SYSTEM_SUSPEND enabled for the guest */ #define KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED 5 + /* + * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF + * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from + * userspace for VCPUs without PMU. + */ +#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU 6 unsigned long flags; @@ -247,11 +253,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - struct { - u8 imp:4; - u8 unimp:4; - } dfr0_pmuver; - /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 0f71b10a2f05..9ecd0c5d0754 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -138,12 +138,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_arm_init_hypercalls(kvm); kvm_arm_init_id_regs(kvm); - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU.
- */ - kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); - return 0; err_free_cpumask: diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 18c39af3e319..15f79a654be8 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -21,9 +21,12 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1)); + else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags)) + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; - return vcpu->kvm->arch.dfr0_pmuver.unimp; + return 0; } static u8 perfmon_to_pmuver(u8 perfmon) @@ -254,10 +257,24 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + if (valid_pmu) { + mutex_lock(&vcpu->kvm->arch.config_lock); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1); + val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; + val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver); + _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_DFR0_EL1); + val &= ~ID_DFR0_EL1_PerfMon_MASK; + val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver)); + _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + + mutex_unlock(&vcpu->kvm->arch.config_lock); + } else { + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, + pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); + } return 0; } @@ -294,10 +311,24 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + if (valid_pmu) { + mutex_lock(&vcpu->kvm->arch.config_lock); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_DFR0_EL1); + val &= ~ID_DFR0_EL1_PerfMon_MASK; + val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon); + _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1); + val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; + val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon)); + _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + + mutex_unlock(&vcpu->kvm->arch.config_lock); + } else { + assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, + perfmon == ID_DFR0_EL1_PerfMon_IMPDEF); + } return 0; } @@ -483,6 +514,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm) idreg_write(&kvm->arch, id, val); } + mutex_lock(&kvm->arch.config_lock); /* * The default is to expose CSV2 == 1 if the HW isn't affected. * Although this is a per-CPU feature, we make it global because @@ -491,8 +523,6 @@ void kvm_arm_init_id_regs(struct kvm *kvm) * Userspace can override this as long as it doesn't promise * the impossible. */ - mutex_lock(&kvm->arch.config_lock); - val = _idreg_read(&kvm->arch, SYS_ID_AA64PFR0_EL1); if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { @@ -506,5 +536,17 @@ void kvm_arm_init_id_regs(struct kvm *kvm) _idreg_write(&kvm->arch, SYS_ID_AA64PFR0_EL1, val); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. 
+ */ + val = _idreg_read(&kvm->arch, SYS_ID_AA64DFR0_EL1); + + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + + _idreg_write(&kvm->arch, SYS_ID_AA64DFR0_EL1, val); + mutex_unlock(&kvm->arch.config_lock); } diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 628775334d5e..e850432f8c09 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); /* * Evaluates as true when emulating PMUv3p5, and false otherwise. */ -#define kvm_pmu_is_3p5(vcpu) \ (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5) +#define kvm_pmu_is_3p5(vcpu) \ + (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), \ idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5) u8 kvm_arm_pmu_get_pmuver_limit(void);
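Under this scheme kvm_pmu_is_3p5() becomes a plain FIELD_GET of the guest's ID_AA64DFR0_EL1 copy. The fragment below is a standalone sketch of that comparison, assuming the architectural field position (PMUVer at bits [11:8]) and encodings (PMUv3p5 as 6, IMP_DEF as 0xf) and using local helpers rather than the kernel macros:

/* demo-pmuver.c: user-space sketch of the comparison behind kvm_pmu_is_3p5().
 * Field position and encodings follow the Arm ARM; everything else here is
 * a hypothetical stand-in, not kernel code. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PMUVER_SHIFT	8			/* ID_AA64DFR0_EL1.PMUVer, bits [11:8] */
#define PMUVER_MASK	(0xfULL << PMUVER_SHIFT)
#define PMUVER_V3P5	6
#define PMUVER_IMP_DEF	0xf

static uint8_t pmuver(uint64_t dfr0)
{
	return (dfr0 & PMUVER_MASK) >> PMUVER_SHIFT;
}

static bool pmu_is_3p5(uint64_t dfr0)
{
	/* Note the unsigned compare: an IMP_DEF value (0xf) stored here
	 * would wrongly read as ">= V3P5", which is one reason the series
	 * keeps IMP_DEF for PMU-less vCPUs in a separate arch flag instead
	 * of in the register copy. */
	return pmuver(dfr0) >= PMUVER_V3P5;
}

int main(void)
{
	uint64_t dfr0 = (uint64_t)PMUVER_V3P5 << PMUVER_SHIFT;

	printf("PMUVer=%u is_3p5=%d\n", pmuver(dfr0), pmu_is_3p5(dfr0));
	return 0;
}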
From patchwork Mon Apr 24 23:47:03 2023
Date: Mon, 24 Apr 2023 23:47:03 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-6-jingzhangos@google.com>
Subject: [PATCH v7 5/6] KVM: arm64: Reuse fields of sys_reg_desc for idreg
From: Jing Zhang
To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton
Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Raghavendra Rao Ananta , Jing Zhang

Since reset() and val in sys_reg_desc are not used for ID registers, repurpose them for idregs: the reset() callback now returns the KVM sanitised ID register value, and the u64 val is used as a mask of the writable fields; only bits set to 1 in val are writable from userspace. Signed-off-by: Jing Zhang --- arch/arm64/kvm/id_regs.c | 44 +++++++++++++++++++---------- arch/arm64/kvm/sys_regs.c | 59 +++++++++++++++++++++++++++------------ arch/arm64/kvm/sys_regs.h | 10 ++++--- 3 files changed, 77 insertions(+), 36 deletions(-) diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 15f79a654be8..a5fabbb04ceb 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -55,6 +55,11 @@ static u8 pmuver_to_perfmon(u8 pmuver) } } +static u64 general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) +{ + return read_sanitised_ftr_reg(reg_to_encoding(rd)); +} + u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) { u64 val = idreg_read(&vcpu->kvm->arch, id); @@ -333,6 +338,17 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, return 0; } +/* + * Since reset() callback and field val are not used for idregs, they will be + * used for specific purposes for idregs. + * The reset() would return KVM sanitised register value.
The value would be the + * same as the host kernel sanitised value if there is no KVM sanitisation. + * The val would be used as a mask indicating writable fields for the idreg. + * Only bits with 1 are writable from userspace. This mask might not be + * necessary in the future whenever all ID registers are enabled as writable + * from userspace. + */ + /* sys_reg_desc initialiser for known cpufeature ID registers */ #define ID_SANITISED(name) { \ SYS_DESC(SYS_##name), \ @@ -340,6 +356,8 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = id_visibility, \ + .reset = general_read_kvm_sanitised_reg,\ + .val = 0, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ @@ -349,6 +367,8 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = aa32_id_visibility, \ + .reset = general_read_kvm_sanitised_reg,\ + .val = 0, \ } /* @@ -361,7 +381,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .access = access_id_reg, \ .get_user = get_id_reg, \ .set_user = set_id_reg, \ - .visibility = raz_visibility \ + .visibility = raz_visibility, \ + .reset = NULL, \ + .val = 0, \ } /* @@ -375,6 +397,8 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .get_user = get_id_reg, \ .set_user = set_id_reg, \ .visibility = raz_visibility, \ + .reset = NULL, \ + .val = 0, \ } const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { @@ -485,10 +509,7 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) return 1; } -/* - * Set the guest's ID registers that are defined in id_reg_descs[] - * with ID_SANITISED() to the host's sanitized value. - */ +/* Initialize the guest's ID registers with KVM sanitised values. */ void kvm_arm_init_id_regs(struct kvm *kvm) { int i; @@ -501,16 +522,11 @@ void kvm_arm_init_id_regs(struct kvm *kvm) /* Shouldn't happen */ continue; - /* - * Some hidden ID registers which are not in arm64_ftr_regs[] - * would cause warnings from read_sanitised_ftr_reg(). - * Skip those ID registers to avoid the warnings. 
- */ - if (id_reg_descs[i].visibility == raz_visibility) - /* Hidden or reserved ID register */ - continue; + val = 0; + /* Read KVM sanitised register value if available */ + if (id_reg_descs[i].reset) + val = id_reg_descs[i].reset(NULL, &id_reg_descs[i]); - val = read_sanitised_ftr_reg(id); idreg_write(&kvm->arch, id, val); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 6370a0db9d26..032258496f71 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -540,10 +540,11 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, return 0; } -static void reset_bvr(struct kvm_vcpu *vcpu, +static u64 reset_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val; + return rd->val; } static bool trap_bcr(struct kvm_vcpu *vcpu, @@ -576,10 +577,11 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, return 0; } -static void reset_bcr(struct kvm_vcpu *vcpu, +static u64 reset_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val; + return rd->val; } static bool trap_wvr(struct kvm_vcpu *vcpu, @@ -613,10 +615,11 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, return 0; } -static void reset_wvr(struct kvm_vcpu *vcpu, +static u64 reset_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val; + return rd->val; } static bool trap_wcr(struct kvm_vcpu *vcpu, @@ -649,25 +652,28 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, return 0; } -static void reset_wcr(struct kvm_vcpu *vcpu, +static u64 reset_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val; + return rd->val; } -static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 amair = read_sysreg(amair_el1); vcpu_write_sys_reg(vcpu, amair, AMAIR_EL1); + return amair; } -static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 actlr = read_sysreg(actlr_el1); vcpu_write_sys_reg(vcpu, actlr, ACTLR_EL1); + return actlr; } -static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 mpidr; @@ -681,7 +687,10 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0); mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1); mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2); - vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1); + mpidr |= (1ULL << 31); + vcpu_write_sys_reg(vcpu, mpidr, MPIDR_EL1); + + return mpidr; } static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, @@ -693,13 +702,13 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX); /* No PMU available, any PMU reg may UNDEF... 
*/ if (!kvm_arm_support_pmu_v3()) - return; + return 0; n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT; n &= ARMV8_PMU_PMCR_N_MASK; @@ -708,33 +717,41 @@ static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= mask; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0); + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { reset_unknown(vcpu, r); __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK; + + return __vcpu_sys_reg(vcpu, r->reg); } -static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 pmcr; /* No PMU available, PMCR_EL0 may UNDEF... */ if (!kvm_arm_support_pmu_v3()) - return; + return 0; /* Only preserve PMCR_EL0.N, and reset the rest to 0 */ pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); @@ -742,6 +759,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) pmcr |= ARMV8_PMU_PMCR_LC; __vcpu_sys_reg(vcpu, r->reg) = pmcr; + + return __vcpu_sys_reg(vcpu, r->reg); } static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) @@ -1220,7 +1239,7 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, * Fabricate a CLIDR_EL1 value instead of using the real value, which can vary * by the physical CPU which the vcpu currently resides in. 
*/ -static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0); u64 clidr; @@ -1268,6 +1287,8 @@ static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) clidr |= 2 << CLIDR_TTYPE_SHIFT(loc); __vcpu_sys_reg(vcpu, r->reg) = clidr; + + return __vcpu_sys_reg(vcpu, r->reg); } static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, @@ -2621,19 +2642,21 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id, */ #define FUNCTION_INVARIANT(reg) \ - static void get_##reg(struct kvm_vcpu *v, \ + static u64 get_##reg(struct kvm_vcpu *v, \ const struct sys_reg_desc *r) \ { \ ((struct sys_reg_desc *)r)->val = read_sysreg(reg); \ + return ((struct sys_reg_desc *)r)->val; \ } FUNCTION_INVARIANT(midr_el1) FUNCTION_INVARIANT(revidr_el1) FUNCTION_INVARIANT(aidr_el1) -static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r) +static u64 get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r) { ((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0); + return ((struct sys_reg_desc *)r)->val; } /* ->val is filled in by kvm_sys_reg_table_init() */ diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index e88fd77309b2..21869319f6e1 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -65,12 +65,12 @@ struct sys_reg_desc { const struct sys_reg_desc *); /* Initialization for vcpu. */ - void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *); + u64 (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *); /* Index into sys_reg[], or 0 if we don't need to save it. */ int reg; - /* Value (usually reset value) */ + /* Value (usually reset value), or write mask for idregs */ u64 val; /* Custom get/set_user functions, fallback to generic if NULL */ @@ -123,19 +123,21 @@ static inline bool read_zero(struct kvm_vcpu *vcpu, } /* Reset functions */ -static inline void reset_unknown(struct kvm_vcpu *vcpu, +static inline u64 reset_unknown(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { BUG_ON(!r->reg); BUG_ON(r->reg >= NR_SYS_REGS); __vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL; + return __vcpu_sys_reg(vcpu, r->reg); } -static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +static inline u64 reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { BUG_ON(!r->reg); BUG_ON(r->reg >= NR_SYS_REGS); __vcpu_sys_reg(vcpu, r->reg) = r->val; + return __vcpu_sys_reg(vcpu, r->reg); } static inline unsigned int sysreg_visibility(const struct kvm_vcpu *vcpu,
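The convention this patch establishes is easy to restate: a descriptor's reset() produces the register's KVM sanitised limit (a NULL reset() meaning the register initialises to 0), and val marks which bits userspace may change. The sketch below models that contract with made-up types and values; it is a standalone approximation, not the kernel's sys_reg_desc:

/* demo-desc.c: user-space model of the repurposed sys_reg_desc fields.
 * All names and values here are hypothetical stand-ins. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct demo_desc {
	const char *name;
	uint64_t (*reset)(const struct demo_desc *rd); /* sanitised limit, NULL if RAZ */
	uint64_t val;                                  /* mask of userspace-writable bits */
};

static uint64_t sanitised_pfr0(const struct demo_desc *rd)
{
	(void)rd;
	return 0x1100000000000011ULL; /* made-up sanitised value */
}

static const struct demo_desc descs[] = {
	{ "ID_AA64PFR0_EL1", sanitised_pfr0, 0xffULL << 56 }, /* CSV2/CSV3 writable */
	{ "ID_HIDDEN_REG",   NULL,           0 },             /* RAZ: initialises to 0 */
};

int main(void)
{
	/* Mirrors the kvm_arm_init_id_regs() loop: use reset() if present, else 0. */
	for (size_t i = 0; i < sizeof(descs) / sizeof(descs[0]); i++) {
		uint64_t v = descs[i].reset ? descs[i].reset(&descs[i]) : 0;

		printf("%s -> %#llx (writable %#llx)\n", descs[i].name,
		       (unsigned long long)v, (unsigned long long)descs[i].val);
	}
	return 0;
}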
From patchwork Mon Apr 24 23:47:04 2023
Date: Mon, 24 Apr 2023 23:47:04 +0000
In-Reply-To: <20230424234704.2571444-1-jingzhangos@google.com>
References: <20230424234704.2571444-1-jingzhangos@google.com>
Message-ID: <20230424234704.2571444-7-jingzhangos@google.com>
Subject: [PATCH v7 6/6] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
From: Jing Zhang
To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton
Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Raghavendra Rao Ananta , Jing Zhang
Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on the utilities introduced by the ID register descriptor array. Signed-off-by: Jing Zhang --- arch/arm64/include/asm/cpufeature.h | 1 + arch/arm64/kernel/cpufeature.c | 2 +- arch/arm64/kvm/id_regs.c | 318 +++++++++++++++++++--------- 3 files changed, 223 insertions(+), 98 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6bf013fb110d..dc769c2eb7a4 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -915,6 +915,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) return 8; } +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur); struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); extern struct arm64_ftr_override id_aa64mmfr1_override; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 2e3e55139777..677ec4fe9f6b 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -791,7 +791,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg, return reg; } -static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur) { s64 ret = 0; diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index a5fabbb04ceb..de66bfb026f7 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -18,6 +18,86 @@ #include "sys_regs.h" +static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp, + s64 new, s64 cur) +{ + struct arm64_ftr_bits kvm_ftr = *ftrp; + + /* Some features have different safe value type in KVM than host features */ + switch (id) { + case SYS_ID_AA64DFR0_EL1: + if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT) + kvm_ftr.type = FTR_LOWER_SAFE; + break; + case SYS_ID_DFR0_EL1: + if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT) + kvm_ftr.type = FTR_LOWER_SAFE; + break; + } + + return arm64_ftr_safe_value(&kvm_ftr, new, cur); +} + +/** + * arm64_check_features() - Check if a feature register value constitutes + * a subset of features indicated by the idreg's KVM sanitised limit. + * + * This function checks if each feature field of @val is the "safe" value + * against the idreg's KVM sanitised limit returned from the reset() callback. + * If a field value in @val is the same as the one in limit, it is always + * considered safe. For register fields that are not writable, only the + * value in limit is considered the safe value. + * + * Return: 0 if all the fields are safe. Otherwise, return negative errno. + */ +static int arm64_check_features(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, + u64 val) +{ + const struct arm64_ftr_reg *ftr_reg; + const struct arm64_ftr_bits *ftrp = NULL; + u32 id = reg_to_encoding(rd); + u64 writable_mask = rd->val; + u64 limit = 0; + u64 mask = 0; + + /* For hidden and unallocated idregs without reset, only val = 0 is allowed.
*/ + if (rd->reset) { + limit = rd->reset(vcpu, rd); + ftr_reg = get_arm64_ftr_reg(id); + if (!ftr_reg) + return -EINVAL; + ftrp = ftr_reg->ftr_bits; + } + + for (; ftrp && ftrp->width; ftrp++) { + s64 f_val, f_lim, safe_val; + u64 ftr_mask; + + ftr_mask = arm64_ftr_mask(ftrp); + if ((ftr_mask & writable_mask) != ftr_mask) + continue; + + f_val = arm64_ftr_value(ftrp, val); + f_lim = arm64_ftr_value(ftrp, limit); + mask |= ftr_mask; + + if (f_val == f_lim) + safe_val = f_val; + else + safe_val = kvm_arm64_ftr_safe_value(id, ftrp, f_val, f_lim); + + if (safe_val != f_val) + return -E2BIG; + } + + /* For fields that are not writable, values in limit are the safe values. */ + if ((val & ~mask) != (limit & ~mask)) + return -E2BIG; + + return 0; +} + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -68,7 +148,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) case SYS_ID_AA64PFR0_EL1: if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -95,15 +174,10 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); break; case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); /* Set PMUver to the required version */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); break; case SYS_ID_DFR0_EL1: val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); @@ -162,9 +236,14 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; + u32 id = reg_to_encoding(rd); + int ret; + + ret = arm64_check_features(vcpu, rd, val); + if (ret) + return ret; + + idreg_write(&vcpu->kvm->arch, id, val); return 0; } @@ -198,12 +277,40 @@ static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, return id_visibility(vcpu, r); } +static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val; + u32 id = reg_to_encoding(rd); + + val = read_sanitised_ftr_reg(id); + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
+ */ + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + + return val; +} + static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 csv2, csv3; - u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -219,16 +326,30 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, if (csv3 > 1 || (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) return -EINVAL; - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; + return set_id_reg(vcpu, rd, val); +} - idreg_write(&vcpu->kvm->arch, reg_to_encoding(rd), sval); +static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val; + u32 id = reg_to_encoding(rd); - return 0; + val = read_sanitised_ftr_reg(id); + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + + return val; } static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, @@ -237,6 +358,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, { u8 pmuver, host_pmuver; bool valid_pmu; + int ret; host_pmuver = kvm_arm_pmu_get_pmuver_limit(); @@ -256,40 +378,62 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - - if (valid_pmu) { - mutex_lock(&vcpu->kvm->arch.config_lock); - - val = _idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1); + if (!valid_pmu) { + /* + * Ignore the PMUVer field in @val. The PMUVer would be determined + by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU. + */ + pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK, read_id_reg(vcpu, rd)); val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver); - _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + } - val = _idreg_read(&vcpu->kvm->arch, SYS_ID_DFR0_EL1); - val &= ~ID_DFR0_EL1_PerfMon_MASK; - val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver)); - _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + ret = arm64_check_features(vcpu, rd, val); + if (ret) + return ret; - mutex_unlock(&vcpu->kvm->arch.config_lock); - } else { + mutex_lock(&vcpu->kvm->arch.config_lock); + + _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_DFR0_EL1); + val &= ~ID_DFR0_EL1_PerfMon_MASK; + val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver)); + _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + + if (!valid_pmu) assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF); - } + + mutex_unlock(&vcpu->kvm->arch.config_lock); return 0; } +static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + u64 val; + u32 id = reg_to_encoding(rd); + + val = read_sanitised_ftr_reg(id); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), kvm_arm_pmu_get_pmuver_limit()); + + return val; +} + static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 perfmon, host_perfmon; bool valid_pmu; + int ret; host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); @@ -310,30 +454,34 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - - if (valid_pmu) { - mutex_lock(&vcpu->kvm->arch.config_lock); - - val = _idreg_read(&vcpu->kvm->arch, SYS_ID_DFR0_EL1); + if (!valid_pmu) { + /* + * Ignore the PerfMon field in @val. The PerfMon would be determined + by arch flags bit KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU. + */ + perfmon = FIELD_GET(ID_DFR0_EL1_PerfMon_MASK, read_id_reg(vcpu, rd)); val &= ~ID_DFR0_EL1_PerfMon_MASK; val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon); - _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + } - val = _idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1); - val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; - val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon)); - _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + ret = arm64_check_features(vcpu, rd, val); + if (ret) + return ret; - mutex_unlock(&vcpu->kvm->arch.config_lock); - } else { + mutex_lock(&vcpu->kvm->arch.config_lock); + + _idreg_write(&vcpu->kvm->arch, SYS_ID_DFR0_EL1, val); + + val = _idreg_read(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1); + val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; + val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon)); + _idreg_write(&vcpu->kvm->arch, SYS_ID_AA64DFR0_EL1, val); + + if (!valid_pmu) assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags, perfmon == ID_DFR0_EL1_PerfMon_IMPDEF); - } + + mutex_unlock(&vcpu->kvm->arch.config_lock); return 0; } @@ -411,9 +559,13 @@ const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* CRm=1 */ AA32_ID_SANITISED(ID_PFR0_EL1), AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, + { SYS_DESC(SYS_ID_DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, + .reset = read_sanitised_id_dfr0_el1, + .val = ID_DFR0_EL1_PerfMon_MASK, }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), AA32_ID_SANITISED(ID_MMFR1_EL1), @@ -442,8 +594,12 @@ const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + { SYS_DESC(SYS_ID_AA64PFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64pfr0_el1, + .reset = read_sanitised_id_aa64pfr0_el1, + .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), ID_UNALLOCATED(4, 3), @@ -453,8 +609,12 @@ const struct sys_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { ID_UNALLOCATED(4, 7), /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + { SYS_DESC(SYS_ID_AA64DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64dfr0_el1, + .reset = read_sanitised_id_aa64dfr0_el1, + .val = ID_AA64DFR0_EL1_PMUVer_MASK, }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), ID_UNALLOCATED(5, 3), @@ -529,40 +689,4 @@ void kvm_arm_init_id_regs(struct kvm *kvm) idreg_write(&kvm->arch, id, val); } - - mutex_lock(&kvm->arch.config_lock); - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible.
- */ - val = _idreg_read(&kvm->arch, SYS_ID_AA64PFR0_EL1); - - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); - } - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); - } - - _idreg_write(&kvm->arch, SYS_ID_AA64PFR0_EL1, val); - - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - val = _idreg_read(&kvm->arch, SYS_ID_AA64DFR0_EL1); - - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - kvm_arm_pmu_get_pmuver_limit()); - - _idreg_write(&kvm->arch, SYS_ID_AA64DFR0_EL1, val); - - mutex_unlock(&kvm->arch.config_lock); }
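Taken together, the final shape of a userspace write is: build the proposed value, run it through arm64_check_features() against the reset() limit and the writable mask, and only then store it under config_lock. The standalone sketch below approximates the per-field check, assuming uniform 4-bit fields and a lower-is-safe policy for every writable field (the real code derives both from arm64_ftr_bits, so treat this as a simplified model):

/* demo-check-features.c: user-space approximation of arm64_check_features().
 * A value is accepted only if every writable field stays at or below the
 * sanitised limit and every non-writable field matches it exactly. */
#include <stdint.h>
#include <stdio.h>
#include <errno.h>

static int check_features(uint64_t val, uint64_t limit, uint64_t writable_mask)
{
	uint64_t checked = 0;

	for (int shift = 0; shift < 64; shift += 4) {	/* uniform 4-bit fields */
		uint64_t mask = 0xfULL << shift;
		uint64_t f_val, f_lim;

		if ((mask & writable_mask) != mask)
			continue;	/* non-writable: handled by final compare */

		f_val = (val & mask) >> shift;
		f_lim = (limit & mask) >> shift;
		checked |= mask;

		/* Equal is always safe; otherwise lower-is-safe must hold. */
		if (f_val != f_lim && f_val > f_lim)
			return -E2BIG;
	}

	/* Non-writable fields must match the limit exactly. */
	if ((val & ~checked) != (limit & ~checked))
		return -E2BIG;

	return 0;
}

int main(void)
{
	uint64_t limit = 0x61;	/* two fields: 6 (writable) and 1 (fixed) */

	printf("%d\n", check_features(0x51, limit, 0xf0));	/* lower field: 0 */
	printf("%d\n", check_features(0x71, limit, 0xf0));	/* higher: -E2BIG */
	return 0;
}

As in the patch, rejecting an over-promising value with -E2BIG rather than -EINVAL lets userspace distinguish "this feature level exceeds what the host can honour" from a malformed request.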