From patchwork Fri Mar 17 05:06:32 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13178536
Date: Fri, 17 Mar 2023 05:06:32 +0000
In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com>
References: <20230317050637.766317-1-jingzhangos@google.com>
Message-ID: <20230317050637.766317-2-jingzhangos@google.com>
Subject: [PATCH v4 1/6] KVM: arm64: Move CPU ID feature registers emulation into a separate file
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Ricardo Koller,
 Raghavendra Rao Ananta, Jing Zhang

Create a new file, id_regs.c, for the CPU ID feature register emulation
code moved out of sys_regs.c, and tweak the sys_regs code accordingly.

No functional change intended.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/Makefile   |   2 +-
 arch/arm64/kvm/id_regs.c  | 506 ++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c | 464 ++--------------------------------
 arch/arm64/kvm/sys_regs.h |  41 +++
 4 files changed, 575 insertions(+), 438 deletions(-)
 create mode 100644 arch/arm64/kvm/id_regs.c

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c0c050e53157..a6a315fcd81e 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -13,7 +13,7 @@ obj-$(CONFIG_KVM) += hyp/
 kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 inject_fault.o va_layout.o handle_exit.o \
	 guest.o debug.o reset.o sys_regs.o stacktrace.o \
-	 vgic-sys-reg-v3.o fpsimd.o pkvm.o \
+	 vgic-sys-reg-v3.o fpsimd.o pkvm.o id_regs.o \
	 arch_timer.o trng.o vmid.o emulate-nested.o nested.o \
	 vgic/vgic.o vgic/vgic-init.o \
	 vgic/vgic-irqfd.o vgic/vgic-v2.o \
diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
new file mode 100644
index 000000000000..08b738852955
--- /dev/null
+++ b/arch/arm64/kvm/id_regs.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 - Google LLC
+ * Author: Jing Zhang
+ *
+ * Moved from arch/arm64/kvm/sys_regs.c
+ * Copyright (C) 2012,2013 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "sys_regs.h"
+
+static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_has_pmu(vcpu))
+		return vcpu->kvm->arch.dfr0_pmuver.imp;
+
+	return vcpu->kvm->arch.dfr0_pmuver.unimp;
+}
+
+static u8 perfmon_to_pmuver(u8 perfmon)
+{
+	switch (perfmon) {
+	case ID_DFR0_EL1_PerfMon_PMUv3:
+		return ID_AA64DFR0_EL1_PMUVer_IMP;
+	case ID_DFR0_EL1_PerfMon_IMPDEF:
+		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
+	default:
+		/* Anything ARMv8.1+ and NI have the same value. For now. */
+		return perfmon;
+	}
+}
+
+static u8 pmuver_to_perfmon(u8 pmuver)
+{
+	switch (pmuver) {
+	case ID_AA64DFR0_EL1_PMUVer_IMP:
+		return ID_DFR0_EL1_PerfMon_PMUv3;
+	case ID_AA64DFR0_EL1_PMUVer_IMP_DEF:
+		return ID_DFR0_EL1_PerfMon_IMPDEF;
+	default:
+		/* Anything ARMv8.1+ and NI have the same value. For now. */
+		return pmuver;
+	}
+}
+
+/* Read a sanitised cpufeature ID register by sys_reg_desc */
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+	u32 id = reg_to_encoding(r);
+	u64 val;
+
+	if (sysreg_visible_as_raz(vcpu, r))
+		return 0;
+
+	val = read_sanitised_ftr_reg(id);
+
+	switch (id) {
+	case SYS_ID_AA64PFR0_EL1:
+		if (!vcpu_has_sve(vcpu))
+			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
+				  (u64)vcpu->kvm->arch.pfr0_csv2);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
+				  (u64)vcpu->kvm->arch.pfr0_csv3);
+		if (kvm_vgic_global_state.type == VGIC_V3) {
+			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
+			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
+		}
+		break;
+	case SYS_ID_AA64PFR1_EL1:
+		if (!kvm_has_mte(vcpu->kvm))
+			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
+
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
+		break;
+	case SYS_ID_AA64ISAR1_EL1:
+		if (!vcpu_has_ptrauth(vcpu))
+			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI));
+		break;
+	case SYS_ID_AA64ISAR2_EL1:
+		if (!vcpu_has_ptrauth(vcpu))
+			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
+				 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
+		if (!cpus_have_final_cap(ARM64_HAS_WFXT))
+			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
+		break;
+	case SYS_ID_AA64DFR0_EL1:
+		/* Limit debug to ARMv8.0 */
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
+		/* Set PMUver to the required version */
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+				  vcpu_pmuver(vcpu));
+		/* Hide SPE from guests */
+		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
+		break;
+	case SYS_ID_DFR0_EL1:
+		val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon),
+				  pmuver_to_perfmon(vcpu_pmuver(vcpu)));
+		break;
+	case SYS_ID_AA64MMFR2_EL1:
+		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
+		break;
+	case SYS_ID_MMFR4_EL1:
+		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
+		break;
+	}
+
+	return val;
+}
+
+/* cpufeature ID register access trap handlers */
+
+static bool access_id_reg(struct kvm_vcpu *vcpu,
+			  struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		return write_to_read_only(vcpu, p, r);
+
+	p->regval = read_id_reg(vcpu, r);
+	if (vcpu_has_nv(vcpu))
+		access_nested_id_reg(vcpu, p, r);
+
+	return true;
+}
+
+/*
+ * cpufeature ID register user accessors
+ *
+ * For now, these registers are immutable for userspace, so no values
+ * are stored, and for set_id_reg() we don't allow the effective value
+ * to be changed.
+ */
+static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		      u64 *val)
+{
+	*val = read_id_reg(vcpu, rd);
+	return 0;
+}
+
+static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		      u64 val)
+{
+	/* This is what we mean by invariant: you can't change it. */
+	if (val != read_id_reg(vcpu, rd))
+		return -EINVAL;
+
+	return 0;
+}
+
+static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
+				  const struct sys_reg_desc *r)
+{
+	u32 id = reg_to_encoding(r);
+
+	switch (id) {
+	case SYS_ID_AA64ZFR0_EL1:
+		if (!vcpu_has_sve(vcpu))
+			return REG_RAZ;
+		break;
+	}
+
+	return 0;
+}
+
+static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu,
+				       const struct sys_reg_desc *r)
+{
+	/*
+	 * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any
+	 * EL. Promote to RAZ/WI in order to guarantee consistency between
+	 * systems.
+	 */
+	if (!kvm_supports_32bit_el0())
+		return REG_RAZ | REG_USER_WI;
+
+	return id_visibility(vcpu, r);
+}
+
+static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+			       const struct sys_reg_desc *rd,
+			       u64 val)
+{
+	u8 csv2, csv3;
+
+	/*
+	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
+	 * it doesn't promise more than what is actually provided (the
+	 * guest could otherwise be covered in ectoplasmic residue).
+	 */
+	csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
+	if (csv2 > 1 ||
+	    (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
+		return -EINVAL;
+
+	/* Same thing for CSV3 */
+	csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
+	if (csv3 > 1 ||
+	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
+		return -EINVAL;
+
+	/* We can only differ with CSV[23], and anything else is an error */
+	val ^= read_id_reg(vcpu, rd);
+	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
+		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
+	if (val)
+		return -EINVAL;
+
+	vcpu->kvm->arch.pfr0_csv2 = csv2;
+	vcpu->kvm->arch.pfr0_csv3 = csv3;
+
+	return 0;
+}
+
+static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+			       const struct sys_reg_desc *rd,
+			       u64 val)
+{
+	u8 pmuver, host_pmuver;
+	bool valid_pmu;
+
+	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
+
+	/*
+	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
+	 * as it doesn't promise more than what the HW gives us. We
+	 * allow an IMPDEF PMU though, only if no PMU is supported
+	 * (KVM backward compatibility handling).
+	 */
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
+	if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver))
+		return -EINVAL;
+
+	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+
+	/* Make sure view register and PMU support do match */
+	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
+		return -EINVAL;
+
+	/* We can only differ with PMUver, and anything else is an error */
+	val ^= read_id_reg(vcpu, rd);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	if (val)
+		return -EINVAL;
+
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
+
+	return 0;
+}
+
+static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
+			   const struct sys_reg_desc *rd,
+			   u64 val)
+{
+	u8 perfmon, host_perfmon;
+	bool valid_pmu;
+
+	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
+
+	/*
+	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
+	 * it doesn't promise more than what the HW gives us on the
+	 * AArch64 side (as everything is emulated with that), and
+	 * that this is a PMUv3.
+	 */
+	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val);
+	if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) ||
+	    (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3))
+		return -EINVAL;
+
+	valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF);
+
+	/* Make sure view register and PMU support do match */
+	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
+		return -EINVAL;
+
+	/* We can only differ with PerfMon, and anything else is an error */
+	val ^= read_id_reg(vcpu, rd);
+	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
+	if (val)
+		return -EINVAL;
+
+	if (valid_pmu)
+		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
+	else
+		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
+
+	return 0;
+}
+
+/* sys_reg_desc initialiser for known cpufeature ID registers */
+#define ID_SANITISED(name) {			\
+	SYS_DESC(SYS_##name),			\
+	.access	= access_id_reg,		\
+	.get_user = get_id_reg,			\
+	.set_user = set_id_reg,			\
+	.visibility = id_visibility,		\
+}
+
+/* sys_reg_desc initialiser for known cpufeature ID registers */
+#define AA32_ID_SANITISED(name) {		\
+	SYS_DESC(SYS_##name),			\
+	.access	= access_id_reg,		\
+	.get_user = get_id_reg,			\
+	.set_user = set_id_reg,			\
+	.visibility = aa32_id_visibility,	\
+}
+
+/*
+ * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
+ * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
+ * (1 <= crm < 8, 0 <= Op2 < 8).
+ */
+#define ID_UNALLOCATED(crm, op2) {			\
+	Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2),	\
+	.access = access_id_reg,			\
+	.get_user = get_id_reg,				\
+	.set_user = set_id_reg,				\
+	.visibility = raz_visibility			\
+}
+
+/*
+ * sys_reg_desc initialiser for known ID registers that we hide from guests.
+ * For now, these are exposed just like unallocated ID regs: they appear
+ * RAZ for the guest.
+ */
+#define ID_HIDDEN(name) {			\
+	SYS_DESC(SYS_##name),			\
+	.access = access_id_reg,		\
+	.get_user = get_id_reg,			\
+	.set_user = set_id_reg,			\
+	.visibility = raz_visibility,		\
+}
+
+static const struct sys_reg_desc id_reg_descs[] = {
+	/*
+	 * ID regs: all ID_SANITISED() entries here must have corresponding
+	 * entries in arm64_ftr_regs[].
+	 */
+
+	/* AArch64 mappings of the AArch32 ID registers */
+	/* CRm=1 */
+	AA32_ID_SANITISED(ID_PFR0_EL1),
+	AA32_ID_SANITISED(ID_PFR1_EL1),
+	{ SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
+	  .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
+	  .visibility = aa32_id_visibility, },
+	ID_HIDDEN(ID_AFR0_EL1),
+	AA32_ID_SANITISED(ID_MMFR0_EL1),
+	AA32_ID_SANITISED(ID_MMFR1_EL1),
+	AA32_ID_SANITISED(ID_MMFR2_EL1),
+	AA32_ID_SANITISED(ID_MMFR3_EL1),
+
+	/* CRm=2 */
+	AA32_ID_SANITISED(ID_ISAR0_EL1),
+	AA32_ID_SANITISED(ID_ISAR1_EL1),
+	AA32_ID_SANITISED(ID_ISAR2_EL1),
+	AA32_ID_SANITISED(ID_ISAR3_EL1),
+	AA32_ID_SANITISED(ID_ISAR4_EL1),
+	AA32_ID_SANITISED(ID_ISAR5_EL1),
+	AA32_ID_SANITISED(ID_MMFR4_EL1),
+	AA32_ID_SANITISED(ID_ISAR6_EL1),
+
+	/* CRm=3 */
+	AA32_ID_SANITISED(MVFR0_EL1),
+	AA32_ID_SANITISED(MVFR1_EL1),
+	AA32_ID_SANITISED(MVFR2_EL1),
+	ID_UNALLOCATED(3, 3),
+	AA32_ID_SANITISED(ID_PFR2_EL1),
+	ID_HIDDEN(ID_DFR1_EL1),
+	AA32_ID_SANITISED(ID_MMFR5_EL1),
+	ID_UNALLOCATED(3, 7),
+
+	/* AArch64 ID registers */
+	/* CRm=4 */
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
+	  .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
+	ID_SANITISED(ID_AA64PFR1_EL1),
+	ID_UNALLOCATED(4, 2),
+	ID_UNALLOCATED(4, 3),
+	ID_SANITISED(ID_AA64ZFR0_EL1),
+	ID_HIDDEN(ID_AA64SMFR0_EL1),
+	ID_UNALLOCATED(4, 6),
+	ID_UNALLOCATED(4, 7),
+
+	/* CRm=5 */
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
+	  .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
+	ID_SANITISED(ID_AA64DFR1_EL1),
+	ID_UNALLOCATED(5, 2),
+	ID_UNALLOCATED(5, 3),
+	ID_HIDDEN(ID_AA64AFR0_EL1),
+	ID_HIDDEN(ID_AA64AFR1_EL1),
+	ID_UNALLOCATED(5, 6),
+	ID_UNALLOCATED(5, 7),
+
+	/* CRm=6 */
+	ID_SANITISED(ID_AA64ISAR0_EL1),
+	ID_SANITISED(ID_AA64ISAR1_EL1),
+	ID_SANITISED(ID_AA64ISAR2_EL1),
+	ID_UNALLOCATED(6, 3),
+	ID_UNALLOCATED(6, 4),
+	ID_UNALLOCATED(6, 5),
+	ID_UNALLOCATED(6, 6),
+	ID_UNALLOCATED(6, 7),
+
+	/* CRm=7 */
+	ID_SANITISED(ID_AA64MMFR0_EL1),
+	ID_SANITISED(ID_AA64MMFR1_EL1),
+	ID_SANITISED(ID_AA64MMFR2_EL1),
+	ID_UNALLOCATED(7, 3),
+	ID_UNALLOCATED(7, 4),
+	ID_UNALLOCATED(7, 5),
+	ID_UNALLOCATED(7, 6),
+	ID_UNALLOCATED(7, 7),
+};
+
+/**
+ * emulate_id_reg - Emulate a guest access to an AArch64 CPU ID feature register
+ * @vcpu: The VCPU pointer
+ * @params: Decoded system register parameters
+ *
+ * Return: 1, to indicate that the access has been handled (an UNDEF is
+ * injected for unknown ID registers).
+ */
+int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params)
+{
+	const struct sys_reg_desc *r;
+
+	r = find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs));
+
+	if (likely(r)) {
+		perform_access(vcpu, params, r);
+	} else {
+		print_sys_reg_msg(params,
+				  "Unsupported guest id_reg access at: %lx [%08lx]\n",
+				  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
+		kvm_inject_undefined(vcpu);
+	}
+
+	return 1;
+}
+
+void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu)
+{
+	unsigned long i;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++)
+		if (id_reg_descs[i].reset)
+			id_reg_descs[i].reset(vcpu, &id_reg_descs[i]);
+}
+
+int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	return kvm_sys_reg_get_user(vcpu, reg,
+				    id_reg_descs, ARRAY_SIZE(id_reg_descs));
+}
+
+int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	return kvm_sys_reg_set_user(vcpu, reg,
+				    id_reg_descs, ARRAY_SIZE(id_reg_descs));
+}
+
+bool kvm_arm_check_idreg_table(void)
+{
+	return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false);
+}
+
+int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+{
+	const struct sys_reg_desc *i2, *end2;
+	unsigned int total = 0;
+	int err;
+
+	i2 = id_reg_descs;
+	end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs);
+
+	while (i2 != end2) {
+		err = walk_one_sys_reg(vcpu, i2++, &uind, &total);
+		if (err)
+			return err;
+	}
+	return total;
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 53749d3a0996..22b60474fcab 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -53,16 +53,6 @@ static bool read_from_write_only(struct kvm_vcpu *vcpu,
 	return false;
 }
 
-static bool write_to_read_only(struct kvm_vcpu *vcpu,
-			       struct sys_reg_params *params,
-			       const struct sys_reg_desc *r)
-{
-	WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n");
-	print_sys_reg_instr(params);
-	kvm_inject_undefined(vcpu);
-	return false;
-}
-
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
 	u64 val = 0x8badf00d8badf00d;
@@ -1153,163 +1143,6 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
-{
-	if (kvm_vcpu_has_pmu(vcpu))
-		return vcpu->kvm->arch.dfr0_pmuver.imp;
-
-	return vcpu->kvm->arch.dfr0_pmuver.unimp;
-}
-
-static u8 perfmon_to_pmuver(u8 perfmon)
-{
-	switch (perfmon) {
-	case ID_DFR0_EL1_PerfMon_PMUv3:
-		return ID_AA64DFR0_EL1_PMUVer_IMP;
-	case ID_DFR0_EL1_PerfMon_IMPDEF:
-		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
-	default:
-		/* Anything ARMv8.1+ and NI have the same value. For now. */
-		return perfmon;
-	}
-}
-
-static u8 pmuver_to_perfmon(u8 pmuver)
-{
-	switch (pmuver) {
-	case ID_AA64DFR0_EL1_PMUVer_IMP:
-		return ID_DFR0_EL1_PerfMon_PMUv3;
-	case ID_AA64DFR0_EL1_PMUVer_IMP_DEF:
-		return ID_DFR0_EL1_PerfMon_IMPDEF;
-	default:
-		/* Anything ARMv8.1+ and NI have the same value. For now. */
-		return pmuver;
-	}
-}
-
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
-{
-	u32 id = reg_to_encoding(r);
-	u64 val;
-
-	if (sysreg_visible_as_raz(vcpu, r))
-		return 0;
-
-	val = read_sanitised_ftr_reg(id);
-
-	switch (id) {
-	case SYS_ID_AA64PFR0_EL1:
-		if (!vcpu_has_sve(vcpu))
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
-		if (kvm_vgic_global_state.type == VGIC_V3) {
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
-			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
-		}
-		break;
-	case SYS_ID_AA64PFR1_EL1:
-		if (!kvm_has_mte(vcpu->kvm))
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
-
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
-		break;
-	case SYS_ID_AA64ISAR1_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI));
-		break;
-	case SYS_ID_AA64ISAR2_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
-		if (!cpus_have_final_cap(ARM64_HAS_WFXT))
-			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
-		break;
-	case SYS_ID_AA64DFR0_EL1:
-		/* Limit debug to ARMv8.0 */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
-		/* Set PMUver to the required version */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
-				  vcpu_pmuver(vcpu));
-		/* Hide SPE from guests */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
-		break;
-	case SYS_ID_DFR0_EL1:
-		val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon),
-				  pmuver_to_perfmon(vcpu_pmuver(vcpu)));
-		break;
-	case SYS_ID_AA64MMFR2_EL1:
-		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
-		break;
-	case SYS_ID_MMFR4_EL1:
-		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
-		break;
-	}
-
-	return val;
-}
-
-static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
-				  const struct sys_reg_desc *r)
-{
-	u32 id = reg_to_encoding(r);
-
-	switch (id) {
-	case SYS_ID_AA64ZFR0_EL1:
-		if (!vcpu_has_sve(vcpu))
-			return REG_RAZ;
-		break;
-	}
-
-	return 0;
-}
-
-static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu,
-				       const struct sys_reg_desc *r)
-{
-	/*
-	 * AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any
-	 * EL. Promote to RAZ/WI in order to guarantee consistency between
-	 * systems.
-	 */
-	if (!kvm_supports_32bit_el0())
-		return REG_RAZ | REG_USER_WI;
-
-	return id_visibility(vcpu, r);
-}
-
-static unsigned int raz_visibility(const struct kvm_vcpu *vcpu,
-				   const struct sys_reg_desc *r)
-{
-	return REG_RAZ;
-}
-
-/* cpufeature ID register access trap handlers */
-
-static bool access_id_reg(struct kvm_vcpu *vcpu,
-			  struct sys_reg_params *p,
-			  const struct sys_reg_desc *r)
-{
-	if (p->is_write)
-		return write_to_read_only(vcpu, p, r);
-
-	p->regval = read_id_reg(vcpu, r);
-	if (vcpu_has_nv(vcpu))
-		access_nested_id_reg(vcpu, p, r);
-
-	return true;
-}
-
 /* Visibility overrides for SVE-specific control registers */
 static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 				   const struct sys_reg_desc *rd)
@@ -1320,144 +1153,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
-			       const struct sys_reg_desc *rd,
-			       u64 val)
-{
-	u8 csv2, csv3;
-
-	/*
-	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
-	 * it doesn't promise more than what is actually provided (the
-	 * guest could otherwise be covered in ectoplasmic residue).
-	 */
-	csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV2_SHIFT);
-	if (csv2 > 1 ||
-	    (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
-		return -EINVAL;
-
-	/* Same thing for CSV3 */
-	csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_EL1_CSV3_SHIFT);
-	if (csv3 > 1 ||
-	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
-		return -EINVAL;
-
-	/* We can only differ with CSV[23], and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
-	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
-		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
-	if (val)
-		return -EINVAL;
-
-	vcpu->kvm->arch.pfr0_csv2 = csv2;
-	vcpu->kvm->arch.pfr0_csv3 = csv3;
-
-	return 0;
-}
-
-static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
-			       const struct sys_reg_desc *rd,
-			       u64 val)
-{
-	u8 pmuver, host_pmuver;
-	bool valid_pmu;
-
-	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
-
-	/*
-	 * Allow AA64DFR0_EL1.PMUver to be set from userspace as long
-	 * as it doesn't promise more than what the HW gives us. We
-	 * allow an IMPDEF PMU though, only if no PMU is supported
-	 * (KVM backward compatibility handling).
-	 */
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), val);
-	if ((pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF && pmuver > host_pmuver))
-		return -EINVAL;
-
-	valid_pmu = (pmuver != 0 && pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
-
-	/* Make sure view register and PMU support do match */
-	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
-
-	/* We can only differ with PMUver, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	if (val)
-		return -EINVAL;
-
-	if (valid_pmu)
-		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
-	else
-		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
-
-	return 0;
-}
-
-static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
-			   const struct sys_reg_desc *rd,
-			   u64 val)
-{
-	u8 perfmon, host_perfmon;
-	bool valid_pmu;
-
-	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
-
-	/*
-	 * Allow DFR0_EL1.PerfMon to be set from userspace as long as
-	 * it doesn't promise more than what the HW gives us on the
-	 * AArch64 side (as everything is emulated with that), and
-	 * that this is a PMUv3.
-	 */
-	perfmon = FIELD_GET(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), val);
-	if ((perfmon != ID_DFR0_EL1_PerfMon_IMPDEF && perfmon > host_perfmon) ||
-	    (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3))
-		return -EINVAL;
-
-	valid_pmu = (perfmon != 0 && perfmon != ID_DFR0_EL1_PerfMon_IMPDEF);
-
-	/* Make sure view register and PMU support do match */
-	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
-		return -EINVAL;
-
-	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
-	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
-	if (val)
-		return -EINVAL;
-
-	if (valid_pmu)
-		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
-	else
-		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
-
-	return 0;
-}
-
-/*
- * cpufeature ID register user accessors
- *
- * For now, these registers are immutable for userspace, so no values
- * are stored, and for set_id_reg() we don't allow the effective value
- * to be changed.
- */
-static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
-		      u64 *val)
-{
-	*val = read_id_reg(vcpu, rd);
-	return 0;
-}
-
-static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
-		      u64 val)
-{
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd))
-		return -EINVAL;
-
-	return 0;
-}
-
 static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		       u64 *val)
 {
@@ -1642,50 +1337,6 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.visibility = elx2_visibility,		\
 }
 
-/* sys_reg_desc initialiser for known cpufeature ID registers */
-#define ID_SANITISED(name) {			\
-	SYS_DESC(SYS_##name),			\
-	.access	= access_id_reg,		\
-	.get_user = get_id_reg,			\
-	.set_user = set_id_reg,			\
-	.visibility = id_visibility,		\
-}
-
-/* sys_reg_desc initialiser for known cpufeature ID registers */
-#define AA32_ID_SANITISED(name) {		\
-	SYS_DESC(SYS_##name),			\
-	.access	= access_id_reg,		\
-	.get_user = get_id_reg,			\
-	.set_user = set_id_reg,			\
-	.visibility = aa32_id_visibility,	\
-}
-
-/*
- * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
- * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
- * (1 <= crm < 8, 0 <= Op2 < 8).
- */
-#define ID_UNALLOCATED(crm, op2) {			\
-	Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2),	\
-	.access = access_id_reg,			\
-	.get_user = get_id_reg,				\
-	.set_user = set_id_reg,				\
-	.visibility = raz_visibility			\
-}
-
-/*
- * sys_reg_desc initialiser for known ID registers that we hide from guests.
- * For now, these are exposed just like unallocated ID regs: they appear
- * RAZ for the guest.
- */
-#define ID_HIDDEN(name) {			\
-	SYS_DESC(SYS_##name),			\
-	.access = access_id_reg,		\
-	.get_user = get_id_reg,			\
-	.set_user = set_id_reg,			\
-	.visibility = raz_visibility,		\
-}
-
 static bool access_sp_el1(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
@@ -1776,87 +1427,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
 
-	/*
-	 * ID regs: all ID_SANITISED() entries here must have corresponding
-	 * entries in arm64_ftr_regs[].
-	 */
-
-	/* AArch64 mappings of the AArch32 ID registers */
-	/* CRm=1 */
-	AA32_ID_SANITISED(ID_PFR0_EL1),
-	AA32_ID_SANITISED(ID_PFR1_EL1),
-	{ SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
-	  .visibility = aa32_id_visibility, },
-	ID_HIDDEN(ID_AFR0_EL1),
-	AA32_ID_SANITISED(ID_MMFR0_EL1),
-	AA32_ID_SANITISED(ID_MMFR1_EL1),
-	AA32_ID_SANITISED(ID_MMFR2_EL1),
-	AA32_ID_SANITISED(ID_MMFR3_EL1),
-
-	/* CRm=2 */
-	AA32_ID_SANITISED(ID_ISAR0_EL1),
-	AA32_ID_SANITISED(ID_ISAR1_EL1),
-	AA32_ID_SANITISED(ID_ISAR2_EL1),
-	AA32_ID_SANITISED(ID_ISAR3_EL1),
-	AA32_ID_SANITISED(ID_ISAR4_EL1),
-	AA32_ID_SANITISED(ID_ISAR5_EL1),
-	AA32_ID_SANITISED(ID_MMFR4_EL1),
-	AA32_ID_SANITISED(ID_ISAR6_EL1),
-
-	/* CRm=3 */
-	AA32_ID_SANITISED(MVFR0_EL1),
-	AA32_ID_SANITISED(MVFR1_EL1),
-	AA32_ID_SANITISED(MVFR2_EL1),
-	ID_UNALLOCATED(3,3),
-	AA32_ID_SANITISED(ID_PFR2_EL1),
-	ID_HIDDEN(ID_DFR1_EL1),
-	AA32_ID_SANITISED(ID_MMFR5_EL1),
-	ID_UNALLOCATED(3,7),
-
-	/* AArch64 ID registers */
-	/* CRm=4 */
-	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
-	ID_SANITISED(ID_AA64PFR1_EL1),
-	ID_UNALLOCATED(4,2),
-	ID_UNALLOCATED(4,3),
-	ID_SANITISED(ID_AA64ZFR0_EL1),
-	ID_HIDDEN(ID_AA64SMFR0_EL1),
-	ID_UNALLOCATED(4,6),
-	ID_UNALLOCATED(4,7),
-
-	/* CRm=5 */
-	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
-	ID_SANITISED(ID_AA64DFR1_EL1),
-	ID_UNALLOCATED(5,2),
-	ID_UNALLOCATED(5,3),
-	ID_HIDDEN(ID_AA64AFR0_EL1),
-	ID_HIDDEN(ID_AA64AFR1_EL1),
-	ID_UNALLOCATED(5,6),
-	ID_UNALLOCATED(5,7),
-
-	/* CRm=6 */
-	ID_SANITISED(ID_AA64ISAR0_EL1),
-	ID_SANITISED(ID_AA64ISAR1_EL1),
-	ID_SANITISED(ID_AA64ISAR2_EL1),
-	ID_UNALLOCATED(6,3),
-	ID_UNALLOCATED(6,4),
-	ID_UNALLOCATED(6,5),
-	ID_UNALLOCATED(6,6),
-	ID_UNALLOCATED(6,7),
-
-	/* CRm=7 */
-	ID_SANITISED(ID_AA64MMFR0_EL1),
-	ID_SANITISED(ID_AA64MMFR1_EL1),
-	ID_SANITISED(ID_AA64MMFR2_EL1),
-	ID_UNALLOCATED(7,3),
-	ID_UNALLOCATED(7,4),
-	ID_UNALLOCATED(7,5),
-	ID_UNALLOCATED(7,6),
-	ID_UNALLOCATED(7,7),
-
 	{ SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
 	{ SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 },
 	{ SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
@@ -2531,8 +2101,8 @@ static const struct sys_reg_desc cp15_64_regs[] = {
 	{ SYS_DESC(SYS_AARCH32_CNTP_CVAL), access_arch_timer },
 };
 
-static bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
-			       bool is_32)
+bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
+			bool is_32)
 {
 	unsigned int i;
 
@@ -2557,7 +2127,7 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static void perform_access(struct kvm_vcpu *vcpu,
+void perform_access(struct kvm_vcpu *vcpu,
 		    struct sys_reg_params *params,
 		    const struct sys_reg_desc *r)
 {
@@ -2912,6 +2482,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 {
 	unsigned long i;
 
+	kvm_arm_reset_id_regs(vcpu);
+
 	for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++)
 		if (sys_reg_descs[i].reset)
 			sys_reg_descs[i].reset(vcpu, &sys_reg_descs[i]);
@@ -2932,6 +2504,9 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);
 
+	if (is_id_reg(reg_to_encoding(&params)))
+		return emulate_id_reg(vcpu, &params);
+
 	if (!emulate_sys_reg(vcpu, &params))
 		return 1;
 
@@ -3160,6 +2735,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (err != -ENOENT)
 		return err;
 
+	err = kvm_arm_get_id_reg(vcpu, reg);
+	if (err != -ENOENT)
+		return err;
+
 	return kvm_sys_reg_get_user(vcpu, reg,
 				    sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
 }
@@ -3204,6 +2783,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (err != -ENOENT)
 		return err;
 
+	err = kvm_arm_set_id_reg(vcpu, reg);
+	if (err != -ENOENT)
+		return err;
+
 	return kvm_sys_reg_set_user(vcpu, reg,
 				    sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
 }
@@ -3250,10 +2833,10 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
 	return true;
 }
 
-static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
-			    const struct sys_reg_desc *rd,
-			    u64 __user **uind,
-			    unsigned int *total)
+int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
+		     const struct sys_reg_desc *rd,
+		     u64 __user **uind,
+		     unsigned int *total)
 {
 	/*
 	 * Ignore registers we trap but don't save,
@@ -3294,6 +2877,7 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
 {
 	return ARRAY_SIZE(invariant_sys_regs)
 		+ num_demux_regs()
+		+ kvm_arm_walk_id_regs(vcpu, (u64 __user *)NULL)
 		+ walk_sys_regs(vcpu, (u64 __user *)NULL);
 }
 
@@ -3309,6 +2893,11 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		uindices++;
 	}
 
+	err = kvm_arm_walk_id_regs(vcpu, uindices);
+	if (err < 0)
+		return err;
+	uindices += err;
+
 	err = walk_sys_regs(vcpu, uindices);
 	if (err < 0)
 		return err;
@@ -3323,6 +2912,7 @@ int __init kvm_sys_reg_table_init(void)
 	unsigned int i;
 
 	/* Make sure tables are unique and in order. */
+	valid &= kvm_arm_check_idreg_table();
 	valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
 	valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true);
 	valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 6b11f2cc7146..ad41305348f7 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -210,6 +210,35 @@ find_reg(const struct sys_reg_params *params, const struct sys_reg_desc table[],
 	return __inline_bsearch((void *)pval, table, num, sizeof(table[0]), match_sys_reg);
 }
 
+static inline unsigned int raz_visibility(const struct kvm_vcpu *vcpu,
+					  const struct sys_reg_desc *r)
+{
+	return REG_RAZ;
+}
+
+static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
+				      struct sys_reg_params *params,
+				      const struct sys_reg_desc *r)
+{
+	WARN_ONCE(1, "Unexpected sys_reg write to read-only register\n");
+	print_sys_reg_instr(params);
+	kvm_inject_undefined(vcpu);
+	return false;
+}
+
+/*
+ * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
+ * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */
+static inline bool is_id_reg(u32 id)
+{
+	return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+		sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+		sys_reg_CRm(id) < 8);
+}
+
+void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params,
+		    const struct sys_reg_desc *r);
+
 const struct sys_reg_desc *get_reg_by_id(u64 id,
 					 const struct sys_reg_desc table[],
 					 unsigned int num);
@@ -220,6 +249,18 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
 			 const struct sys_reg_desc table[], unsigned int num);
 int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
 			 const struct sys_reg_desc table[], unsigned int num);
+bool check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
+			bool is_32);
+int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
+		     const struct sys_reg_desc *rd,
+		     u64 __user **uind,
+		     unsigned int *total);
+int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params);
+void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu);
+int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+bool kvm_arm_check_idreg_table(void);
+int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind);
 
 #define AA32(_x)	.aarch32_map = AA32_##_x
 #define Op0(_x)		.Op0 = _x
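A standalone sketch of the is_id_reg() predicate added above, for reference:
the feature ID space is (Op0, Op1, CRn) = (3, 0, 0) with CRm in 1..7 and any
Op2, i.e. 7 x 8 = 56 encodings. The struct below is a simplified,
hypothetical stand-in for the kernel's encoding helpers (sys_reg_Op0() and
friends); it is illustration only, not kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified representation of an unpacked system register encoding. */
struct sys_reg_enc {
	uint8_t op0, op1, crn, crm, op2;
};

/* Same rule as the kernel's is_id_reg(): (3, 0, 0, 1..7, 0..7). */
static bool is_id_reg(struct sys_reg_enc e)
{
	return e.op0 == 3 && e.op1 == 0 && e.crn == 0 &&
	       e.crm >= 1 && e.crm < 8;
}

int main(void)
{
	struct sys_reg_enc id_aa64pfr0 = { 3, 0, 0, 4, 0 };	/* ID_AA64PFR0_EL1 */
	struct sys_reg_enc sctlr       = { 3, 0, 1, 0, 0 };	/* SCTLR_EL1 */

	printf("%d %d\n", is_id_reg(id_aa64pfr0), is_id_reg(sctlr));	/* 1 0 */
	return 0;
}

This is the same check kvm_handle_sys_reg() now uses to route ID register
traps to emulate_id_reg() instead of the generic sys_regs table.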
From patchwork Fri Mar 17 05:06:33 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13178534
Date: Fri, 17 Mar 2023 05:06:33 +0000
In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com>
References: <20230317050637.766317-1-jingzhangos@google.com>
Message-ID: <20230317050637.766317-3-jingzhangos@google.com>
Subject: [PATCH v4 2/6] KVM: arm64: Save ID registers' sanitized value per guest
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Ricardo Koller,
 Raghavendra Rao Ananta, Jing Zhang

From: Reiji Watanabe

Introduce id_regs[] in kvm_arch as storage for the guest's ID registers,
and save the ID registers' sanitized values in the array at KVM_CREATE_VM.
Use the saved values when ID registers are read by the guest or by
userspace (via KVM_GET_ONE_REG).

No functional change intended.
Signed-off-by: Reiji Watanabe
Co-developed-by: Jing Zhang
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 11 ++++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/id_regs.c          | 44 ++++++++++++++++++++++++-------
 arch/arm64/kvm/sys_regs.c         |  2 +-
 arch/arm64/kvm/sys_regs.h         |  1 +
 5 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a1892a8f6032..fb6b50b1f111 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -245,6 +245,15 @@ struct kvm_arch {
 	 * the associated pKVM instance in the hypervisor.
 	 */
 	struct kvm_protected_vm pkvm;
+
+	/*
+	 * Save ID registers for the guest in id_regs[].
+	 * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
+	 * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+	 */
+#define KVM_ARM_ID_REG_NUM	56
+#define IDREG_IDX(id)		(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+	u64 id_regs[KVM_ARM_ID_REG_NUM];
 };
 
 struct kvm_vcpu_fault_info {
@@ -1005,6 +1014,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 				struct kvm_arm_copy_mte_tags *copy_tags);
 
+void kvm_arm_set_default_id_regs(struct kvm *kvm);
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3bd732eaf087..4579c878ab30 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -153,6 +153,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
+	kvm_arm_set_default_id_regs(kvm);
 
 	/*
 	 * Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
index 08b738852955..e393b5730557 100644
--- a/arch/arm64/kvm/id_regs.c
+++ b/arch/arm64/kvm/id_regs.c
@@ -52,16 +52,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 	}
 }
 
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u32 id = reg_to_encoding(r);
-	u64 val;
-
-	if (sysreg_visible_as_raz(vcpu, r))
-		return 0;
-
-	val = read_sanitised_ftr_reg(id);
+	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
 
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
@@ -126,6 +119,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
 	return val;
 }
 
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+	if (sysreg_visible_as_raz(vcpu, r))
+		return 0;
+
+	return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
+}
+
 /* cpufeature ID register access trap handlers */
 
 static bool access_id_reg(struct kvm_vcpu *vcpu,
@@ -504,3 +505,28 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
 	}
 	return total;
 }
+
+/*
+ * Set the guest's ID registers that are defined in id_reg_descs[]
+ * with ID_SANITISED() to the host's sanitized value.
+ */
+void kvm_arm_set_default_id_regs(struct kvm *kvm)
+{
+	int i;
+	u32 id;
+	u64 val;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) {
+		id = reg_to_encoding(&id_reg_descs[i]);
+		if (WARN_ON_ONCE(!is_id_reg(id)))
+			/* Shouldn't happen */
+			continue;
+
+		if (id_reg_descs[i].visibility == raz_visibility)
+			/* Hidden or reserved ID register */
+			continue;
+
+		val = read_sanitised_ftr_reg(id);
+		kvm->arch.id_regs[IDREG_IDX(id)] = val;
+	}
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22b60474fcab..3243c924527e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -354,7 +354,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
 	if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index ad41305348f7..ee136ba28fa5 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -261,6 +261,7 @@ int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
 bool kvm_arm_check_idreg_table(void);
 int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind);
+u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 
 #define AA32(_x)	.aarch32_map = AA32_##_x
 #define Op0(_x)		.Op0 = _x
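The IDREG_IDX() macro introduced in this patch maps an ID register encoding
to a dense array index, ((CRm - 1) << 3) | Op2, which covers exactly
KVM_ARM_ID_REG_NUM = 56 slots (CRm 1..7 times Op2 0..7). A minimal sketch
checking that property; note it takes CRm/Op2 directly, whereas the kernel
macro extracts them from the full encoding with sys_reg_CRm()/sys_reg_Op2():

#include <assert.h>
#include <stdint.h>

#define KVM_ARM_ID_REG_NUM 56
/* Simplified IDREG_IDX() for illustration. */
#define IDREG_IDX(crm, op2) ((((crm) - 1) << 3) | (op2))

int main(void)
{
	/* Every valid (CRm, Op2) pair lands inside the 56-entry array. */
	for (uint32_t crm = 1; crm < 8; crm++)
		for (uint32_t op2 = 0; op2 < 8; op2++)
			assert(IDREG_IDX(crm, op2) < KVM_ARM_ID_REG_NUM);

	/* ID_AA64PFR0_EL1 is (3, 0, 0, 4, 0), so it lands at index 24. */
	assert(IDREG_IDX(4, 0) == 24);
	return 0;
}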
From patchwork Fri Mar 17 05:06:34 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13178533
Date: Fri, 17 Mar 2023 05:06:34 +0000
In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com>
References: <20230317050637.766317-1-jingzhangos@google.com>
Message-ID: <20230317050637.766317-4-jingzhangos@google.com>
Subject: [PATCH v4 3/6] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
 Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Ricardo Koller,
 Raghavendra Rao Ananta, Jing Zhang

With per guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
userspace can be stored in the corresponding ID register itself.

No functional change intended.
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h  |  2 --
 arch/arm64/kvm/arm.c               | 19 +------------------
 arch/arm64/kvm/hyp/nvhe/sys_regs.c |  7 +++----
 arch/arm64/kvm/id_regs.c           | 30 ++++++++++++++++++++++--------
 4 files changed, 26 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fb6b50b1f111..e926ea91a73c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -230,8 +230,6 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
-	u8 pfr0_csv2;
-	u8 pfr0_csv3;
 	struct {
 		u8 imp:4;
 		u8 unimp:4;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4579c878ab30..c78d68d011cb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -104,22 +104,6 @@ static int kvm_arm_default_max_vcpus(void)
 	return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
 }
 
-static void set_default_spectre(struct kvm *kvm)
-{
-	/*
-	 * The default is to expose CSV2 == 1 if the HW isn't affected.
-	 * Although this is a per-CPU feature, we make it global because
-	 * asymmetric systems are just a nuisance.
-	 *
-	 * Userspace can override this as long as it doesn't promise
-	 * the impossible.
-	 */
-	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
-		kvm->arch.pfr0_csv2 = 1;
-	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
-		kvm->arch.pfr0_csv3 = 1;
-}
-
 /**
  * kvm_arch_init_vm - initializes a VM data structure
  * @kvm:	pointer to the KVM struct
@@ -151,9 +135,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->max_vcpus = kvm_arm_default_max_vcpus();
 
-	set_default_spectre(kvm);
-	kvm_arm_init_hypercalls(kvm);
+	kvm_arm_set_default_id_regs(kvm);
+	kvm_arm_init_hypercalls(kvm);
 
 	/*
 	 * Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
index 08d2b004f4b7..0e1988740a65 100644
--- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c
+++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c
@@ -93,10 +93,9 @@ static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
 		       PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
 
 	/* Spectre and Meltdown mitigation in KVM */
-	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
-			       (u64)kvm->arch.pfr0_csv2);
-	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
-			       (u64)kvm->arch.pfr0_csv3);
+	set_mask |= vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] &
+		    (ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
+		     ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
 
 	return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
 }
diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
index e393b5730557..b60ca1058301 100644
--- a/arch/arm64/kvm/id_regs.c
+++ b/arch/arm64/kvm/id_regs.c
@@ -61,12 +61,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 		if (!vcpu_has_sve(vcpu))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
-				  (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
-				  (u64)vcpu->kvm->arch.pfr0_csv3);
 		if (kvm_vgic_global_state.type == VGIC_V3) {
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
 			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -201,6 +195,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 			       const struct sys_reg_desc *rd,
 			       u64 val)
 {
 	u8 csv2, csv3;
+	u64 sval = val;
 
 	/*
 	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -225,8 +220,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	if (val)
 		return -EINVAL;
 
-	vcpu->kvm->arch.pfr0_csv2 = csv2;
-	vcpu->kvm->arch.pfr0_csv3 = csv3;
+	vcpu->kvm->arch.id_regs[IDREG_IDX(reg_to_encoding(rd))] = sval;
 
 	return 0;
 }
@@ -529,4 +523,24 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm)
 		val = read_sanitised_ftr_reg(id);
 		kvm->arch.id_regs[IDREG_IDX(id)] = val;
 	}
+	/*
+	 * The default is to expose CSV2 == 1 if the HW isn't affected.
+	 * Although this is a per-CPU feature, we make it global because
+	 * asymmetric systems are just a nuisance.
+	 *
+	 * Userspace can override this as long as it doesn't promise
+	 * the impossible.
+	 */
+	val = kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)];
+
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = val;
 }
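The clear-then-FIELD_PREP sequence used above is the standard read-modify-
write pattern for a 4-bit ID register field. A standalone sketch of the same
idea for CSV2, assuming the public Arm ARM layout where CSV2 occupies bits
[59:56] of ID_AA64PFR0_EL1; the mask and shift below are written out by hand
rather than generated, so treat them as illustrative:

#include <stdint.h>
#include <stdio.h>

#define CSV2_SHIFT 56
#define CSV2_MASK  (0xfULL << CSV2_SHIFT)

/* Equivalent of: val &= ~MASK; val |= FIELD_PREP(MASK, csv2); */
static uint64_t set_csv2(uint64_t val, uint64_t csv2)
{
	val &= ~CSV2_MASK;			/* clear the old field */
	val |= (csv2 << CSV2_SHIFT) & CSV2_MASK; /* insert the new one */
	return val;
}

int main(void)
{
	uint64_t val = set_csv2(0, 1);
	printf("CSV2 field: %llu\n",
	       (unsigned long long)((val & CSV2_MASK) >> CSV2_SHIFT)); /* 1 */
	return 0;
}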
From patchwork Fri Mar 17 05:06:35 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13178535 Date: Fri, 17 Mar 2023 05:06:35 +0000 In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com> References: <20230317050637.766317-1-jingzhangos@google.com> Message-ID: <20230317050637.766317-5-jingzhangos@google.com> Subject: [PATCH v4 4/6] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang With per-guest ID registers, the PMUver setting from userspace can be stored in its corresponding ID register. No functional change intended. Signed-off-by: Jing Zhang --- arch/arm64/include/asm/kvm_host.h | 11 +++--- arch/arm64/kvm/arm.c | 6 --- arch/arm64/kvm/id_regs.c | 61 +++++++++++++++++++++++++------ include/kvm/arm_pmu.h | 5 ++- 4 files changed, 59 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index e926ea91a73c..102860ba896d 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -218,6 +218,12 @@ struct kvm_arch { #define KVM_ARCH_FLAG_EL1_32BIT 4 /* PSCI SYSTEM_SUSPEND enabled for the guest */ #define KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED 5 + /* + * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF + * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from + * userspace for VCPUs without PMU.
+ */ +#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU 6 unsigned long flags; @@ -230,11 +236,6 @@ struct kvm_arch { cpumask_var_t supported_cpus; - struct { - u8 imp:4; - u8 unimp:4; - } dfr0_pmuver; - /* Hypercall features firmware registers' descriptor */ struct kvm_smccc_features smccc_feat; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index c78d68d011cb..fb2de2cb98cb 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -138,12 +138,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm_arm_set_default_id_regs(kvm); kvm_arm_init_hypercalls(kvm); - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit(); - return 0; err_free_cpumask: diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index b60ca1058301..3a87a3d2390d 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -21,9 +21,12 @@ static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) - return vcpu->kvm->arch.dfr0_pmuver.imp; - - return vcpu->kvm->arch.dfr0_pmuver.unimp; + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)]); + else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags)) + return ID_AA64DFR0_EL1_PMUVer_IMP_DEF; + else + return 0; } static u8 perfmon_to_pmuver(u8 perfmon) @@ -256,10 +259,23 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = pmuver; - else - vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver; + if (valid_pmu) { + mutex_lock(&vcpu->kvm->lock); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= + ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); + + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] &= + ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] |= FIELD_PREP( + ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), pmuver_to_perfmon(pmuver)); + mutex_unlock(&vcpu->kvm->lock); + } else if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) { + set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + } else { + clear_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + } return 0; } @@ -296,10 +312,23 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (val) return -EINVAL; - if (valid_pmu) - vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon); - else - vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon); + if (valid_pmu) { + mutex_lock(&vcpu->kvm->lock); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] &= + ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] |= FIELD_PREP( + ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon); + + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= + ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= FIELD_PREP( + ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), perfmon_to_pmuver(perfmon)); + mutex_unlock(&vcpu->kvm->lock); + } else if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) { + set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + } else { + clear_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + } return 0; } @@ -543,4 +572,14 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm) 
} kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = val; + + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= + ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= + FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); } diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index 628775334d5e..51c7f3e7bdde 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu); /* * Evaluates as true when emulating PMUv3p5, and false otherwise. */ -#define kvm_pmu_is_3p5(vcpu) \ - (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5) +#define kvm_pmu_is_3p5(vcpu) \ + (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), \ + vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)]) >= ID_AA64DFR0_EL1_PMUVer_V3P5) u8 kvm_arm_pmu_get_pmuver_limit(void);
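[Annotation] For clarity, the reworked kvm_pmu_is_3p5() macro above is equivalent to the following function-style sketch (vcpu_pmu_is_3p5() is a hypothetical name used only for illustration):

static bool vcpu_pmu_is_3p5(const struct kvm_vcpu *vcpu)
{
        u64 dfr0 = vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)];
        u8 pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);

        /* PMUv3p5 and anything newer qualifies. */
        return pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5;
}

The only behavioural subtlety is that the macro now reads the live per-guest register value instead of the cached dfr0_pmuver.imp field that this patch removes.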
From patchwork Fri Mar 17 05:06:36 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13178537 Date: Fri, 17 Mar 2023 05:06:36 +0000 In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com> References: <20230317050637.766317-1-jingzhangos@google.com> Message-ID: <20230317050637.766317-6-jingzhangos@google.com> Subject: [PATCH v4 5/6] KVM: arm64: Introduce ID register specific descriptor From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang Introduce an ID-feature-register-specific descriptor that carries ID-register-specific fields and callbacks alongside the corresponding general system register descriptor. New fields will be added to the ID register descriptor later, when they become necessary to support writable ID registers. No functional change intended.
Co-developed-by: Reiji Watanabe Signed-off-by: Reiji Watanabe Signed-off-by: Jing Zhang --- arch/arm64/kvm/id_regs.c | 187 +++++++++++++++++++++++++++----------- arch/arm64/kvm/sys_regs.c | 2 +- arch/arm64/kvm/sys_regs.h | 1 + 3 files changed, 138 insertions(+), 52 deletions(-) diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 3a87a3d2390d..9956c99d20f7 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -18,6 +18,10 @@ #include "sys_regs.h" +struct id_reg_desc { + const struct sys_reg_desc reg_desc; +}; + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -334,21 +338,25 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ +#define ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ + }, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ -#define AA32_ID_SANITISED(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = aa32_id_visibility, \ +#define AA32_ID_SANITISED(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = aa32_id_visibility, \ + }, \ } /* @@ -356,12 +364,14 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2 * (1 <= crm < 8, 0 <= Op2 < 8). */ -#define ID_UNALLOCATED(crm, op2) { \ - Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility \ +#define ID_UNALLOCATED(crm, op2) { \ + .reg_desc = { \ + Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility \ + }, \ } /* @@ -369,15 +379,17 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, * For now, these are exposed just like unallocated ID regs: they appear * RAZ for the guest. */ -#define ID_HIDDEN(name) { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = raz_visibility, \ +#define ID_HIDDEN(name) { \ + .reg_desc = { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = raz_visibility, \ + }, \ } -static const struct sys_reg_desc id_reg_descs[] = { +static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[]. 
@@ -387,9 +399,13 @@ static const struct sys_reg_desc id_reg_descs[] = { /* CRm=1 */ AA32_ID_SANITISED(ID_PFR0_EL1), AA32_ID_SANITISED(ID_PFR1_EL1), - { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_dfr0_el1, - .visibility = aa32_id_visibility, }, + { .reg_desc = { + SYS_DESC(SYS_ID_DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_dfr0_el1, + .visibility = aa32_id_visibility, }, + }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), AA32_ID_SANITISED(ID_MMFR1_EL1), @@ -418,8 +434,12 @@ static const struct sys_reg_desc id_reg_descs[] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64PFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64pfr0_el1, }, + }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), ID_UNALLOCATED(4, 3), @@ -429,8 +449,12 @@ static const struct sys_reg_desc id_reg_descs[] = { ID_UNALLOCATED(4, 7), /* CRm=5 */ - { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + { .reg_desc = { + SYS_DESC(SYS_ID_AA64DFR0_EL1), + .access = access_id_reg, + .get_user = get_id_reg, + .set_user = set_id_aa64dfr0_el1, }, + }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), ID_UNALLOCATED(5, 3), @@ -469,12 +493,12 @@ static const struct sys_reg_desc id_reg_descs[] = { */ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params) { - const struct sys_reg_desc *r; + u32 id; - r = find_reg(params, id_reg_descs, ARRAY_SIZE(id_reg_descs)); + id = reg_to_encoding(params); - if (likely(r)) { - perform_access(vcpu, params, r); + if (likely(is_id_reg(id))) { + perform_access(vcpu, params, &id_reg_descs[IDREG_IDX(id)].reg_desc); } else { print_sys_reg_msg(params, "Unsupported guest id_reg access at: %lx [%08lx]\n", @@ -491,38 +515,102 @@ void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu) unsigned long i; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) - if (id_reg_descs[i].reset) - id_reg_descs[i].reset(vcpu, &id_reg_descs[i]); + if (id_reg_descs[i].reg_desc.reset) + id_reg_descs[i].reg_desc.reset(vcpu, &id_reg_descs[i].reg_desc); } int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return kvm_sys_reg_get_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + u32 id; + + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + id = reg_to_encoding(¶ms); + + if (!is_id_reg(id)) + return -ENOENT; + + r = &id_reg_descs[IDREG_IDX(id)].reg_desc; + if (r->get_user) { + ret = (r->get_user)(vcpu, r, &val); + } else { + ret = 0; + val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)]; + } + + if (!ret) + ret = put_user(val, uaddr); + + return ret; } int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { - return kvm_sys_reg_set_user(vcpu, reg, - id_reg_descs, ARRAY_SIZE(id_reg_descs)); + u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; + const struct sys_reg_desc *r; + struct sys_reg_params params; + u64 val; + int ret; + u32 id; + + if (!index_to_params(reg->id, ¶ms)) + return -ENOENT; + id = reg_to_encoding(¶ms); + + if (!is_id_reg(id)) + return -ENOENT; + + if (get_user(val, uaddr)) + return -EFAULT; + + r = 
&id_reg_descs[IDREG_IDX(id)].reg_desc; + + if (sysreg_user_write_ignore(vcpu, r)) + return 0; + + if (r->set_user) { + ret = (r->set_user)(vcpu, r, val); + } else { + WARN_ONCE(1, "ID register set_user callback is NULL\n"); + ret = 0; + } + + return ret; } bool kvm_arm_check_idreg_table(void) { - return check_sysreg_table(id_reg_descs, ARRAY_SIZE(id_reg_descs), false); + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { + const struct sys_reg_desc *r = &id_reg_descs[i].reg_desc; + + if (!is_id_reg(reg_to_encoding(r))) { + kvm_err("id_reg table %pS entry %d not set correctly\n", + &id_reg_descs[i].reg_desc, i); + return false; + } + } + + return true; } int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) { - const struct sys_reg_desc *i2, *end2; + const struct id_reg_desc *i2, *end2; unsigned int total = 0; int err; i2 = id_reg_descs; end2 = id_reg_descs + ARRAY_SIZE(id_reg_descs); - while (i2 != end2) { - err = walk_one_sys_reg(vcpu, i2++, &uind, &total); + for (; i2 != end2; i2++) { + err = walk_one_sys_reg(vcpu, &(i2->reg_desc), &uind, &total); if (err) return err; } @@ -540,12 +628,9 @@ void kvm_arm_set_default_id_regs(struct kvm *kvm) u64 val; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { - id = reg_to_encoding(&id_reg_descs[i]); - if (WARN_ON_ONCE(!is_id_reg(id))) - /* Shouldn't happen */ - continue; + id = reg_to_encoding(&id_reg_descs[i].reg_desc); - if (id_reg_descs[i].visibility == raz_visibility) + if (id_reg_descs[i].reg_desc.visibility == raz_visibility) /* Hidden or reserved ID register */ continue; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 3243c924527e..608a0378bdae 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2519,7 +2519,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu) * Userspace API *****************************************************************************/ -static bool index_to_params(u64 id, struct sys_reg_params *params) +bool index_to_params(u64 id, struct sys_reg_params *params) { switch (id & KVM_REG_SIZE_MASK) { case KVM_REG_SIZE_U64: diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index ee136ba28fa5..8fd0020c67ca 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -239,6 +239,7 @@ static inline bool is_id_reg(u32 id) void perform_access(struct kvm_vcpu *vcpu, struct sys_reg_params *params, const struct sys_reg_desc *r); +bool index_to_params(u64 id, struct sys_reg_params *params); const struct sys_reg_desc *get_reg_by_id(u64 id, const struct sys_reg_desc table[], unsigned int num);
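[Annotation] Both userspace accessors in the patch above resolve the descriptor the same way; a condensed sketch of that shared lookup path, assuming the index_to_params()/is_id_reg()/IDREG_IDX() helpers declared in sys_regs.h (id_reg_lookup() is a hypothetical name):

static const struct sys_reg_desc *id_reg_lookup(u64 user_reg_id)
{
        struct sys_reg_params params;
        u32 id;

        if (!index_to_params(user_reg_id, &params))
                return NULL;

        id = reg_to_encoding(&params);
        if (!is_id_reg(id))
                return NULL;

        /* Direct index into id_reg_descs[], no linear table walk needed. */
        return &id_reg_descs[IDREG_IDX(id)].reg_desc;
}

This is also why emulate_id_reg() no longer calls find_reg(): an ID register encoding now maps one-to-one onto an array index.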
From patchwork Fri Mar 17 05:06:37 2023 X-Patchwork-Submitter: Jing Zhang X-Patchwork-Id: 13178538 Date: Fri, 17 Mar 2023 05:06:37 +0000 In-Reply-To: <20230317050637.766317-1-jingzhangos@google.com> References: <20230317050637.766317-1-jingzhangos@google.com> Message-ID: <20230317050637.766317-7-jingzhangos@google.com> Subject: [PATCH v4 6/6] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3 From: Jing Zhang To: KVM , KVMARM , ARMLinux , Marc Zyngier , Oliver Upton Cc: Will Deacon , Paolo Bonzini , James Morse , Alexandru Elisei , Suzuki K Poulose , Fuad Tabba , Reiji Watanabe , Ricardo Koller , Raghavendra Rao Ananta , Jing Zhang
Save the KVM sanitised ID register value in the ID descriptor (kvm_sys_val). Add an init callback for every ID register to set up kvm_sys_val. All per-VCPU sanitisations are still handled on the fly during ID register reads and writes from userspace. An arm64_ftr_bits array is used to indicate which feature fields are writable. Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon to use the utilities introduced by the ID register descriptor. No functional change intended. Co-developed-by: Reiji Watanabe Signed-off-by: Reiji Watanabe Signed-off-by: Jing Zhang --- arch/arm64/include/asm/cpufeature.h | 25 +++ arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kernel/cpufeature.c | 26 +-- arch/arm64/kvm/arm.c | 2 +- arch/arm64/kvm/id_regs.c | 325 ++++++++++++++++++++-------- arch/arm64/kvm/sys_regs.c | 3 +- arch/arm64/kvm/sys_regs.h | 2 +- 7 files changed, 261 insertions(+), 124 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index fc2c739f48f1..493ec530eefc 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -64,6 +64,30 @@ struct arm64_ftr_bits { s64 safe_val; /* safe value for FTR_EXACT features */ }; +#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + { \ + .sign = SIGNED, \ + .visible = VISIBLE, \ + .strict = STRICT, \ + .type = TYPE, \ + .shift = SHIFT, \ + .width = WIDTH, \ + .safe_val = SAFE_VAL, \ + } + +/* Define a feature with unsigned values */ +#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + __ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) + +/* Define a feature with a signed value */ +#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ + __ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) + +#define ARM64_FTR_END \ + { \ + .width = 0, \ + } + /* * Describe the early feature override to the core override code: * @@ -911,6 +935,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1) return 8; } +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur); struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id); extern struct arm64_ftr_override id_aa64mmfr1_override; diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 102860ba896d..aa83dd79e7ff 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1013,7 +1013,7 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); -void kvm_arm_set_default_id_regs(struct kvm *kvm); +void kvm_arm_init_id_regs(struct kvm *kvm); /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 23bd2a926b74..e18848ee4b98 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -139,30 +139,6 @@ void dump_cpu_features(void) pr_emerg("0x%*pb\n", ARM64_NCAPS, &cpu_hwcaps); } -#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE,
SHIFT, WIDTH, SAFE_VAL) \ - { \ - .sign = SIGNED, \ - .visible = VISIBLE, \ - .strict = STRICT, \ - .type = TYPE, \ - .shift = SHIFT, \ - .width = WIDTH, \ - .safe_val = SAFE_VAL, \ - } - -/* Define a feature with unsigned values */ -#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ - __ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) - -/* Define a feature with a signed value */ -#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \ - __ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) - -#define ARM64_FTR_END \ - { \ - .width = 0, \ - } - static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap); static bool __system_matches_cap(unsigned int n); @@ -790,7 +766,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg, return reg; } -static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, +s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur) { s64 ret = 0; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index fb2de2cb98cb..e539d9ca9d01 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -135,7 +135,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) /* The maximum number of VCPUs is limited by the host's GIC model */ kvm->max_vcpus = kvm_arm_default_max_vcpus(); - kvm_arm_set_default_id_regs(kvm); + kvm_arm_init_id_regs(kvm); kvm_arm_init_hypercalls(kvm); return 0; diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c index 9956c99d20f7..726b810b6e06 100644 --- a/arch/arm64/kvm/id_regs.c +++ b/arch/arm64/kvm/id_regs.c @@ -18,10 +18,88 @@ #include "sys_regs.h" +/* + * Number of entries in id_reg_desc's ftr_bits[] (the number of 4-bit fields + * in a 64-bit register, plus one terminator entry). + */ +#define FTR_FIELDS_NUM 17 + struct id_reg_desc { const struct sys_reg_desc reg_desc; + /* + * KVM sanitised ID register value. + * It is the default value for the per-VM emulated ID register. + */ + u64 kvm_sys_val; + /* + * Used to validate the ID register values with arm64_check_features(). + * The array must be terminated by an item whose width field is zero, + * as that is what arm64_check_features() expects. + * Only feature bits defined in this array are writable. + */ + struct arm64_ftr_bits ftr_bits[FTR_FIELDS_NUM]; + + /* + * init() is used to set up the KVM sanitised value + * stored in kvm_sys_val. + */ + void (*init)(struct id_reg_desc *idr); }; +static struct id_reg_desc id_reg_descs[]; + +/** + * arm64_check_features() - Check if a feature register value constitutes + * a subset of features indicated by @limit. + * + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by + * an item whose width field is zero. + * @val: The feature register value to check + * @limit: The limit value of the feature register + * + * This function will check if each feature field of @val is the "safe" value + * against @limit based on @ftrp[], each of which specifies the target field + * (shift, width), whether or not the field is for a signed value (sign), + * how the field is determined to be "safe" (type), and the safe value + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits + * won't be used by this function. If a field value in @val is the same + * as the one in @limit, it is always considered the safe value regardless + * of the type.
For register fields that are not in @ftrp[], only the value + * in @limit is considered the safe value. + * + * Return: 0 if all the fields are safe. Otherwise, return negative errno. + */ +static int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit) +{ + u64 mask = 0; + + for (; ftrp->width; ftrp++) { + s64 f_val, f_lim, safe_val; + + f_val = arm64_ftr_value(ftrp, val); + f_lim = arm64_ftr_value(ftrp, limit); + mask |= arm64_ftr_mask(ftrp); + + if (f_val == f_lim) + safe_val = f_val; + else + safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim); + + if (safe_val != f_val) + return -E2BIG; + } + + /* + * For fields that are not indicated in ftrp, values in limit are the + * safe values. + */ + if ((val & ~mask) != (limit & ~mask)) + return -E2BIG; + + return 0; +} + static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu) { if (kvm_vcpu_has_pmu(vcpu)) @@ -67,7 +145,6 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) case SYS_ID_AA64PFR0_EL1: if (!vcpu_has_sve(vcpu)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); if (kvm_vgic_global_state.type == VGIC_V3) { val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); @@ -94,15 +171,10 @@ u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); break; case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); /* Set PMUver to the required version */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), vcpu_pmuver(vcpu)); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); break; case SYS_ID_DFR0_EL1: val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); @@ -161,9 +233,15 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd)) - return -EINVAL; + int ret; + int id = reg_to_encoding(rd); + + ret = arm64_check_features(id_reg_descs[IDREG_IDX(id)].ftr_bits, val, + id_reg_descs[IDREG_IDX(id)].kvm_sys_val); + if (ret) + return ret; + + vcpu->kvm->arch.id_regs[IDREG_IDX(id)] = val; return 0; } @@ -197,12 +275,47 @@ static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu, return id_visibility(vcpu, r); } +static void init_id_reg(struct id_reg_desc *idr) +{ + idr->kvm_sys_val = read_sanitised_ftr_reg(reg_to_encoding(&idr->reg_desc)); +} + +static void init_id_aa64pfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); + + val = read_sanitised_ftr_reg(id); + /* + * The default is to expose CSV2 == 1 if the HW isn't affected. + * Although this is a per-CPU feature, we make it global because + * asymmetric systems are just a nuisance. + * + * Userspace can override this as long as it doesn't promise + * the impossible. 
+ */ + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); + } + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); + } + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU); + + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1); + + idr->kvm_sys_val = val; +} + static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 csv2, csv3; - u64 sval = val; /* * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as @@ -220,16 +333,29 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) return -EINVAL; - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | - ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3)); - if (val) - return -EINVAL; + return set_id_reg(vcpu, rd, val); +} - vcpu->kvm->arch.id_regs[IDREG_IDX(reg_to_encoding(rd))] = sval; +static void init_id_aa64dfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); - return 0; + val = read_sanitised_ftr_reg(id); + /* Limit debug to ARMv8.0 */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), + kvm_arm_pmu_get_pmuver_limit()); + /* Hide SPE from guests */ + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer); + + idr->kvm_sys_val = val; } static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, @@ -238,6 +364,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, { u8 pmuver, host_pmuver; bool valid_pmu; + int ret; host_pmuver = kvm_arm_pmu_get_pmuver_limit(); @@ -257,39 +384,58 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PMUver, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - if (val) - return -EINVAL; - if (valid_pmu) { mutex_lock(&vcpu->kvm->lock); - vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= - ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), pmuver); + ret = set_id_reg(vcpu, rd, val); + if (ret) { + mutex_unlock(&vcpu->kvm->lock); + return ret; + } vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] |= FIELD_PREP( ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), pmuver_to_perfmon(pmuver)); mutex_unlock(&vcpu->kvm->lock); - } else if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) { - set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); } else { + /* We can only differ with PMUver, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); + if (val) + return -EINVAL; + + if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) +
set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + else + clear_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + } return 0; } +static void init_id_dfr0_el1(struct id_reg_desc *idr) +{ + u64 val; + u32 id = reg_to_encoding(&idr->reg_desc); + + val = read_sanitised_ftr_reg(id); + /* + * Initialise the default PMUver before there is a chance to + * create an actual PMU. + */ + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), + pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit())); + + idr->kvm_sys_val = val; +} + static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val) { u8 perfmon, host_perfmon; bool valid_pmu; + int ret; host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit()); @@ -310,42 +456,46 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, if (kvm_vcpu_has_pmu(vcpu) != valid_pmu) return -EINVAL; - /* We can only differ with PerfMon, and anything else is an error */ - val ^= read_id_reg(vcpu, rd); - val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - if (val) - return -EINVAL; - if (valid_pmu) { mutex_lock(&vcpu->kvm->lock); - vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] &= - ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); - vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_DFR0_EL1)] |= FIELD_PREP( - ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), perfmon); + ret = set_id_reg(vcpu, rd, val); + if (ret) { + mutex_unlock(&vcpu->kvm->lock); + return ret; + } vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); vcpu->kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= FIELD_PREP( ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), perfmon_to_pmuver(perfmon)); mutex_unlock(&vcpu->kvm->lock); - } else if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) { - set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); } else { + /* We can only differ with PerfMon, and anything else is an error */ + val ^= read_id_reg(vcpu, rd); + val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon); + if (val) + return -EINVAL; + + if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) + set_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); + else + clear_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags); } return 0; } /* sys_reg_desc initialiser for known cpufeature ID registers */ +#define SYS_DESC_SANITISED(name) { \ + SYS_DESC(SYS_##name), \ + .access = access_id_reg, \ + .get_user = get_id_reg, \ + .set_user = set_id_reg, \ + .visibility = id_visibility, \ +} + #define ID_SANITISED(name) { \ - .reg_desc = { \ - SYS_DESC(SYS_##name), \ - .access = access_id_reg, \ - .get_user = get_id_reg, \ - .set_user = set_id_reg, \ - .visibility = id_visibility, \ - }, \ + .reg_desc = SYS_DESC_SANITISED(name), \ + .ftr_bits = { ARM64_FTR_END, }, \ + .init = init_id_reg, \ } /* sys_reg_desc initialiser for known cpufeature ID registers */ @@ -357,6 +507,8 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = aa32_id_visibility, \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ + .init = init_id_reg, \ } /* @@ -372,6 +524,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = raz_visibility \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ } /* @@ -387,9 +540,10 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu, .set_user = set_id_reg, \ .visibility = raz_visibility, \ }, \ + .ftr_bits = { ARM64_FTR_END, }, \ } -static const struct id_reg_desc
id_reg_descs[KVM_ARM_ID_REG_NUM] = { +static struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { /* * ID regs: all ID_SANITISED() entries here must have corresponding * entries in arm64_ftr_regs[]. @@ -405,6 +559,11 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .get_user = get_id_reg, .set_user = set_id_dfr0_el1, .visibility = aa32_id_visibility, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_DFR0_EL1_PerfMon_SHIFT, ID_DFR0_EL1_PerfMon_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_dfr0_el1, }, ID_HIDDEN(ID_AFR0_EL1), AA32_ID_SANITISED(ID_MMFR0_EL1), @@ -439,6 +598,13 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64PFR0_EL1_CSV2_SHIFT, ID_AA64PFR0_EL1_CSV2_WIDTH, 0), + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64PFR0_EL1_CSV3_SHIFT, ID_AA64PFR0_EL1_CSV3_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_aa64pfr0_el1, }, ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4, 2), @@ -454,6 +620,11 @@ static const struct id_reg_desc id_reg_descs[KVM_ARM_ID_REG_NUM] = { .access = access_id_reg, .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, }, + .ftr_bits = { + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, + ID_AA64DFR0_EL1_PMUVer_SHIFT, ID_AA64DFR0_EL1_PMUVer_WIDTH, 0), + ARM64_FTR_END, }, + .init = init_id_aa64dfr0_el1, }, ID_SANITISED(ID_AA64DFR1_EL1), ID_UNALLOCATED(5, 2), @@ -583,7 +754,7 @@ int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return ret; } -bool kvm_arm_check_idreg_table(void) +bool kvm_arm_idreg_table_init(void) { unsigned int i; @@ -595,6 +766,9 @@ bool kvm_arm_check_idreg_table(void) &id_reg_descs[i].reg_desc, i); return false; } + + if (id_reg_descs[i].init) + id_reg_descs[i].init(&id_reg_descs[i]); } return true; @@ -618,53 +792,16 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind) } /* - * Set the guest's ID registers that are defined in id_reg_descs[] - * with ID_SANITISED() to the host's sanitized value. + * Initialize the guest's ID registers with the KVM sanitised values that were + * set up during ID register descriptor initialization. */ -void kvm_arm_set_default_id_regs(struct kvm *kvm) +void kvm_arm_init_id_regs(struct kvm *kvm) { int i; u32 id; - u64 val; for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) { id = reg_to_encoding(&id_reg_descs[i].reg_desc); - - if (id_reg_descs[i].reg_desc.visibility == raz_visibility) - /* Hidden or reserved ID register */ - continue; - - val = read_sanitised_ftr_reg(id); - kvm->arch.id_regs[IDREG_IDX(id)] = val; + kvm->arch.id_regs[IDREG_IDX(id)] = id_reg_descs[i].kvm_sys_val; } - /* - * The default is to expose CSV2 == 1 if the HW isn't affected. - * Although this is a per-CPU feature, we make it global because - * asymmetric systems are just a nuisance. - * - * Userspace can override this as long as it doesn't promise - * the impossible.
- */ - val = kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)]; - - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1); - } - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1); - } - - kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = val; - - /* - * Initialise the default PMUver before there is a chance to - * create an actual PMU. - */ - kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] &= - ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer); - kvm->arch.id_regs[IDREG_IDX(SYS_ID_AA64DFR0_EL1)] |= - FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), - kvm_arm_pmu_get_pmuver_limit()); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 608a0378bdae..61b9adfd5d5e 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2912,14 +2912,13 @@ int __init kvm_sys_reg_table_init(void) unsigned int i; /* Make sure tables are unique and in order. */ - valid &= kvm_arm_check_idreg_table(); valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false); valid &= check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs), true); valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true); valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true); valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true); valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false); - + valid &= kvm_arm_idreg_table_init(); if (!valid) return -EINVAL; diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 8fd0020c67ca..55484d1615be 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -260,7 +260,7 @@ int emulate_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *params); void kvm_arm_reset_id_regs(struct kvm_vcpu *vcpu); int kvm_arm_get_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_id_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -bool kvm_arm_check_idreg_table(void); +bool kvm_arm_idreg_table_init(void); int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind); u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
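[Annotation] Taken together, the descriptor machinery in this final patch reduces every userspace write to one validation call. A sketch of the resulting contract, built only from functions introduced above (idreg_validate_write() is a hypothetical wrapper used for illustration):

/* Sketch: accept a userspace-proposed value only if it is a safe subset. */
static int idreg_validate_write(const struct id_reg_desc *idr, u64 new_val)
{
        /*
         * Fields listed in idr->ftr_bits[] may differ from kvm_sys_val as
         * long as arm64_ftr_safe_value() deems them safe (e.g. FTR_LOWER_SAFE
         * fields may only be lowered); every other field must match
         * kvm_sys_val exactly, otherwise arm64_check_features() returns
         * -E2BIG.
         */
        return arm64_check_features(idr->ftr_bits, new_val, idr->kvm_sys_val);
}

So for ID_AA64DFR0_EL1, whose ftr_bits[] lists only PMUVer as FTR_LOWER_SAFE, userspace may lower the PMU version below KVM's limit but cannot raise it or touch any other field.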