From patchwork Mon Jul 17 15:27:18 2023
From: Jing Zhang
Date: Mon, 17 Jul 2023 15:27:18 +0000
Subject: [PATCH v6 1/5] KVM: arm64: Use guest ID register values for the sake of emulation
Message-ID: <20230717152722.1837864-2-jingzhangos@google.com>
In-Reply-To: <20230717152722.1837864-1-jingzhangos@google.com>
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Suraj Jitindar Singh, Cornelia Huck, Jing Zhang

Since KVM now supports per-VM ID registers, use the per-VM ID register
values when emulating DBGDIDR and LORegion.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bd3431823ec5..c1a5ec1a016e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -379,7 +379,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                           const struct sys_reg_desc *r)
 {
-       u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+       u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1);
        u32 sr = reg_to_encoding(r);

        if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
@@ -2429,8 +2429,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
        if (p->is_write) {
                return ignore_write(vcpu, p);
        } else {
-               u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-               u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+               u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+               u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
                u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT);

                p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
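
The change above swaps the system-wide sanitised register value for the value
stored in the VM's own ID-register table. For readers unfamiliar with the
pattern, here is a small userspace-style sketch of the 4-bit field extraction
that trap_loregion() performs on ID_AA64MMFR1_EL1.LO; the helper name and the
sample value are illustrative only, not kernel code.

    #include <stdint.h>
    #include <stdio.h>

    /* ID_AA64MMFR1_EL1.LO (LORegions support) lives in bits [19:16]. */
    #define ID_AA64MMFR1_EL1_LO_SHIFT 16

    /* ID register feature fields are 4 bits wide; extract one by shift. */
    static unsigned int extract_field(uint64_t reg, unsigned int shift)
    {
            return (reg >> shift) & 0xf;
    }

    int main(void)
    {
            /* Example per-VM value with LO = 1 (LORegions implemented). */
            uint64_t mmfr1 = 1ULL << ID_AA64MMFR1_EL1_LO_SHIFT;

            if (extract_field(mmfr1, ID_AA64MMFR1_EL1_LO_SHIFT))
                    printf("LORegions advertised to this guest\n");
            else
                    printf("LORegions hidden from this guest\n");
            return 0;
    }
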
From patchwork Mon Jul 17 15:27:19 2023
From: Jing Zhang
Date: Mon, 17 Jul 2023 15:27:19 +0000
Subject: [PATCH v6 2/5] KVM: arm64: Reject attempts to set invalid debug arch version
Message-ID: <20230717152722.1837864-3-jingzhangos@google.com>
In-Reply-To: <20230717152722.1837864-1-jingzhangos@google.com>
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Suraj Jitindar Singh, Cornelia Huck, Jing Zhang

From: Oliver Upton

The debug architecture is mandatory in ARMv8, so KVM should not allow
userspace to configure a vCPU with less than that. Of course, this isn't
handled elegantly by the generic ID register plumbing, as the respective ID
register fields have a nonzero starting value. Add an explicit check for
debug versions less than v8 of the architecture.

Signed-off-by: Oliver Upton
Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c1a5ec1a016e..053d8057ff1e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1216,8 +1216,14 @@ static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
        /* Some features have different safe value type in KVM than host features */
        switch (id) {
        case SYS_ID_AA64DFR0_EL1:
-               if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT)
+               switch (kvm_ftr.shift) {
+               case ID_AA64DFR0_EL1_PMUVer_SHIFT:
                        kvm_ftr.type = FTR_LOWER_SAFE;
+                       break;
+               case ID_AA64DFR0_EL1_DebugVer_SHIFT:
+                       kvm_ftr.type = FTR_LOWER_SAFE;
+                       break;
+               }
                break;
        case SYS_ID_DFR0_EL1:
                if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT)
@@ -1469,14 +1475,22 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
        return val;
 }

+#define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit)                        \
+({                                                                             \
+       u64 __f_val = FIELD_GET(reg##_##field##_MASK, val);                     \
+       (val) &= ~reg##_##field##_MASK;                                         \
+       (val) |= FIELD_PREP(reg##_##field##_MASK,                               \
+                           min(__f_val, (u64)reg##_##field##_##limit));        \
+       (val);                                                                  \
+})
+
 static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
                                           const struct sys_reg_desc *rd)
 {
        u64 val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);

        /* Limit debug to ARMv8.0 */
-       val &= ~ID_AA64DFR0_EL1_DebugVer_MASK;
-       val |= SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DebugVer, IMP);
+       val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8);

        /*
         * Only initialize the PMU version if the vCPU was configured with one.
@@ -1496,6 +1510,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
                               const struct sys_reg_desc *rd,
                               u64 val)
 {
+       u8 debugver = SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, val);
        u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val);

        /*
@@ -1515,6 +1530,13 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
        if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
                val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;

+       /*
+        * ID_AA64DFR0_EL1.DebugVer is one of those awkward fields with a
+        * nonzero minimum safe value.
+        */
+       if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP)
+               return -EINVAL;
+
        return set_id_reg(vcpu, rd, val);
 }

@@ -1528,6 +1550,8 @@ static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
        if (kvm_vcpu_has_pmu(vcpu))
                val |= SYS_FIELD_PREP(ID_DFR0_EL1, PerfMon, perfmon);

+       val = ID_REG_LIMIT_FIELD_ENUM(val, ID_DFR0_EL1, CopDbg, Debugv8p8);
+
        return val;
 }

@@ -1536,6 +1560,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
                           u64 val)
 {
        u8 perfmon = SYS_FIELD_GET(ID_DFR0_EL1, PerfMon, val);
+       u8 copdbg = SYS_FIELD_GET(ID_DFR0_EL1, CopDbg, val);

        if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) {
                val &= ~ID_DFR0_EL1_PerfMon_MASK;
@@ -1551,6 +1576,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
        if (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)
                return -EINVAL;

+       if (copdbg < ID_DFR0_EL1_CopDbg_Armv8)
+               return -EINVAL;
+
        return set_id_reg(vcpu, rd, val);
 }
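
The ID_REG_LIMIT_FIELD_ENUM() macro introduced here clamps a single field of
an ID register to an upper bound while leaving the rest of the register
untouched. Below is a standalone sketch of the same operation; field_get()
and field_put() stand in for the kernel's FIELD_GET()/FIELD_PREP(), and the
mask and limit values are examples rather than architectural constants.

    #include <stdint.h>
    #include <stdio.h>

    /* Example mask: a 4-bit field at bits [3:0]. */
    #define EXAMPLE_FIELD_MASK 0xfULL

    static uint64_t field_get(uint64_t mask, uint64_t reg)
    {
            return (reg & mask) >> __builtin_ctzll(mask);
    }

    static uint64_t field_put(uint64_t mask, uint64_t val)
    {
            return (val << __builtin_ctzll(mask)) & mask;
    }

    /* Clamp one field to 'limit'; keep all other bits of 'reg' as they were. */
    static uint64_t limit_field(uint64_t reg, uint64_t mask, uint64_t limit)
    {
            uint64_t f = field_get(mask, reg);

            reg &= ~mask;
            return reg | field_put(mask, f < limit ? f : limit);
    }

    int main(void)
    {
            uint64_t reg = 0xa;     /* field currently 0xa */

            /* Prints 0x8: the field was clamped, nothing else changed. */
            printf("%#llx\n",
                   (unsigned long long)limit_field(reg, EXAMPLE_FIELD_MASK, 0x8));
            return 0;
    }
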
From patchwork Mon Jul 17 15:27:20 2023
From: Jing Zhang
Date: Mon, 17 Jul 2023 15:27:20 +0000
Subject: [PATCH v6 3/5] KVM: arm64: Enable writable for ID_AA64PFR0_EL1
Message-ID: <20230717152722.1837864-4-jingzhangos@google.com>
In-Reply-To: <20230717152722.1837864-1-jingzhangos@google.com>
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Suraj Jitindar Singh, Cornelia Huck, Jing Zhang

All valid fields in ID_AA64PFR0_EL1 are writable from userspace with this
change.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 053d8057ff1e..fab525508510 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2042,7 +2042,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
          .get_user = get_id_reg,
          .set_user = set_id_reg,
          .reset = read_sanitised_id_aa64pfr0_el1,
-         .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },
+         .val = GENMASK(63, 0), },
        ID_SANITISED(ID_AA64PFR1_EL1),
        ID_UNALLOCATED(4,2),
        ID_UNALLOCATED(4,3),
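
With the descriptor's .val mask widened to GENMASK(63, 0), a VMM can propose a
new ID_AA64PFR0_EL1 value through the ordinary one-register interface. The
fragment below is a hedged sketch of such a write, assuming the arm64 uapi
headers and an already-created vCPU file descriptor; error handling is reduced
to returning the ioctl() result. The usual pattern is a read-modify-write:
fetch the current value with KVM_GET_ONE_REG first, then change only the
fields of interest.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int set_id_aa64pfr0(int vcpu_fd, uint64_t new_val)
    {
            struct kvm_one_reg reg = {
                    /* ID_AA64PFR0_EL1 is op0=3, op1=0, CRn=0, CRm=4, op2=0. */
                    .id   = ARM64_SYS_REG(3, 0, 0, 4, 0),
                    .addr = (unsigned long)&new_val,
            };

            /* KVM rejects values it cannot honour (e.g. EINVAL or EBUSY). */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }
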
From patchwork Mon Jul 17 15:27:21 2023
From: Jing Zhang
Date: Mon, 17 Jul 2023 15:27:21 +0000
Subject: [PATCH v6 4/5] KVM: arm64: Enable writable for ID_AA64MMFR{0, 1, 2, 3}_EL1
Message-ID: <20230717152722.1837864-5-jingzhangos@google.com>
In-Reply-To: <20230717152722.1837864-1-jingzhangos@google.com>
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Suraj Jitindar Singh, Cornelia Huck, Jing Zhang

Make ID_AA64MMFR{0, 1, 2, 3}_EL1 writable from userspace, and add a macro for
defining ID registers that are writable in their entirety.

Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fab525508510..5fbf14320ad9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1346,9 +1346,6 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
                val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
                val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS);
                break;
-       case SYS_ID_AA64MMFR2_EL1:
-               val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
-               break;
        case SYS_ID_MMFR4_EL1:
                val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
                break;
@@ -1582,6 +1579,18 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
        return set_id_reg(vcpu, rd, val);
 }

+static u64 read_sanitised_id_aa64mmfr2_el1(struct kvm_vcpu *vcpu,
+                                           const struct sys_reg_desc *rd)
+{
+       u64 val;
+       u32 id = reg_to_encoding(rd);
+
+       val = read_sanitised_ftr_reg(id);
+       val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
+
+       return val;
+}
+
 /*
  * cpufeature ID register user accessors
  *
@@ -1856,6 +1865,16 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
        .val = 0,                                       \
 }

+#define ID_SANITISED_WRITABLE(name) {                  \
+       SYS_DESC(SYS_##name),                           \
+       .access = access_id_reg,                        \
+       .get_user = get_id_reg,                         \
+       .set_user = set_id_reg,                         \
+       .visibility = id_visibility,                    \
+       .reset = kvm_read_sanitised_id_reg,             \
+       .val = GENMASK(63, 0),                          \
+}
+
 /* sys_reg_desc initialiser for known cpufeature ID registers */
 #define AA32_ID_SANITISED(name) {                      \
        SYS_DESC(SYS_##name),                           \
@@ -2077,10 +2096,15 @@ static const struct sys_reg_desc sys_reg_descs[] = {
        ID_UNALLOCATED(6,7),

        /* CRm=7 */
-       ID_SANITISED(ID_AA64MMFR0_EL1),
-       ID_SANITISED(ID_AA64MMFR1_EL1),
-       ID_SANITISED(ID_AA64MMFR2_EL1),
-       ID_SANITISED(ID_AA64MMFR3_EL1),
+       ID_SANITISED_WRITABLE(ID_AA64MMFR0_EL1),
+       ID_SANITISED_WRITABLE(ID_AA64MMFR1_EL1),
+       { SYS_DESC(SYS_ID_AA64MMFR2_EL1),
+         .access = access_id_reg,
+         .get_user = get_id_reg,
+         .set_user = set_id_reg,
+         .reset = read_sanitised_id_aa64mmfr2_el1,
+         .val = GENMASK(63, 0), },
+       ID_SANITISED_WRITABLE(ID_AA64MMFR3_EL1),
        ID_UNALLOCATED(7,4),
        ID_UNALLOCATED(7,5),
        ID_UNALLOCATED(7,6),
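
The .val field in these descriptors acts as a writable-bits mask: GENMASK(63, 0)
declares every bit user-writable, with the per-field safety checks still applied
when a write actually happens. A toy model of the mask's meaning, under that
assumption, is shown below; it mirrors the intent rather than the exact kernel
check.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A write may only differ from the current value in bits covered by the
     * writable mask; flipping anything outside the mask is rejected outright.
     */
    static bool write_within_mask(uint64_t cur, uint64_t proposed, uint64_t writable)
    {
            return ((cur ^ proposed) & ~writable) == 0;
    }
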
From patchwork Mon Jul 17 15:27:22 2023
From: Jing Zhang
Date: Mon, 17 Jul 2023 15:27:22 +0000
Subject: [PATCH v6 5/5] KVM: arm64: selftests: Test for setting ID register from userspace
Message-ID: <20230717152722.1837864-6-jingzhangos@google.com>
In-Reply-To: <20230717152722.1837864-1-jingzhangos@google.com>
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Suraj Jitindar Singh, Cornelia Huck, Jing Zhang

Add a test to verify that setting ID registers from userspace is handled
correctly by KVM.

Signed-off-by: Jing Zhang
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/set_id_regs.c       | 163 ++++++++++++++++++
 2 files changed, 164 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/set_id_regs.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c692cc86e7da..87ceadc1292a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -144,6 +144,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
 TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_test
+TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs
 TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
diff --git a/tools/testing/selftests/kvm/aarch64/set_id_regs.c b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
new file mode 100644
index 000000000000..e2242ef36bab
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
@@ -0,0 +1,163 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * set_id_regs - Test for setting ID register from userspace.
+ *
+ * Copyright (c) 2023 Google LLC.
+ *
+ *
+ * Test that KVM supports setting ID registers from userspace and handles the
+ * feature set correctly.
+ */
+
+#include
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+#include
+
+#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
+#define field_prep(_mask, _val) (((_val) << (ffs(_mask) - 1)) & (_mask))
+
+struct reg_feature {
+       uint64_t reg;
+       uint64_t ftr_mask;
+};
+
+static void guest_code(void)
+{
+       for (;;)
+               GUEST_SYNC(0);
+}
+
+static struct reg_feature lower_safe_reg_ftrs[] = {
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS) },
+       { KVM_ARM64_SYS_REG(SYS_ID_DFR0_EL1), ARM64_FEATURE_MASK(ID_DFR0_COPDBG) },
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), ARM64_FEATURE_MASK(ID_AA64PFR0_EL3) },
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64MMFR0_EL1), ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4) },
+};
+
+static void test_user_set_lower_safe(struct kvm_vcpu *vcpu)
+{
+       int i;
+
+       for (i = 0; i < ARRAY_SIZE(lower_safe_reg_ftrs); i++) {
+               struct reg_feature *reg_ftr = lower_safe_reg_ftrs + i;
+               uint64_t val, new_val, ftr;
+
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ftr = field_get(reg_ftr->ftr_mask, val);
+
+               /* Set a safe value for the feature */
+               if (ftr > 0)
+                       ftr--;
+
+               val &= ~reg_ftr->ftr_mask;
+               val |= field_prep(reg_ftr->ftr_mask, ftr);
+
+               vcpu_set_reg(vcpu, reg_ftr->reg, val);
+               vcpu_get_reg(vcpu, reg_ftr->reg, &new_val);
+               ASSERT_EQ(new_val, val);
+       }
+}
+
+static struct reg_feature exact_reg_ftrs[] = {
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER) },
+};
+
+static void test_user_set_exact(struct kvm_vcpu *vcpu)
+{
+       int i, r;
+
+       for (i = 0; i < ARRAY_SIZE(exact_reg_ftrs); i++) {
+               struct reg_feature *reg_ftr = exact_reg_ftrs + i;
+               uint64_t val, old_val, ftr;
+
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ftr = field_get(reg_ftr->ftr_mask, val);
+               old_val = val;
+
+               /* Exact match */
+               vcpu_set_reg(vcpu, reg_ftr->reg, val);
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ASSERT_EQ(val, old_val);
+
+               /* Smaller value */
+               if (ftr > 0)
+                       ftr--;
+
+               val &= ~reg_ftr->ftr_mask;
+               val |= field_prep(reg_ftr->ftr_mask, ftr);
+               r = __vcpu_set_reg(vcpu, reg_ftr->reg, val);
+               TEST_ASSERT(r < 0 && errno == EINVAL,
+                           "Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ASSERT_EQ(val, old_val);
+
+               /* Bigger value */
+               ftr += 2;
+               val &= ~reg_ftr->ftr_mask;
+               val |= field_prep(reg_ftr->ftr_mask, ftr);
+               r = __vcpu_set_reg(vcpu, reg_ftr->reg, val);
+               TEST_ASSERT(r < 0 && errno == EINVAL,
+                           "Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ASSERT_EQ(val, old_val);
+       }
+}
+
+static struct reg_feature fail_reg_ftrs[] = {
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS) },
+       { KVM_ARM64_SYS_REG(SYS_ID_DFR0_EL1), ARM64_FEATURE_MASK(ID_DFR0_MPROFDBG) },
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), ARM64_FEATURE_MASK(ID_AA64PFR0_EL2) },
+       { KVM_ARM64_SYS_REG(SYS_ID_AA64MMFR0_EL1), ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64) },
+};
+
+static void test_user_set_fail(struct kvm_vcpu *vcpu)
+{
+       int i, r;
+
+       for (i = 0; i < ARRAY_SIZE(fail_reg_ftrs); i++) {
+               struct reg_feature *reg_ftr = fail_reg_ftrs + i;
+               uint64_t val, old_val, ftr;
+
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ftr = field_get(reg_ftr->ftr_mask, val);
+
+               /* Set an invalid value (too big) for the feature */
+               ftr++;
+
+               old_val = val;
+               val &= ~reg_ftr->ftr_mask;
+               val |= field_prep(reg_ftr->ftr_mask, ftr);
+
+               r = __vcpu_set_reg(vcpu, reg_ftr->reg, val);
+               TEST_ASSERT(r < 0 && errno == EINVAL,
+                           "Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
+
+               vcpu_get_reg(vcpu, reg_ftr->reg, &val);
+               ASSERT_EQ(val, old_val);
+       }
+}
+
+int main(void)
+{
+       struct kvm_vcpu *vcpu;
+       struct kvm_vm *vm;
+
+       vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+
+       ksft_print_header();
+       ksft_set_plan(3);
+
+       test_user_set_lower_safe(vcpu);
+       ksft_test_result_pass("test_user_set_lower_safe\n");
+
+       test_user_set_exact(vcpu);
+       ksft_test_result_pass("test_user_set_exact\n");
+
+       test_user_set_fail(vcpu);
+       ksft_test_result_pass("test_user_set_fail\n");
+
+       kvm_vm_free(vm);
+
+       ksft_finished();
+}
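
The exact-match list in the test reflects the combined effect of the series'
two checks on ID_AA64DFR0_EL1.DebugVer: a value above the register's reset
value is rejected by the lower-safe comparison, and a value below the baseline
Armv8 debug version is rejected by the explicit check added in patch 2. A
small model of that acceptance window, with illustrative numbers only, is:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Accept a requested DebugVer only if it lies within [floor, limit]. */
    static bool debugver_accepted(uint8_t floor, uint8_t limit, uint8_t req)
    {
            return req >= floor && req <= limit;
    }

    int main(void)
    {
            uint8_t floor = 6, limit = 6;   /* illustrative: baseline debug only */

            /* With floor == limit, anything but an exact match fails. */
            printf("%d %d %d\n",
                   debugver_accepted(floor, limit, 5),
                   debugver_accepted(floor, limit, 6),
                   debugver_accepted(floor, limit, 7));
            return 0;
    }
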