From patchwork Mon May 22 22:18:31 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13251191
Date: Mon, 22 May 2023 22:18:31 +0000
Subject: [PATCH v10 1/5] KVM: arm64: Save ID registers' sanitized value per guest
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Message-ID: <20230522221835.957419-2-jingzhangos@google.com>
In-Reply-To: <20230522221835.957419-1-jingzhangos@google.com>

Introduce idregs[] in kvm_arch as storage for the guest's ID registers,
and save each ID register's sanitized value into the array at
KVM_CREATE_VM time. Use the saved values whenever an ID register is read
by the guest or by userspace (via KVM_GET_ONE_REG).

No functional change intended.

Co-developed-by: Reiji Watanabe
Signed-off-by: Reiji Watanabe
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 20 +++++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/sys_regs.c         | 69 +++++++++++++++++++++++++------
 arch/arm64/kvm/sys_regs.h         |  7 ++++
 4 files changed, 85 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..069606170c82 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -178,6 +178,21 @@ struct kvm_smccc_features {
 	unsigned long vendor_hyp_bmap;
 };
 
+/*
+ * Emulated CPU ID registers per VM
+ * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
+ * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ *
+ * These emulated idregs are VM-wide, but accessed from the context of a vCPU.
+ * Atomic access to multiple idregs are guarded by kvm_arch.config_lock.
+ */
+#define IDREG_IDX(id)		(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+#define IDREG(kvm, id)		((kvm)->arch.idregs.regs[IDREG_IDX(id)])
+#define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
+struct kvm_idregs {
+	u64 regs[KVM_ARM_ID_REG_NUM];
+};
+
 typedef unsigned int pkvm_handle_t;
 
 struct kvm_protected_vm {
@@ -253,6 +268,9 @@ struct kvm_arch {
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
 
+	/* Emulated CPU ID registers */
+	struct kvm_idregs idregs;
+
 	/*
 	 * For an untrusted host VM, 'pkvm.handle' is used to lookup
 	 * the associated pKVM instance in the hypervisor.
@@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
 				    struct kvm_arm_counter_offset *offset);
 
+void kvm_arm_init_id_regs(struct kvm *kvm);
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..774656a0718d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
+	kvm_arm_init_id_regs(kvm);
 
 	/*
 	 * Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 71b12094d613..d2ee3a1c7f03 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@
  * 64bit interface.
  */
 
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
 
 static bool read_from_write_only(struct kvm_vcpu *vcpu,
@@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
 	if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
@@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 	}
 }
 
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u32 id = reg_to_encoding(r);
-	u64 val;
-
-	if (sysreg_visible_as_raz(vcpu, r))
-		return 0;
-
-	val = read_sanitised_ftr_reg(id);
+	u64 val = IDREG(vcpu->kvm, id);
 
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
@@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
 	return val;
 }
 
+/* Read a sanitised cpufeature ID register by sys_reg_desc */
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+	if (sysreg_visible_as_raz(vcpu, r))
+		return 0;
+
+	return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
+}
+
+/*
+ * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
+ * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */
+static inline bool is_id_reg(u32 id)
+{
+	return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+		sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+		sys_reg_CRm(id) < 8);
+}
+
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
 				  const struct sys_reg_desc *r)
 {
@@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
 	if (p->is_write) {
 		return ignore_write(vcpu, p);
 	} else {
-		u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-		u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+		u64 dfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+		u64 pfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
 		u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT);
 
 		p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
@@ -3343,6 +3357,37 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+/*
+ * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
+ */
+void kvm_arm_init_id_regs(struct kvm *kvm)
+{
+	const struct sys_reg_desc *idreg;
+	struct sys_reg_params params;
+	u32 id;
+
+	/* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
+	id = SYS_ID_PFR0_EL1;
+	params = encoding_to_params(id);
+	idreg = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (WARN_ON(!idreg))
+		return;
+
+	/* Initialize all idregs */
+	while (is_id_reg(id)) {
+		/*
+		 * Some hidden ID registers which are not in arm64_ftr_regs[]
+		 * would cause warnings from read_sanitised_ftr_reg().
+		 * Skip those ID registers to avoid the warnings.
+		 */
+		if (idreg->visibility != raz_visibility)
+			IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+
+		idreg++;
+		id = reg_to_encoding(idreg);
+	}
+}
+
 int __init kvm_sys_reg_table_init(void)
 {
 	bool valid = true;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 6b11f2cc7146..eba10de2e7ae 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -27,6 +27,13 @@ struct sys_reg_params {
 	bool is_write;
 };
 
+#define encoding_to_params(reg)						\
+	((struct sys_reg_params){ .Op0 = sys_reg_Op0(reg),		\
+				  .Op1 = sys_reg_Op1(reg),		\
+				  .CRn = sys_reg_CRn(reg),		\
+				  .CRm = sys_reg_CRm(reg),		\
+				  .Op2 = sys_reg_Op2(reg) })
+
 #define esr_sys64_to_params(esr)					\
 	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,		\
 				  .Op1 = ((esr) >> 14) & 0x7,		\
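As a cross-check of the IDREG_IDX() packing introduced above, the arithmetic
can be exercised outside the kernel. The sketch below re-defines the sys_reg
encoding helpers locally (mirrors of the arch/arm64 macros, reproduced here as
assumptions rather than taken verbatim) and prints which idregs[] slot a given
ID register lands in:

	#include <stdio.h>

	/* Local mirrors of the kernel's sys_reg encoding helpers. */
	#define sys_reg(op0, op1, crn, crm, op2) \
		(((op0) << 19) | ((op1) << 16) | ((crn) << 12) | ((crm) << 8) | ((op2) << 5))
	#define sys_reg_CRm(id)	(((id) >> 8) & 0xf)
	#define sys_reg_Op2(id)	(((id) >> 5) & 0x7)

	/* Same packing as the patch: CRm in {1..7}, Op2 in {0..7}. */
	#define IDREG_IDX(id)		(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
	#define KVM_ARM_ID_REG_NUM	(IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)

	int main(void)
	{
		unsigned int pfr0 = sys_reg(3, 0, 0, 4, 0);	/* ID_AA64PFR0_EL1 */
		unsigned int dfr0 = sys_reg(3, 0, 0, 5, 0);	/* ID_AA64DFR0_EL1 */

		printf("ID_AA64PFR0_EL1 -> idregs[%u]\n", IDREG_IDX(pfr0));	/* 24 */
		printf("ID_AA64DFR0_EL1 -> idregs[%u]\n", IDREG_IDX(dfr0));	/* 32 */
		printf("KVM_ARM_ID_REG_NUM = %u\n", KVM_ARM_ID_REG_NUM);	/* 56 */
		return 0;
	}

The CRm >= 1 restriction is what makes the "- 1" safe: CRm == 0 registers
(MIDR_EL1 and friends) are deliberately outside this array.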
From patchwork Mon May 22 22:18:32 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13251192
Date: Mon, 22 May 2023 22:18:32 +0000
Subject: [PATCH v10 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Message-ID: <20230522221835.957419-3-jingzhangos@google.com>
In-Reply-To: <20230522221835.957419-1-jingzhangos@google.com>

With per-guest ID registers, the ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
userspace can be stored in the corresponding ID register itself. The setting
of the CSV bits for protected VMs is removed, following the discussion with
Fuad:

https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com

Apart from that removal, no other functional change intended.
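For orientation: userspace reaches the set_id_aa64pfr0_el1() handler below
through the KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls. A minimal read-modify-write
sketch of what a VMM might do to raise CSV2, assuming an arm64 host and an
already-created vcpu fd (the helper name is ours; CSV2 occupies bits [59:56]
of ID_AA64PFR0_EL1):

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <sys/ioctl.h>

	#define PFR0_ID		ARM64_SYS_REG(3, 0, 0, 4, 0)	/* ID_AA64PFR0_EL1 */
	#define CSV2_SHIFT	56
	#define CSV2_MASK	(0xfULL << CSV2_SHIFT)

	/* Hypothetical helper: returns 0 on success, -1 on ioctl failure. */
	static int set_csv2(int vcpu_fd, uint64_t csv2)
	{
		uint64_t val;
		struct kvm_one_reg reg = { .id = PFR0_ID, .addr = (uintptr_t)&val };

		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			return -1;
		val = (val & ~CSV2_MASK) | (csv2 << CSV2_SHIFT);
		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0 ? -1 : 0;
	}

Note that with this patch such a write only lands before the first vCPU run;
once the VM has run, any value other than the current one fails with -EBUSY.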
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h |  2 --
 arch/arm64/kvm/arm.c              | 17 ---------
 arch/arm64/kvm/sys_regs.c         | 58 +++++++++++++++++++++++++------
 3 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 069606170c82..8a2fde6c04c4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -257,8 +257,6 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
-	u8 pfr0_csv2;
-	u8 pfr0_csv3;
 	struct {
 		u8 imp:4;
 		u8 unimp:4;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 774656a0718d..5114521ace60 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -102,22 +102,6 @@ static int kvm_arm_default_max_vcpus(void)
 	return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
 }
 
-static void set_default_spectre(struct kvm *kvm)
-{
-	/*
-	 * The default is to expose CSV2 == 1 if the HW isn't affected.
-	 * Although this is a per-CPU feature, we make it global because
-	 * asymmetric systems are just a nuisance.
-	 *
-	 * Userspace can override this as long as it doesn't promise
-	 * the impossible.
-	 */
-	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
-		kvm->arch.pfr0_csv2 = 1;
-	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
-		kvm->arch.pfr0_csv3 = 1;
-}
-
 /**
  * kvm_arch_init_vm - initializes a VM data structure
  * @kvm:	pointer to the KVM struct
@@ -161,7 +145,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->max_vcpus = kvm_arm_default_max_vcpus();
 
-	set_default_spectre(kvm);
 	kvm_arm_init_hypercalls(kvm);
 	kvm_arm_init_id_regs(kvm);
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d2ee3a1c7f03..9fb1c2f8f5a5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1218,10 +1218,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 		if (!vcpu_has_sve(vcpu))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
 		if (kvm_vgic_global_state.type == VGIC_V3) {
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
 			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -1359,7 +1355,11 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 			       const struct sys_reg_desc *rd,
 			       u64 val)
 {
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+	u64 old_val = read_id_reg(vcpu, rd);
+	u64 new_val = val;
 	u8 csv2, csv3;
+	int ret = 0;
 
 	/*
 	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -1377,17 +1377,26 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
 		return -EINVAL;
 
+	mutex_lock(&arch->config_lock);
 	/* We can only differ with CSV[23], and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= old_val;
 	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
 		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
-	if (val)
-		return -EINVAL;
-
-	vcpu->kvm->arch.pfr0_csv2 = csv2;
-	vcpu->kvm->arch.pfr0_csv3 = csv3;
+	if (val) {
+		ret = -EINVAL;
+		goto out;
+	}
 
-	return 0;
+	/* Only allow userspace to change the idregs before VM running */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		if (new_val != old_val)
+			ret = -EBUSY;
+	} else {
+		IDREG(vcpu->kvm, reg_to_encoding(rd)) = new_val;
+	}
+out:
+	mutex_unlock(&arch->config_lock);
+	return ret;
 }
 
 static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1479,7 +1488,12 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      u64 *val)
 {
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+
+	mutex_lock(&arch->config_lock);
 	*val = read_id_reg(vcpu, rd);
+	mutex_unlock(&arch->config_lock);
+
 	return 0;
 }
 
@@ -3364,6 +3378,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 {
 	const struct sys_reg_desc *idreg;
 	struct sys_reg_params params;
+	u64 val;
 	u32 id;
 
 	/* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
@@ -3386,6 +3401,27 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 		idreg++;
 		id = reg_to_encoding(idreg);
 	}
+
+	/*
+	 * The default is to expose CSV2 == 1 if the HW isn't affected.
+	 * Although this is a per-CPU feature, we make it global because
+	 * asymmetric systems are just a nuisance.
+	 *
+	 * Userspace can override this as long as it doesn't promise
+	 * the impossible.
+	 */
+	val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
+
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
 }
 
 int __init kvm_sys_reg_table_init(void)
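A note on the recurring pattern above: XOR-ing the requested value with the
current one leaves 1s exactly where the two differ, so masking off the
writable fields and testing for zero rejects any stray change. A standalone
model of that check (names are ours):

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Model of the "val ^= old_val; val &= ~writable; if (val) error"
	 * pattern: true only when req differs from cur inside writable_mask.
	 */
	static bool only_writable_fields_differ(uint64_t cur, uint64_t req,
						uint64_t writable_mask)
	{
		return ((cur ^ req) & ~writable_mask) == 0;
	}

The same shape reappears for PMUVer and PerfMon in the next patch.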
From patchwork Mon May 22 22:18:33 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13251190
Date: Mon, 22 May 2023 22:18:33 +0000
Subject: [PATCH v10 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Message-ID: <20230522221835.957419-4-jingzhangos@google.com>
In-Reply-To: <20230522221835.957419-1-jingzhangos@google.com>

With per-guest ID registers, the PMUver setting from userspace can be stored
in the corresponding ID register.

No functional change intended.
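PMUVer lives in ID_AA64DFR0_EL1[11:8], and the patch manipulates it with
FIELD_GET()/FIELD_PREP(). A standalone sketch of that round-trip, with the
masks hand-rolled so it builds outside the kernel:

	#include <stdint.h>
	#include <stdio.h>

	#define PMUVER_SHIFT	8			/* ID_AA64DFR0_EL1.PMUVer, bits [11:8] */
	#define PMUVER_MASK	(0xfULL << PMUVER_SHIFT)

	static uint64_t get_pmuver(uint64_t dfr0)	/* models FIELD_GET() */
	{
		return (dfr0 & PMUVER_MASK) >> PMUVER_SHIFT;
	}

	static uint64_t set_pmuver(uint64_t dfr0, uint64_t pmuver)	/* models FIELD_PREP() */
	{
		return (dfr0 & ~PMUVER_MASK) | (pmuver << PMUVER_SHIFT);
	}

	int main(void)
	{
		uint64_t dfr0 = set_pmuver(0, 6);	/* 6 == PMUv3p5 */

		printf("PMUVer = %llu\n", (unsigned long long)get_pmuver(dfr0));
		return 0;
	}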
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h |  12 ++--
 arch/arm64/kvm/arm.c              |   6 --
 arch/arm64/kvm/sys_regs.c         | 100 ++++++++++++++++++++++++------
 include/kvm/arm_pmu.h             |   5 +-
 4 files changed, 92 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8a2fde6c04c4..7b0f43373dbe 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -246,6 +246,13 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE		7
 	/* SMCCC filter initialized for the VM */
 #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED		8
+	/*
+	 * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF
+	 * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from
+	 * userspace for VCPUs without PMU.
+	 */
+#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU		9
+
 	unsigned long flags;
 
 	/*
@@ -257,11 +264,6 @@ struct kvm_arch {
 
 	cpumask_var_t supported_cpus;
 
-	struct {
-		u8 imp:4;
-		u8 unimp:4;
-	} dfr0_pmuver;
-
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
 	struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5114521ace60..ca18c09ccf82 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -148,12 +148,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_arm_init_hypercalls(kvm);
 	kvm_arm_init_id_regs(kvm);
 
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
-
 	return 0;
 
 err_free_cpumask:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9fb1c2f8f5a5..84d9e4baa4f8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1178,9 +1178,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_has_pmu(vcpu))
-		return vcpu->kvm->arch.dfr0_pmuver.imp;
+		return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+				 IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
+	else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags))
+		return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
 
-	return vcpu->kvm->arch.dfr0_pmuver.unimp;
+	return 0;
 }
 
 static u8 perfmon_to_pmuver(u8 perfmon)
@@ -1403,8 +1406,12 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 			       const struct sys_reg_desc *rd,
 			       u64 val)
 {
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+	u64 old_val = read_id_reg(vcpu, rd);
 	u8 pmuver, host_pmuver;
+	u64 new_val = val;
 	bool valid_pmu;
+	int ret = 0;
 
 	host_pmuver = kvm_arm_pmu_get_pmuver_limit();
 
@@ -1424,26 +1431,51 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
 		return -EINVAL;
 
+	mutex_lock(&arch->config_lock);
 	/* We can only differ with PMUver, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= old_val;
 	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	if (val)
-		return -EINVAL;
+	if (val) {
+		ret = -EINVAL;
+		goto out;
+	}
 
-	if (valid_pmu)
-		vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
-	else
-		vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
+	/* Only allow userspace to change the idregs before VM running */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		if (new_val != old_val)
+			ret = -EBUSY;
+	} else {
+		if (valid_pmu) {
+			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
+			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+			val &= ~ID_DFR0_EL1_PerfMon_MASK;
+			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
+			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+		} else {
+			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+				   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+		}
+	}
 
-	return 0;
+out:
+	mutex_unlock(&arch->config_lock);
+	return ret;
 }
 
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 			   const struct sys_reg_desc *rd,
 			   u64 val)
 {
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+	u64 old_val = read_id_reg(vcpu, rd);
 	u8 perfmon, host_perfmon;
+	u64 new_val = val;
 	bool valid_pmu;
+	int ret = 0;
 
 	host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
 
@@ -1464,18 +1496,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
 		return -EINVAL;
 
+	mutex_lock(&arch->config_lock);
 	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= read_id_reg(vcpu, rd);
+	val ^= old_val;
 	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
-	if (val)
-		return -EINVAL;
+	if (val) {
+		ret = -EINVAL;
+		goto out;
+	}
 
-	if (valid_pmu)
-		vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
-	else
-		vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
+	/* Only allow userspace to change the idregs before VM running */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		if (new_val != old_val)
+			ret = -EBUSY;
+	} else {
+		if (valid_pmu) {
+			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+			val &= ~ID_DFR0_EL1_PerfMon_MASK;
+			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
+			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
+			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+		} else {
+			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+				   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
+		}
+	}
 
-	return 0;
+out:
+	mutex_unlock(&arch->config_lock);
+	return ret;
 }
 
 /*
@@ -3422,6 +3475,17 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 	}
 
 	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
+
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
+
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+			  kvm_arm_pmu_get_pmuver_limit());
+
+	IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
 }
 
 int __init kvm_sys_reg_table_init(void)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1a6a695ca67a..8d70dbdc1e0a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
 /*
  * Evaluates as true when emulating PMUv3p5, and false otherwise.
  */
-#define kvm_pmu_is_3p5(vcpu)						\
-	(vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+#define kvm_pmu_is_3p5(vcpu)						\
+	(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),		\
+		   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5)
 
 u8 kvm_arm_pmu_get_pmuver_limit(void);
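One detail worth calling out in the diff above: assign_bit() folds the
set/clear decision into its boolean argument, so the IMP_DEF flag is cleared
again if userspace later writes a non-IMP_DEF value. Its effect, modelled
without the kernel's atomic bitops:

	/* What assign_bit(nr, flags, value) boils down to, minus atomicity. */
	static void assign_bit_model(unsigned int nr, unsigned long *flags, int value)
	{
		if (value)
			*flags |= 1UL << nr;
		else
			*flags &= ~(1UL << nr);
	}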
From patchwork Mon May 22 22:18:34 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13251194
Date: Mon, 22 May 2023 22:18:34 +0000
Subject: [PATCH v10 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Message-ID: <20230522221835.957419-5-jingzhangos@google.com>
In-Reply-To: <20230522221835.957419-1-jingzhangos@google.com>

Since the reset() callback and the val field of sys_reg_desc are not used
for ID registers, repurpose them: reset() now returns the KVM-sanitised ID
register value, and the u64 val serves as a mask of the writable fields of
an ID register. Only bits set to 1 in val are writable from userspace.
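To make the repurposed pair concrete, here is a standalone model: reset()
supplies the per-VM limit and val masks the writable fields. Both the struct
and the values are illustrative only; the series itself leaves .val at 0
until the final patch wires up the checking:

	#include <stdint.h>
	#include <stdio.h>

	/* Standalone model of the repurposed sys_reg_desc fields for idregs. */
	struct idreg_desc {
		uint64_t (*reset)(void);	/* returns the KVM-sanitised limit */
		uint64_t val;			/* writable-field mask: 1 = writable */
	};

	static uint64_t pfr0_limit(void)
	{
		return 0x1100000000000011ULL;	/* made-up sanitised value */
	}

	int main(void)
	{
		/* Hypothetical: only CSV2 [59:56] and CSV3 [63:60] writable. */
		struct idreg_desc d = {
			.reset = pfr0_limit,
			.val   = (0xfULL << 56) | (0xfULL << 60),
		};

		printf("limit = %#llx, writable mask = %#llx\n",
		       (unsigned long long)d.reset(), (unsigned long long)d.val);
		return 0;
	}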
Signed-off-by: Jing Zhang
---
 arch/arm64/kvm/sys_regs.c | 101 +++++++++++++++++++++++++++-----------
 arch/arm64/kvm/sys_regs.h |  15 ++++--
 2 files changed, 82 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 84d9e4baa4f8..72255dea8027 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -541,10 +541,11 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_bvr(struct kvm_vcpu *vcpu,
+static u64 reset_bvr(struct kvm_vcpu *vcpu,
 		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_bcr(struct kvm_vcpu *vcpu,
@@ -577,10 +578,11 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_bcr(struct kvm_vcpu *vcpu,
+static u64 reset_bcr(struct kvm_vcpu *vcpu,
 		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_wvr(struct kvm_vcpu *vcpu,
@@ -614,10 +616,11 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_wvr(struct kvm_vcpu *vcpu,
+static u64 reset_wvr(struct kvm_vcpu *vcpu,
 		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
 static bool trap_wcr(struct kvm_vcpu *vcpu,
@@ -650,25 +653,28 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	return 0;
 }
 
-static void reset_wcr(struct kvm_vcpu *vcpu,
+static u64 reset_wcr(struct kvm_vcpu *vcpu,
 		      const struct sys_reg_desc *rd)
 {
 	vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val;
+	return rd->val;
 }
 
-static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 amair = read_sysreg(amair_el1);
 
 	vcpu_write_sys_reg(vcpu, amair, AMAIR_EL1);
+	return amair;
 }
 
-static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 actlr = read_sysreg(actlr_el1);
 
 	vcpu_write_sys_reg(vcpu, actlr, ACTLR_EL1);
+	return actlr;
 }
 
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 mpidr;
 
@@ -682,7 +688,10 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
 	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
 	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
-	vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1);
+	mpidr |= (1ULL << 31);
+	vcpu_write_sys_reg(vcpu, mpidr, MPIDR_EL1);
+
+	return mpidr;
 }
 
 static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
@@ -694,13 +703,13 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
 
 	/* No PMU available, any PMU reg may UNDEF... */
 	if (!kvm_arm_support_pmu_v3())
-		return;
+		return 0;
 
 	n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
 	n &= ARMV8_PMU_PMCR_N_MASK;
@@ -709,33 +718,41 @@ static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	reset_unknown(vcpu, r);
 	__vcpu_sys_reg(vcpu, r->reg) &= mask;
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	reset_unknown(vcpu, r);
 	__vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0);
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	reset_unknown(vcpu, r);
 	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK;
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	reset_unknown(vcpu, r);
 	__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK;
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 pmcr;
 
 	/* No PMU available, PMCR_EL0 may UNDEF... */
 	if (!kvm_arm_support_pmu_v3())
-		return;
+		return 0;
 
 	/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
 	pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
@@ -743,6 +760,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 		pmcr |= ARMV8_PMU_PMCR_LC;
 
 	__vcpu_sys_reg(vcpu, r->reg) = pmcr;
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
@@ -1212,6 +1231,11 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 	}
 }
 
+static u64 general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
+{
+	return read_sanitised_ftr_reg(reg_to_encoding(rd));
+}
+
 static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 {
 	u64 val = IDREG(vcpu->kvm, id);
@@ -1597,7 +1621,7 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
  * Fabricate a CLIDR_EL1 value instead of using the real value, which can vary
  * by the physical CPU which the vcpu currently resides in.
  */
-static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
 	u64 clidr;
@@ -1645,6 +1669,8 @@ static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 		clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
 
 	__vcpu_sys_reg(vcpu, r->reg) = clidr;
+
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
 static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1744,6 +1770,17 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.visibility = elx2_visibility,		\
 }
 
+/*
+ * Since reset() callback and field val are not used for idregs, they will be
+ * used for specific purposes for idregs.
+ * The reset() would return KVM sanitised register value. The value would be the
+ * same as the host kernel sanitised value if there is no KVM sanitisation.
+ * The val would be used as a mask indicating writable fields for the idreg.
+ * Only bits with 1 are writable from userspace. This mask might not be
+ * necessary in the future whenever all ID registers are enabled as writable
+ * from userspace.
+ */
+
 /* sys_reg_desc initialiser for known cpufeature ID registers */
 #define ID_SANITISED(name) {			\
 	SYS_DESC(SYS_##name),			\
@@ -1751,6 +1788,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.get_user = get_id_reg,			\
 	.set_user = set_id_reg,			\
 	.visibility = id_visibility,		\
+	.reset = general_read_kvm_sanitised_reg,\
+	.val = 0,				\
 }
 
 /* sys_reg_desc initialiser for known cpufeature ID registers */
@@ -1760,6 +1799,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.get_user = get_id_reg,			\
 	.set_user = set_id_reg,			\
 	.visibility = aa32_id_visibility,	\
+	.reset = general_read_kvm_sanitised_reg,\
+	.val = 0,				\
 }
 
 /*
@@ -1772,7 +1813,9 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.access = access_id_reg,		\
 	.get_user = get_id_reg,			\
 	.set_user = set_id_reg,			\
-	.visibility = raz_visibility		\
+	.visibility = raz_visibility,		\
+	.reset = NULL,				\
+	.val = 0,				\
 }
 
 /*
@@ -1786,6 +1829,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
 	.get_user = get_id_reg,			\
 	.set_user = set_id_reg,			\
 	.visibility = raz_visibility,		\
+	.reset = NULL,				\
+	.val = 0,				\
 }
 
 static bool access_sp_el1(struct kvm_vcpu *vcpu,
@@ -3122,19 +3167,21 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
  */
 
 #define FUNCTION_INVARIANT(reg)						\
-	static void get_##reg(struct kvm_vcpu *v,			\
+	static u64 get_##reg(struct kvm_vcpu *v,			\
 			      const struct sys_reg_desc *r)		\
 	{								\
 		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
+		return ((struct sys_reg_desc *)r)->val;			\
 	}
 
 FUNCTION_INVARIANT(midr_el1)
 FUNCTION_INVARIANT(revidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
-static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
+static u64 get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
 {
 	((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
+	return ((struct sys_reg_desc *)r)->val;
 }
 
 /* ->val is filled in by kvm_sys_reg_table_init() */
@@ -3424,9 +3471,7 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
-/*
- * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
- */
+/* Initialize the guest's ID registers with KVM sanitised values. */
 void kvm_arm_init_id_regs(struct kvm *kvm)
 {
 	const struct sys_reg_desc *idreg;
@@ -3443,13 +3488,11 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 
 	/* Initialize all idregs */
 	while (is_id_reg(id)) {
-		/*
-		 * Some hidden ID registers which are not in arm64_ftr_regs[]
-		 * would cause warnings from read_sanitised_ftr_reg().
-		 * Skip those ID registers to avoid the warnings.
-		 */
-		if (idreg->visibility != raz_visibility)
-			IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+		val = 0;
+		/* Read KVM sanitised register value if available */
+		if (idreg->reset)
+			val = idreg->reset(NULL, idreg);
+		IDREG(kvm, id) = val;
 
 		idreg++;
 		id = reg_to_encoding(idreg);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index eba10de2e7ae..c65c129b3500 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -71,13 +71,16 @@ struct sys_reg_desc {
 			struct sys_reg_params *,
 			const struct sys_reg_desc *);
 
-	/* Initialization for vcpu. */
-	void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
+	/*
+	 * Initialization for vcpu. Return initialized value, or KVM
+	 * sanitized value for ID registers.
+	 */
+	u64 (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
 
 	/* Index into sys_reg[], or 0 if we don't need to save it. */
 	int reg;
 
-	/* Value (usually reset value) */
+	/* Value (usually reset value), or write mask for idregs */
 	u64 val;
 
 	/* Custom get/set_user functions, fallback to generic if NULL */
@@ -130,19 +133,21 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
 }
 
 /* Reset functions */
-static inline void reset_unknown(struct kvm_vcpu *vcpu,
+static inline u64 reset_unknown(struct kvm_vcpu *vcpu,
 				 const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
 	__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
-static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static inline u64 reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
 	__vcpu_sys_reg(vcpu, r->reg) = r->val;
+	return __vcpu_sys_reg(vcpu, r->reg);
 }
 
 static inline unsigned int sysreg_visibility(const struct kvm_vcpu *vcpu,
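The FUNCTION_INVARIANT() change above is easy to miss: each generated getter
now caches the value in the descriptor and returns it. A standalone model of
that token-pasting pattern (register name and value invented for
illustration):

	#include <stdint.h>
	#include <stdio.h>

	struct desc { uint64_t val; };

	/* The getter both caches the value in the descriptor and returns it. */
	#define FUNCTION_INVARIANT(reg, source)			\
		static uint64_t get_##reg(struct desc *r)	\
		{						\
			r->val = (source);			\
			return r->val;				\
		}

	FUNCTION_INVARIANT(midr_el1, 0x413fd0c1ULL)	/* invented MIDR value */

	int main(void)
	{
		struct desc d = { 0 };

		printf("read %#llx, cached %#llx\n",
		       (unsigned long long)get_midr_el1(&d),
		       (unsigned long long)d.val);
		return 0;
	}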
From patchwork Mon May 22 22:18:35 2023
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 13251193
Date: Mon, 22 May 2023 22:18:35 +0000
Subject: [PATCH v10 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
From: Jing Zhang
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose, Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta, Jing Zhang
Message-ID: <20230522221835.957419-6-jingzhangos@google.com>
In-Reply-To: <20230522221835.957419-1-jingzhangos@google.com>

Refactor the writes to ID_AA64PFR0_EL1.[CSV2|CSV3], ID_AA64DFR0_EL1.PMUVer
and ID_DFR0_EL1.PerfMon to use utilities specific to ID registers.
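The core of the arm64_check_features() helper added below: every writable
field of the requested value must be "safe" against the KVM-sanitised limit
(for the unsigned lower-safe fields this series cares about, that means at or
below it), and non-writable bits must match the limit exactly. A standalone
model of the two checks (names are ours):

	#include <stdbool.h>
	#include <stdint.h>

	/* Unsigned FTR_LOWER_SAFE field: any request at or below the limit is safe. */
	static bool field_is_safe(uint64_t requested, uint64_t limit)
	{
		return requested <= limit;
	}

	/* Non-writable bits must equal the limit exactly (the -E2BIG case). */
	static bool fixed_bits_match(uint64_t val, uint64_t limit, uint64_t writable)
	{
		return ((val ^ limit) & ~writable) == 0;
	}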
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/cpufeature.h |   1 +
 arch/arm64/kernel/cpufeature.c      |   2 +-
 arch/arm64/kvm/sys_regs.c           | 365 ++++++++++++++++++----------
 3 files changed, 243 insertions(+), 125 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6bf013fb110d..dc769c2eb7a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -915,6 +915,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
 	return 8;
 }
 
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur);
 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);
 
 extern struct arm64_ftr_override id_aa64mmfr1_override;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7d7128c65161..3317a7b6deac 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -798,7 +798,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
 	return reg;
 }
 
-static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
 				s64 cur)
 {
 	s64 ret = 0;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 72255dea8027..b3eacfc592eb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@
  * 64bit interface.
  */
 
+static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val);
 static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
 static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
 
@@ -1194,6 +1195,86 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
+				    s64 new, s64 cur)
+{
+	struct arm64_ftr_bits kvm_ftr = *ftrp;
+
+	/* Some features have different safe value type in KVM than host features */
+	switch (id) {
+	case SYS_ID_AA64DFR0_EL1:
+		if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT)
+			kvm_ftr.type = FTR_LOWER_SAFE;
+		break;
+	case SYS_ID_DFR0_EL1:
+		if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT)
+			kvm_ftr.type = FTR_LOWER_SAFE;
+		break;
+	}
+
+	return arm64_ftr_safe_value(&kvm_ftr, new, cur);
+}
+
+/**
+ * arm64_check_features() - Check if a feature register value constitutes
+ * a subset of features indicated by the idreg's KVM sanitised limit.
+ *
+ * This function will check if each feature field of @val is the "safe" value
+ * against the idreg's KVM sanitised limit returned from the reset() callback.
+ * If a field value in @val is the same as the one in limit, it is always
+ * considered the safe value regardless. For register fields that are not
+ * writable, only the value in limit is considered the safe value.
+ *
+ * Return: 0 if all the fields are safe. Otherwise, return negative errno.
+ */
+static int arm64_check_features(struct kvm_vcpu *vcpu,
+				const struct sys_reg_desc *rd,
+				u64 val)
+{
+	const struct arm64_ftr_reg *ftr_reg;
+	const struct arm64_ftr_bits *ftrp = NULL;
+	u32 id = reg_to_encoding(rd);
+	u64 writable_mask = rd->val;
+	u64 limit = 0;
+	u64 mask = 0;
+
+	/* For hidden and unallocated idregs without reset, only val = 0 is allowed. */
+	 */
+	if (rd->reset) {
+		limit = rd->reset(vcpu, rd);
+		ftr_reg = get_arm64_ftr_reg(id);
+		if (!ftr_reg)
+			return -EINVAL;
+		ftrp = ftr_reg->ftr_bits;
+	}
+
+	for (; ftrp && ftrp->width; ftrp++) {
+		s64 f_val, f_lim, safe_val;
+		u64 ftr_mask;
+
+		ftr_mask = arm64_ftr_mask(ftrp);
+		if ((ftr_mask & writable_mask) != ftr_mask)
+			continue;
+
+		f_val = arm64_ftr_value(ftrp, val);
+		f_lim = arm64_ftr_value(ftrp, limit);
+		mask |= ftr_mask;
+
+		if (f_val == f_lim)
+			safe_val = f_val;
+		else
+			safe_val = kvm_arm64_ftr_safe_value(id, ftrp, f_val, f_lim);
+
+		if (safe_val != f_val)
+			return -E2BIG;
+	}
+
+	/* For fields that are not writable, values in limit are the safe values. */
+	if ((val & ~mask) != (limit & ~mask))
+		return -E2BIG;
+
+	return 0;
+}
+
 static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_has_pmu(vcpu))
@@ -1244,7 +1325,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
 		if (kvm_vgic_global_state.type == VGIC_V3) {
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
 			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -1271,15 +1351,10 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 		val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
 		break;
 	case SYS_ID_AA64DFR0_EL1:
-		/* Limit debug to ARMv8.0 */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
 		/* Set PMUver to the required version */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
 		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
 				  vcpu_pmuver(vcpu));
-		/* Hide SPE from guests */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
 		break;
 	case SYS_ID_DFR0_EL1:
 		val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
@@ -1378,15 +1453,40 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }

+static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+					  const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/*
+	 * The default is to expose CSV2 == 1 if the HW isn't affected.
+	 * Although this is a per-CPU feature, we make it global because
+	 * asymmetric systems are just a nuisance.
+	 *
+	 * Userspace can override this as long as it doesn't promise
+	 * the impossible.
+	 */
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+	}
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+	}
+
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
+
+	return val;
+}
+
 static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 			       const struct sys_reg_desc *rd,
 			       u64 val)
 {
-	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
-	u64 new_val = val;
 	u8 csv2, csv3;
-	int ret = 0;

 	/*
 	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -1404,26 +1504,30 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
 		return -EINVAL;

-	mutex_lock(&arch->config_lock);
-	/* We can only differ with CSV[23], and anything else is an error */
-	val ^= old_val;
-	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
-		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
+	return set_id_reg(vcpu, rd, val);
+}

-	/* Only allow userspace to change the idregs before VM running */
-	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
-			ret = -EBUSY;
-	} else {
-		IDREG(vcpu->kvm, reg_to_encoding(rd)) = new_val;
-	}
-out:
-	mutex_unlock(&arch->config_lock);
-	return ret;
+static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+					  const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/* Limit debug to ARMv8.0 */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+			  kvm_arm_pmu_get_pmuver_limit());
+	/* Hide SPE from guests */
+	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
+
+	return val;
 }

 static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1431,9 +1535,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 			       u64 val)
 {
 	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
 	u8 pmuver, host_pmuver;
-	u64 new_val = val;
 	bool valid_pmu;
 	int ret = 0;

@@ -1456,48 +1558,67 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
 		return -EINVAL;

 	mutex_lock(&arch->config_lock);
-	/* We can only differ with PMUver, and anything else is an error */
-	val ^= old_val;
-	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	/* Only allow userspace to change the idregs before VM running */
 	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
+		if (val != read_id_reg(vcpu, rd))
 			ret = -EBUSY;
-	} else {
-		if (valid_pmu) {
-			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
-			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
-			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
-			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
-
-			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
-			val &= ~ID_DFR0_EL1_PerfMon_MASK;
-			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
-			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
-		} else {
-			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
-				   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
-		}
+		goto out;
 	}

+	if (!valid_pmu) {
+		/*
+		 * Ignore the PMUVer field in @val. The PMUVer is determined
+		 * by the arch flag KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU.
+		 */
+		pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK,
+				   IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
+		val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+		val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
+	}
+
+	ret = arm64_check_features(vcpu, rd, val);
+	if (ret)
+		goto out;
+
+	IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+	val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+	val &= ~ID_DFR0_EL1_PerfMon_MASK;
+	val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
+	IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+	if (!valid_pmu)
+		assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+			   pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+
 out:
 	mutex_unlock(&arch->config_lock);
 	return ret;
 }

+static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
+				      const struct sys_reg_desc *rd)
+{
+	u64 val;
+	u32 id = reg_to_encoding(rd);
+
+	val = read_sanitised_ftr_reg(id);
+	/*
+	 * Initialise the default PMUver before there is a chance to
+	 * create an actual PMU.
+	 */
+	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon),
+			  kvm_arm_pmu_get_pmuver_limit());
+
+	return val;
+}
+
 static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 			   const struct sys_reg_desc *rd,
 			   u64 val)
 {
 	struct kvm_arch *arch = &vcpu->kvm->arch;
-	u64 old_val = read_id_reg(vcpu, rd);
 	u8 perfmon, host_perfmon;
-	u64 new_val = val;
 	bool valid_pmu;
 	int ret = 0;

@@ -1521,35 +1642,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 		return -EINVAL;

 	mutex_lock(&arch->config_lock);
-	/* We can only differ with PerfMon, and anything else is an error */
-	val ^= old_val;
-	val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
-	if (val) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	/* Only allow userspace to change the idregs before VM running */
 	if (kvm_vm_has_ran_once(vcpu->kvm)) {
-		if (new_val != old_val)
+		if (val != read_id_reg(vcpu, rd))
 			ret = -EBUSY;
-	} else {
-		if (valid_pmu) {
-			val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
-			val &= ~ID_DFR0_EL1_PerfMon_MASK;
-			val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
-			IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
-
-			val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
-			val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
-			val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
-			IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
-		} else {
-			assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
-				   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
-		}
+		goto out;
 	}

+	if (!valid_pmu) {
+		/*
+		 * Ignore the PerfMon field in @val. The PerfMon is determined
+		 * by the arch flag KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU.
+		 */
+		perfmon = FIELD_GET(ID_DFR0_EL1_PerfMon_MASK,
+				    IDREG(vcpu->kvm, SYS_ID_DFR0_EL1));
+		val &= ~ID_DFR0_EL1_PerfMon_MASK;
+		val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
+	}
+
+	ret = arm64_check_features(vcpu, rd, val);
+	if (ret)
+		goto out;
+
+	IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+	val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+	val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+	val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
+	IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+	if (!valid_pmu)
+		assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+			   perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
+
 out:
 	mutex_unlock(&arch->config_lock);
 	return ret;
@@ -1577,11 +1702,23 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		      u64 val)
 {
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd))
-		return -EINVAL;
+	struct kvm_arch *arch = &vcpu->kvm->arch;
+	u32 id = reg_to_encoding(rd);
+	int ret = 0;

-	return 0;
+	mutex_lock(&arch->config_lock);
+	/* Only allow userspace to change the idregs before VM running */
+	if (kvm_vm_has_ran_once(vcpu->kvm)) {
+		if (val != read_id_reg(vcpu, rd))
+			ret = -EBUSY;
+	} else {
+		ret = arm64_check_features(vcpu, rd, val);
+		if (!ret)
+			IDREG(vcpu->kvm, id) = val;
+	}
+	mutex_unlock(&arch->config_lock);
+
+	return ret;
 }

 static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1932,9 +2069,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	/* CRm=1 */
 	AA32_ID_SANITISED(ID_PFR0_EL1),
 	AA32_ID_SANITISED(ID_PFR1_EL1),
-	{ SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
-	  .visibility = aa32_id_visibility, },
+	{ SYS_DESC(SYS_ID_DFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_dfr0_el1,
+	  .visibility = aa32_id_visibility,
+	  .reset = read_sanitised_id_dfr0_el1,
+	  .val = ID_DFR0_EL1_PerfMon_MASK, },
 	ID_HIDDEN(ID_AFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR0_EL1),
 	AA32_ID_SANITISED(ID_MMFR1_EL1),
@@ -1963,8 +2104,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {

 	/* AArch64 ID registers */
 	/* CRm=4 */
-	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
+	{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_aa64pfr0_el1,
+	  .reset = read_sanitised_id_aa64pfr0_el1,
+	  .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },
 	ID_SANITISED(ID_AA64PFR1_EL1),
 	ID_UNALLOCATED(4,2),
 	ID_UNALLOCATED(4,3),
@@ -1974,8 +2119,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_UNALLOCATED(4,7),

 	/* CRm=5 */
-	{ SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
+	{ SYS_DESC(SYS_ID_AA64DFR0_EL1),
+	  .access = access_id_reg,
+	  .get_user = get_id_reg,
+	  .set_user = set_id_aa64dfr0_el1,
+	  .reset = read_sanitised_id_aa64dfr0_el1,
+	  .val = ID_AA64DFR0_EL1_PMUVer_MASK, },
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
@@ -3497,38 +3646,6 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
 		idreg++;
 		id = reg_to_encoding(idreg);
 	}
-
-	/*
-	 * The default is to expose CSV2 == 1 if the HW isn't affected.
-	 * Although this is a per-CPU feature, we make it global because
-	 * asymmetric systems are just a nuisance.
-	 *
-	 * Userspace can override this as long as it doesn't promise
-	 * the impossible.
-	 */
-	val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
-
-	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
-	}
-	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
-	}
-
-	IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
-
-	/*
-	 * Initialise the default PMUver before there is a chance to
-	 * create an actual PMU.
-	 */
-	val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
-
-	val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
-	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
-			  kvm_arm_pmu_get_pmuver_limit());
-
-	IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
 }

 int __init kvm_sys_reg_table_init(void)
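For completeness, here is one plausible userspace sequence against the
now-writable ID_AA64DFR0_EL1.PMUVer field, using the standard
KVM_GET_ONE_REG/KVM_SET_ONE_REG uAPI on arm64. The vcpu fd creation and
error handling are assumed, and the snippet is illustrative rather than
taken from this series:

/* Illustrative only: shrink ID_AA64DFR0_EL1.PMUVer before the first KVM_RUN. */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* (Op0, Op1, CRn, CRm, Op2) = (3, 0, 0, 5, 0) */
#define REG_ID_AA64DFR0_EL1	ARM64_SYS_REG(3, 0, 0, 5, 0)

static int lower_pmuver(int vcpu_fd, uint64_t pmuver)
{
	uint64_t val;
	struct kvm_one_reg reg = {
		.id   = REG_ID_AA64DFR0_EL1,
		.addr = (uint64_t)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	val &= ~(0xfULL << 8);		/* PMUVer lives in bits [11:8] */
	val |= pmuver << 8;

	/*
	 * Per set_id_reg() above: fails with errno == E2BIG if pmuver
	 * exceeds the sanitised limit, or EBUSY once the VM has run.
	 */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}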