From patchwork Wed Jul 14 09:55:57 2021
X-Patchwork-Id: 12376505
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, suzuki.poulose@arm.com
Subject: [PATCH 1/5] KVM: arm64: Move vcpu_has_feature() to asm/kvm_host.h
Date: Wed, 14 Jul 2021 10:55:57 +0100
Message-Id: <20210714095601.184854-2-alexandru.elisei@arm.com>
In-Reply-To: <20210714095601.184854-1-alexandru.elisei@arm.com>
References: <20210714095601.184854-1-alexandru.elisei@arm.com>
Move the vcpu_has_feature() function to asm/kvm_host.h in preparation
for replacing test_bit() feature checks.

struct kvm_vcpu is defined in linux/kvm_host.h, which includes
asm/kvm_host.h; the function has been transformed into a macro to avoid
compiler errors about struct kvm_vcpu being undefined.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 5 -----
 arch/arm64/include/asm/kvm_host.h    | 3 +++
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index fd418955e31e..6bf776c2399c 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -466,9 +466,4 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
 	vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
 }
 
-static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
-{
-	return test_bit(feature, vcpu->arch.features);
-}
-
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..0b087bcfcfeb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -774,6 +774,9 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
 bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 
+#define vcpu_has_feature(vcpu, feature) \
+	(test_bit((feature), (vcpu)->arch.features))
+
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
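Why the move forces a macro: asm/kvm_host.h is compiled before
linux/kvm_host.h has defined struct kvm_vcpu, so a static inline taking a
struct kvm_vcpu * cannot dereference it there. A minimal, self-contained
sketch of the problem (simplified names and field layout, not kernel code):

	#include <stdio.h>

	struct vcpu;	/* incomplete here, like struct kvm_vcpu while
			 * asm/kvm_host.h is being parsed */

	/*
	 * A static inline at this point would fail to build:
	 *
	 *	static inline int has_feature(struct vcpu *v, int f)
	 *	{
	 *		return !!(v->features & (1UL << f));  // error: incomplete type
	 *	}
	 *
	 * A macro is only expanded at the call site, where the full
	 * definition is visible, so it compiles:
	 */
	#define has_feature(v, f)	(((v)->features & (1UL << (f))) != 0)

	struct vcpu {	/* completed later, like linux/kvm_host.h */
		unsigned long features;
	};

	int main(void)
	{
		struct vcpu v = { .features = 1UL << 2 };
		printf("%d\n", has_feature(&v, 2));	/* prints 1 */
		return 0;
	}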
From patchwork Wed Jul 14 09:55:58 2021
X-Patchwork-Id: 12376509
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, suzuki.poulose@arm.com
Subject: [PATCH 2/5] KVM: arm64: Use vcpu_has_feature() to check the feature bits
Date: Wed, 14 Jul 2021 10:55:58 +0100
Message-Id: <20210714095601.184854-3-alexandru.elisei@arm.com>
In-Reply-To: <20210714095601.184854-1-alexandru.elisei@arm.com>
References: <20210714095601.184854-1-alexandru.elisei@arm.com>

Commit 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation") added
the vcpu_has_feature() function for checking if a feature bit is set.
Replace uses of test_bit() with the vcpu_has_feature() abstraction.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
There's one place where I opted not to replace test_bit() with
vcpu_has_feature(), in arm.c::kvm_vcpu_set_target(). That function
already manipulates the feature bits directly, so I preferred to keep it
consistent.
 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    |  4 ++--
 arch/arm64/kvm/arm.c                 |  2 +-
 arch/arm64/kvm/psci.c                |  2 +-
 arch/arm64/kvm/reset.c               | 12 ++++++------
 include/kvm/arm_psci.h               |  2 +-
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 6bf776c2399c..738fb51920d2 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -70,7 +70,7 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_TVM;
 	}
 
-	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
 
 	/*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0b087bcfcfeb..e6bdf1feb922 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -781,8 +781,8 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
 #define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
-#define kvm_vcpu_has_pmu(vcpu)					\
-	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
+
+#define kvm_vcpu_has_pmu(vcpu)	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
 
 int kvm_trng_call(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_KVM
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..646c7b003a59 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1108,7 +1108,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	/*
 	 * Handle the "start in power-off" case.
 	 */
-	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_POWER_OFF))
 		vcpu_power_off(vcpu);
 	else
 		vcpu->arch.power_off = false;
diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index db4056ecccfd..536f637f1e41 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -492,7 +492,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	{
 		bool wants_02;
 
-		wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
+		wants_02 = vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2);
 
 		switch (val) {
 		case KVM_ARM_PSCI_0_1:
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cba7872d69a8..80b16984cf3a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -157,8 +157,8 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
 	 * features are requested by the userspace together and the system
 	 * supports these capabilities.
 	 */
-	if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
-	    !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) ||
+	if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
+	    !vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC) ||
 	    !system_has_full_ptr_auth())
 		return -EINVAL;
 
@@ -223,7 +223,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	kvm_arch_vcpu_put(vcpu);
 
 	if (!kvm_arm_vcpu_sve_finalized(vcpu)) {
-		if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) {
+		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
 			ret = kvm_vcpu_enable_sve(vcpu);
 			if (ret)
 				goto out;
@@ -232,8 +232,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		kvm_vcpu_reset_sve(vcpu);
 	}
 
-	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
-	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
+	    vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)) {
 		if (kvm_vcpu_enable_ptrauth(vcpu)) {
 			ret = -EINVAL;
 			goto out;
@@ -247,7 +247,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	switch (vcpu->arch.target) {
 	default:
-		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+		if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT)) {
 			pstate = VCPU_RESET_PSTATE_SVC;
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h
index 5b58bd2fe088..8b7891d7a368 100644
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -29,7 +29,7 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
 	 * revisions. It is thus safe to return the latest, unless
 	 * userspace has instructed us otherwise.
 	 */
-	if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) {
 		if (vcpu->kvm->arch.psci_version)
 			return vcpu->kvm->arch.psci_version;

From patchwork Wed Jul 14 09:55:59 2021
X-Patchwork-Id: 12376521
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, suzuki.poulose@arm.com
Subject: [PATCH 3/5] KVM: arm64: Rename __vcpu_sys_reg -> vcpu_sys_reg
Date: Wed, 14 Jul 2021 10:55:59 +0100
Message-Id: <20210714095601.184854-4-alexandru.elisei@arm.com>
In-Reply-To: <20210714095601.184854-1-alexandru.elisei@arm.com>
References: <20210714095601.184854-1-alexandru.elisei@arm.com>

There are two macros to access a specific system register from a known
kvm_cpu_context: __ctxt_sys_reg(), which returns a pointer to the
register, and ctxt_sys_reg(), which dereferences the pointer returned by
__ctxt_sys_reg(). __vcpu_sys_reg() serves a similar purpose, the
difference being that it takes a struct kvm_vcpu as a parameter.

Although __vcpu_sys_reg() looks like __ctxt_sys_reg(), it dereferences
the pointer to the register, like ctxt_sys_reg() does; indeed, it is
defined as an abstraction over ctxt_sys_reg(). Let's remove this naming
inconsistency by renaming __vcpu_sys_reg() to vcpu_sys_reg(), to make it
clear it behaves like ctxt_sys_reg(), and not like __ctxt_sys_reg().
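As an illustration of the convention at stake, here is a simplified
sketch (made-up types, not the kernel definitions):

	struct ctxt { unsigned long sys_regs[16]; };
	struct vcpu_arch { struct ctxt ctxt; };
	struct vcpu { struct vcpu_arch arch; };

	/* Double underscore: returns a pointer to the register. */
	#define __ctxt_sys_reg(c, r)	(&(c)->sys_regs[(r)])
	/* No underscore: dereferences, yields the register as an lvalue. */
	#define ctxt_sys_reg(c, r)	(*__ctxt_sys_reg((c), (r)))

	/*
	 * The old __vcpu_sys_reg() broke the pattern: despite the "__"
	 * prefix, it dereferenced. After the rename it matches
	 * ctxt_sys_reg():
	 */
	#define vcpu_sys_reg(v, r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))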
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h          |  4 +-
 arch/arm64/kvm/arch_timer.c                | 20 ++++----
 arch/arm64/kvm/arm.c                       |  3 +-
 arch/arm64/kvm/fpsimd.c                    |  2 +-
 arch/arm64/kvm/hyp/exception.c             |  4 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    |  6 +--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 12 ++---
 arch/arm64/kvm/pmu-emul.c                  | 58 +++++++++++-----------
 arch/arm64/kvm/sys_regs.c                  | 54 ++++++++++----------
 arch/arm64/kvm/sys_regs.h                  |  4 +-
 10 files changed, 83 insertions(+), 84 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e6bdf1feb922..76efede8beae 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -459,7 +459,7 @@ struct kvm_vcpu_arch {
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
 /*
- * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the
+ * Only use vcpu_sys_reg/ctxt_sys_reg if you know you want the
  * memory backed version of a register, and not the one most recently
  * accessed by a running VCPU.  For example, for userspace access or
  * for system registers that are never context switched, but only
@@ -469,7 +469,7 @@ struct kvm_vcpu_arch {
 
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
-#define __vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+#define vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 3df67c127489..b928a70c01d2 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -58,9 +58,9 @@ u32 timer_get_ctl(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
+		return vcpu_sys_reg(vcpu, CNTV_CTL_EL0);
 	case TIMER_PTIMER:
-		return __vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
+		return vcpu_sys_reg(vcpu, CNTP_CTL_EL0);
 	default:
 		WARN_ON(1);
 		return 0;
@@ -73,9 +73,9 @@ u64 timer_get_cval(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
+		return vcpu_sys_reg(vcpu, CNTV_CVAL_EL0);
 	case TIMER_PTIMER:
-		return __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
+		return vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
 	default:
 		WARN_ON(1);
 		return 0;
@@ -88,7 +88,7 @@ static u64 timer_get_offset(struct arch_timer_context *ctxt)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		return __vcpu_sys_reg(vcpu, CNTVOFF_EL2);
+		return vcpu_sys_reg(vcpu, CNTVOFF_EL2);
 	default:
 		return 0;
 	}
@@ -100,10 +100,10 @@ static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl;
+		vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = ctl;
 		break;
 	case TIMER_PTIMER:
-		__vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
+		vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = ctl;
 		break;
 	default:
 		WARN_ON(1);
@@ -116,10 +116,10 @@ static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval;
+		vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = cval;
 		break;
 	case TIMER_PTIMER:
-		__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
+		vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = cval;
 		break;
 	default:
 		WARN_ON(1);
@@ -132,7 +132,7 @@ static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset)
 
 	switch(arch_timer_ctx_index(ctxt)) {
 	case TIMER_VTIMER:
-		__vcpu_sys_reg(vcpu, CNTVOFF_EL2) = offset;
+		vcpu_sys_reg(vcpu, CNTVOFF_EL2) = offset;
 		break;
 	default:
 		WARN(offset, "timer %ld\n", arch_timer_ctx_index(ctxt));
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 646c7b003a59..04c722e4eca0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -700,8 +700,7 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 		}
 
 		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
-			kvm_pmu_handle_pmcr(vcpu,
-					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+			kvm_pmu_handle_pmcr(vcpu, vcpu_sys_reg(vcpu, PMCR_EL0));
 	}
 }
 
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..3fea05c7f49d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -122,7 +122,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
 		if (guest_has_sve) {
-			__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+			vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
 
 			/* Restore the VL that was saved when bound to the CPU */
 			if (!has_vhe())
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 0418399e0a20..89b5cdc850f5 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -25,7 +25,7 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	if (__vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return vcpu_sys_reg(vcpu, reg);
 }
 
 static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
@@ -33,7 +33,7 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	if (__vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	__vcpu_sys_reg(vcpu, reg) = val;
+	vcpu_sys_reg(vcpu, reg) = val;
 }
 
 static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e4a2f295a394..cc0ab79edcd9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -57,7 +57,7 @@ static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu)
 	if (!vcpu_el1_is_32bit(vcpu))
 		return;
 
-	__vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
+	vcpu_sys_reg(vcpu, FPEXC32_EL2) = read_sysreg(fpexc32_el2);
 }
 
 static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
@@ -218,7 +218,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(vcpu_sve_pffr(vcpu),
 			    &vcpu->arch.ctxt.fp_regs.fpsr);
-	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
+	write_sysreg_el1(vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
 /* Check for an FPSIMD/SVE trap and handle as appropriate */
@@ -280,7 +280,7 @@ static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 
 	/* Skip restoring fpexc32 for AArch64 guests */
 	if (!(read_sysreg(hcr_el2) & HCR_RW))
-		write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+		write_sysreg(vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
 
 	vcpu->arch.flags |= KVM_ARM64_FP_ENABLED;
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index de7e14c862e6..c50e7462adff 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -187,11 +187,11 @@ static inline void __sysreg32_save_state(struct kvm_vcpu *vcpu)
 	vcpu->arch.ctxt.spsr_irq = read_sysreg(spsr_irq);
 	vcpu->arch.ctxt.spsr_fiq = read_sysreg(spsr_fiq);
 
-	__vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
-	__vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
+	vcpu_sys_reg(vcpu, DACR32_EL2) = read_sysreg(dacr32_el2);
+	vcpu_sys_reg(vcpu, IFSR32_EL2) = read_sysreg(ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		__vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
+		vcpu_sys_reg(vcpu, DBGVCR32_EL2) = read_sysreg(dbgvcr32_el2);
 }
 
 static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
@@ -204,11 +204,11 @@ static inline void __sysreg32_restore_state(struct kvm_vcpu *vcpu)
 	write_sysreg(vcpu->arch.ctxt.spsr_irq, spsr_irq);
 	write_sysreg(vcpu->arch.ctxt.spsr_fiq, spsr_fiq);
 
-	write_sysreg(__vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
-	write_sysreg(__vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
+	write_sysreg(vcpu_sys_reg(vcpu, DACR32_EL2), dacr32_el2);
+	write_sysreg(vcpu_sys_reg(vcpu, IFSR32_EL2), ifsr32_el2);
 
 	if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)
-		write_sysreg(__vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
+		write_sysreg(vcpu_sys_reg(vcpu, DBGVCR32_EL2), dbgvcr32_el2);
 }
 
 #endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f33825c995cb..f53e1a2d6df5 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -43,7 +43,7 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 static bool kvm_pmu_idx_is_64bit(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (select_idx == ARMV8_PMU_CYCLE_IDX &&
-		__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
+		vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
 }
 
 static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
@@ -115,7 +115,7 @@ static bool kvm_pmu_idx_has_chain_evtype(struct kvm_vcpu *vcpu, u64 select_idx)
 		return false;
 
 	reg = PMEVTYPER0_EL0 + select_idx;
-	eventsel = __vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
+	eventsel = vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
 
 	return eventsel == ARMV8_PMUV3_PERFCTR_CHAIN;
 }
@@ -134,14 +134,14 @@ static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
 		pmc = kvm_pmu_get_canonical_pmc(pmc);
 		reg = PMEVCNTR0_EL0 + pmc->idx;
 
-		counter = __vcpu_sys_reg(vcpu, reg);
-		counter_high = __vcpu_sys_reg(vcpu, reg + 1);
+		counter = vcpu_sys_reg(vcpu, reg);
+		counter_high = vcpu_sys_reg(vcpu, reg + 1);
 
 		counter = lower_32_bits(counter) | (counter_high << 32);
 	} else {
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
-		counter = __vcpu_sys_reg(vcpu, reg);
+		counter = vcpu_sys_reg(vcpu, reg);
 	}
 
 	/*
@@ -189,7 +189,7 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
-	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+	vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
 	/* Recreate the perf event to reflect the updated sample_period */
 	kvm_pmu_create_perf_event(vcpu, select_idx);
@@ -233,10 +233,10 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 		val = lower_32_bits(counter);
 	}
 
-	__vcpu_sys_reg(vcpu, reg) = val;
+	vcpu_sys_reg(vcpu, reg) = val;
 
 	if (kvm_pmu_pmc_is_chained(pmc))
-		__vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
+		vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
 
 	kvm_pmu_release_perf_event(pmc);
 }
@@ -289,7 +289,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
-	u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
+	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
 
 	val &= ARMV8_PMU_PMCR_N_MASK;
 	if (val == 0)
@@ -311,7 +311,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@@ -369,10 +369,10 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
 {
 	u64 reg = 0;
 
-	if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
-		reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
-		reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
-		reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	if ((vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
+		reg = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+		reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+		reg &= vcpu_sys_reg(vcpu, PMINTENSET_EL1);
 		reg &= kvm_pmu_valid_counter_mask(vcpu);
 	}
 
@@ -493,7 +493,7 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 	perf_event->attr.sample_period = period;
 	perf_event->hw.sample_period = period;
 
-	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
+	vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
@@ -517,11 +517,11 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
 		return;
 
 	/* Weed out disabled counters */
-	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	val &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
 
 	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
 		u64 type, reg;
@@ -530,29 +530,29 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 			continue;
 
 		/* PMSWINC only applies to ... SW_INC! */
-		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
 		type &= kvm_pmu_event_mask(vcpu->kvm);
 		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
 			continue;
 
 		/* increment this even SW_INC counter */
-		reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+		reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
 		reg = lower_32_bits(reg);
-		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
+		vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
 
 		if (reg) /* no overflow on the low part */
 			continue;
 
 		if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
 			/* increment the high counter */
-			reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
+			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
 			reg = lower_32_bits(reg);
-			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
+			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
 			if (!reg) /* mark overflow on the high counter */
-				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
+				vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
 		} else {
 			/* mark overflow on low counter */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
 		}
 	}
 }
@@ -569,7 +569,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 	if (val & ARMV8_PMU_PMCR_E) {
 		kvm_pmu_enable_counter_mask(vcpu,
-		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+		       vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
 	} else {
 		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
@@ -586,8 +586,8 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
+	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
 }
 
 /**
@@ -612,7 +612,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 
 	reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pmc->idx;
-	data = __vcpu_sys_reg(vcpu, reg);
+	data = vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
 	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
@@ -734,7 +734,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
-	__vcpu_sys_reg(vcpu, reg) = data & mask;
+	vcpu_sys_reg(vcpu, reg) = data & mask;
 
 	kvm_pmu_update_pmc_chained(vcpu, select_idx);
 	kvm_pmu_create_perf_event(vcpu, select_idx);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..a3c6419f1df6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -76,7 +76,7 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	    __vcpu_read_sys_reg_from_cpu(reg, &val))
 		return val;
 
-	return __vcpu_sys_reg(vcpu, reg);
+	return vcpu_sys_reg(vcpu, reg);
 }
 
 void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
@@ -85,7 +85,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 	    __vcpu_write_sys_reg_to_cpu(val, reg))
 		return;
 
-	__vcpu_sys_reg(vcpu, reg) = val;
+	vcpu_sys_reg(vcpu, reg) = val;
 }
 
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
@@ -620,12 +620,12 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	       | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
 	if (!system_supports_32bit_el0())
 		val |= ARMV8_PMU_PMCR_LC;
-	__vcpu_sys_reg(vcpu, r->reg) = val;
+	vcpu_sys_reg(vcpu, r->reg) = val;
 }
 
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
-	u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
 	bool enabled = (reg & flags) || vcpu_mode_priv(vcpu);
 
 	if (!enabled)
@@ -664,17 +664,17 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		/* Only update writeable bits of PMCR */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+		val = vcpu_sys_reg(vcpu, PMCR_EL0);
 		val &= ~ARMV8_PMU_PMCR_MASK;
 		val |= p->regval & ARMV8_PMU_PMCR_MASK;
 		if (!system_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
-		__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 		kvm_pmu_handle_pmcr(vcpu, val);
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
-		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+		val = vcpu_sys_reg(vcpu, PMCR_EL0)
 		      & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
 		p->regval = val;
 	}
@@ -689,10 +689,10 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		return false;
 
 	if (p->is_write)
-		__vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
+		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
 	else
 		/* return PMSELR.SEL field */
-		p->regval = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			    & ARMV8_PMU_COUNTER_MASK;
 
 	return true;
@@ -723,7 +723,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
 {
 	u64 pmcr, val;
 
-	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
+	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
 	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
 	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
 		kvm_inject_undefined(vcpu);
@@ -745,7 +745,7 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 		if (pmu_access_event_counter_el0_disabled(vcpu))
 			return false;
 
-		idx = __vcpu_sys_reg(vcpu, PMSELR_EL0)
+		idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
 		      & ARMV8_PMU_COUNTER_MASK;
 	} else if (r->Op2 == 0) {
 		/* PMCCNTR_EL0 */
@@ -796,7 +796,7 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
-		idx = __vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
+		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK;
 		reg = PMEVTYPER0_EL0 + idx;
 	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
@@ -814,10 +814,10 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	if (p->is_write) {
 		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
-		__vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
+		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_PMU_EVTYPE_MASK;
 		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
+		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
 	}
 
 	return true;
@@ -836,16 +836,16 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val = p->regval & mask;
 		if (r->Op2 & 0x1) {
 			/* accessing PMCNTENSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
 			kvm_pmu_enable_counter_mask(vcpu, val);
 			kvm_vcpu_pmu_restore_guest(vcpu);
 		} else {
 			/* accessing PMCNTENCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
 			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 	}
 
 	return true;
@@ -864,12 +864,12 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 		if (r->Op2 & 0x1)
 			/* accessing PMINTENSET_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
 		else
 			/* accessing PMINTENCLR_EL1 */
-			__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
 	}
 
 	return true;
@@ -886,12 +886,12 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= (p->regval & mask);
 		else
 			/* accessing PMOVSCLR_EL0 */
-			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+		p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
 	}
 
 	return true;
@@ -922,10 +922,10 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			return false;
 		}
 
-		__vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
+		vcpu_sys_reg(vcpu, PMUSERENR_EL0) =
 			p->regval & ARMV8_PMU_USERENR_MASK;
 	} else {
-		p->regval = __vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
 			    & ARMV8_PMU_USERENR_MASK;
 	}
 
 	return true;
@@ -2635,7 +2635,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->get_user)
 		return (r->get_user)(vcpu, r, reg, uaddr);
 
-	return reg_to_user(uaddr, &__vcpu_sys_reg(vcpu, r->reg), reg->id);
+	return reg_to_user(uaddr, &vcpu_sys_reg(vcpu, r->reg), reg->id);
 }
 
 int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
@@ -2660,7 +2660,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (r->set_user)
 		return (r->set_user)(vcpu, r, reg, uaddr);
 
-	return reg_from_user(&__vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
+	return reg_from_user(&vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }
 
 static unsigned int num_demux_regs(void)
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 9d0621417c2a..7840d657ab09 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -105,14 +105,14 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+	vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
 }
 
 static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
 	BUG_ON(r->reg >= NR_SYS_REGS);
-	__vcpu_sys_reg(vcpu, r->reg) = r->val;
+	vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
 static inline bool sysreg_hidden(const struct kvm_vcpu *vcpu,

From patchwork Wed Jul 14 09:56:00 2021
X-Patchwork-Id: 12376507
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, suzuki.poulose@arm.com
Subject: [PATCH 4/5] KVM: arm64: Add __vcpu_sys_reg()
Date: Wed, 14 Jul 2021 10:56:00 +0100
Message-Id: <20210714095601.184854-5-alexandru.elisei@arm.com>
In-Reply-To: <20210714095601.184854-1-alexandru.elisei@arm.com>
References: <20210714095601.184854-1-alexandru.elisei@arm.com>

Similar to how __ctxt_sys_reg() returns a pointer to a system register,
add the __vcpu_sys_reg() macro, which also returns the pointer, but
takes a struct kvm_vcpu as a parameter instead of a kvm_cpu_context.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/guest.c            | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 76efede8beae..267b5e6f1c28 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -469,6 +469,8 @@ struct kvm_vcpu_arch {
 
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
+#define __vcpu_sys_reg(v,r)	(__ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+
 #define vcpu_sys_reg(v,r)	(ctxt_sys_reg(&(v)->arch.ctxt, (r)))
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 1dfb83578277..a8c5a76a8e7a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -148,13 +148,13 @@ static void *core_reg_addr(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 		return &vcpu->arch.ctxt.regs.pstate;
 
 	case KVM_REG_ARM_CORE_REG(sp_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SP_EL1);
+		return __vcpu_sys_reg(vcpu, SP_EL1);
 
 	case KVM_REG_ARM_CORE_REG(elr_el1):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, ELR_EL1);
+		return __vcpu_sys_reg(vcpu, ELR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_EL1]):
-		return __ctxt_sys_reg(&vcpu->arch.ctxt, SPSR_EL1);
+		return __vcpu_sys_reg(vcpu, SPSR_EL1);
 
 	case KVM_REG_ARM_CORE_REG(spsr[KVM_SPSR_ABT]):
 		return &vcpu->arch.ctxt.spsr_abt;
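After patches 3 and 4, the two vcpu accessors mirror the ctxt ones. A
hypothetical usage sketch (SP_EL1 as in core_reg_addr() above; variable
names are illustrative only):

	u64 val  = vcpu_sys_reg(vcpu, SP_EL1);	/* read the memory-backed value */
	u64 *ptr = __vcpu_sys_reg(vcpu, SP_EL1);	/* pointer to the backing store,
						 * e.g. for copying to userspace */
	vcpu_sys_reg(vcpu, SP_EL1) = val + 8;	/* lvalue: write it directly */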
From patchwork Wed Jul 14 09:56:01 2021
X-Patchwork-Id: 12376523
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, suzuki.poulose@arm.com, Will Deacon
Subject: [PATCH 5/5] KVM: arm64: nVHE: Remove unneeded isb() when modifying PMSCR_EL1
Date: Wed, 14 Jul 2021 10:56:01 +0100
Message-Id: <20210714095601.184854-6-alexandru.elisei@arm.com>
In-Reply-To: <20210714095601.184854-1-alexandru.elisei@arm.com>
References: <20210714095601.184854-1-alexandru.elisei@arm.com>

According to ARM DDI 0487G.a, page D9-2930, profiling is disabled when
the PE is executing at a higher exception level than the profiling
buffer owning exception level. This is also confirmed by the pseudocode
for the StatisticalProfilingEnabled() function.

During the world switch, and before activating guest traps, KVM executes
at EL2 with the buffer owning exception level being EL1
(MDCR_EL2.E2PB = 0b11). As a result, profiling is already disabled when
draining the buffer, making the isb() after the write to PMSCR_EL1
unnecessary.
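The check described above, reduced to a sketch (loosely paraphrasing the
StatisticalProfilingEnabled() pseudocode from ARM DDI 0487G.a, not a
verbatim copy):

	/*
	 * Profiling produces no data while the PE runs at a higher
	 * exception level than the buffer owning exception level,
	 * regardless of the PMSCR_ELx enable bits.
	 */
	static bool profiling_effectively_enabled(int current_el, int owning_el,
						  bool pmscr_enabled)
	{
		if (current_el > owning_el)
			return false;
		return pmscr_enabled;
	}

	/*
	 * World switch: MDCR_EL2.E2PB == 0b11 makes EL1 the owning EL,
	 * while KVM runs at EL2 -> profiling is already disabled, so
	 * clearing PMSCR_EL1 needs no isb() before draining with
	 * psb_csync().
	 */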
Cc: Will Deacon
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 7d3f25868cae..fdf0e0ba17e9 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -33,7 +33,6 @@ static void __debug_save_spe(u64 *pmscr_el1)
 	/* Yes; save the control register and disable data generation */
 	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
 	write_sysreg_s(0, SYS_PMSCR_EL1);
-	isb();
 
 	/* Now drain all buffered data to memory */
 	psb_csync();