From patchwork Wed Aug 25 16:17:37 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 01/39] KVM: arm64: Make lock_all_vcpus() available to
 the rest of KVM
Date: Wed, 25 Aug 2021 17:17:37 +0100
Message-Id: <20210825161815.266051-2-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>

The VGIC code uses the lock_all_vcpus() function to make sure no VCPUs are
run while it fiddles with the global VGIC state. Move the declaration of
lock_all_vcpus() and the corresponding unlock function into asm/kvm_host.h,
where they can be reused by other parts of KVM/arm64, and rename the
functions to kvm_{lock,unlock}_all_vcpus() to make them more generic.

Because the scope of the code potentially using the functions has
increased, add a lockdep check that kvm->lock is held by the caller.
Holding the lock is necessary because otherwise userspace would be able to
create new VCPUs and run them while the existing VCPUs are locked.

No functional change intended.

Signed-off-by: Alexandru Elisei
Reviewed-by: Suzuki K Poulose
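As a side note for readers of the series, the calling convention for the
renamed helpers follows the pattern sketched below. This is an illustrative
sketch, not part of the patch; fiddle_with_vm_state() is a hypothetical
caller, and the pattern mirrors the vgic code being converted in this patch.

```c
/* Hypothetical caller, for illustration only. */
static int fiddle_with_vm_state(struct kvm *kvm)
{
	int ret = 0;

	/* kvm->lock must be held; the new lockdep assertions enforce this. */
	mutex_lock(&kvm->lock);
	if (!kvm_lock_all_vcpus(kvm)) {
		/* A vcpu->mutex was contended, so a VCPU may be running. */
		mutex_unlock(&kvm->lock);
		return -EBUSY;
	}

	/* ... safely modify VM-wide state here; no VCPU can enter the guest ... */

	kvm_unlock_all_vcpus(kvm);
	mutex_unlock(&kvm->lock);
	return ret;
}
```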
---
 arch/arm64/include/asm/kvm_host.h     |  3 ++
 arch/arm64/kvm/arm.c                  | 41 ++++++++++++++++++++++
 arch/arm64/kvm/vgic/vgic-init.c       |  4 +--
 arch/arm64/kvm/vgic/vgic-its.c        |  8 ++---
 arch/arm64/kvm/vgic/vgic-kvm-device.c | 50 ++++-----------------------
 arch/arm64/kvm/vgic/vgic.h            |  3 --
 6 files changed, 56 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 41911585ae0c..797083203603 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -601,6 +601,9 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);

+bool kvm_lock_all_vcpus(struct kvm *kvm);
+void kvm_unlock_all_vcpus(struct kvm *kvm);
+
 #ifndef __KVM_NVHE_HYPERVISOR__
 #define kvm_call_hyp_nvhe(f, ...)					\
 	({								\
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e9a2b8f27792..ddace63528f1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -647,6 +647,47 @@ void kvm_arm_resume_guest(struct kvm *kvm)
 	}
 }

+/* unlocks vcpus from @vcpu_lock_idx and smaller */
+static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
+{
+	struct kvm_vcpu *tmp_vcpu;
+
+	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
+		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
+		mutex_unlock(&tmp_vcpu->mutex);
+	}
+}
+
+void kvm_unlock_all_vcpus(struct kvm *kvm)
+{
+	lockdep_assert_held(&kvm->lock);
+	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
+}
+
+/* Returns true if all vcpus were locked, false otherwise */
+bool kvm_lock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *tmp_vcpu;
+	int c;
+
+	lockdep_assert_held(&kvm->lock);
+
+	/*
+	 * Any time a vcpu is run, vcpu_load is called which tries to grab the
+	 * vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure that
+	 * no other VCPUs are run and it is safe to fiddle with KVM global
+	 * state.
+	 */
+	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
+		if (!mutex_trylock(&tmp_vcpu->mutex)) {
+			unlock_vcpus(kvm, c - 1);
+			return false;
+		}
+	}
+
+	return true;
+}
+
 static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
 {
 	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index 340c51d87677..6a85aa064a6c 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -87,7 +87,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 		return -ENODEV;

 	ret = -EBUSY;
-	if (!lock_all_vcpus(kvm))
+	if (!kvm_lock_all_vcpus(kvm))
 		return ret;

 	kvm_for_each_vcpu(i, vcpu, kvm) {
@@ -117,7 +117,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 		INIT_LIST_HEAD(&kvm->arch.vgic.rd_regions);

 out_unlock:
-	unlock_all_vcpus(kvm);
+	kvm_unlock_all_vcpus(kvm);
 	return ret;
 }
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 61728c543eb9..3a336a678cb8 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -2005,7 +2005,7 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
 		goto out;
 	}

-	if (!lock_all_vcpus(dev->kvm)) {
+	if (!kvm_lock_all_vcpus(dev->kvm)) {
 		ret = -EBUSY;
 		goto out;
 	}
@@ -2023,7 +2023,7 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
 	} else {
 		*reg = region->its_read(dev->kvm, its, addr, len);
 	}
-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 out:
 	mutex_unlock(&dev->kvm->lock);
 	return ret;
@@ -2668,7 +2668,7 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 	mutex_lock(&kvm->lock);
 	mutex_lock(&its->its_lock);

-	if (!lock_all_vcpus(kvm)) {
+	if (!kvm_lock_all_vcpus(kvm)) {
 		mutex_unlock(&its->its_lock);
 		mutex_unlock(&kvm->lock);
 		return -EBUSY;
@@ -2686,7 +2686,7 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 		break;
 	}

-	unlock_all_vcpus(kvm);
+	kvm_unlock_all_vcpus(kvm);
 	mutex_unlock(&its->its_lock);
 	mutex_unlock(&kvm->lock);
 	return ret;
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 7740995de982..c2f95d124cbc 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -298,44 +298,6 @@ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
 	return 0;
 }

-/* unlocks vcpus from @vcpu_lock_idx and smaller */
-static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
-{
-	struct kvm_vcpu *tmp_vcpu;
-
-	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
-		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
-		mutex_unlock(&tmp_vcpu->mutex);
-	}
-}
-
-void unlock_all_vcpus(struct kvm *kvm)
-{
-	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
-}
-
-/* Returns true if all vcpus were locked, false otherwise */
-bool lock_all_vcpus(struct kvm *kvm)
-{
-	struct kvm_vcpu *tmp_vcpu;
-	int c;
-
-	/*
-	 * Any time a vcpu is run, vcpu_load is called which tries to grab the
-	 * vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure
-	 * that no other VCPUs are run and fiddle with the vgic state while we
-	 * access it.
-	 */
-	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
-		if (!mutex_trylock(&tmp_vcpu->mutex)) {
-			unlock_vcpus(kvm, c - 1);
-			return false;
-		}
-	}
-
-	return true;
-}
-
 /**
  * vgic_v2_attr_regs_access - allows user space to access VGIC v2 state
  *
@@ -366,7 +328,7 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
 	if (ret)
 		goto out;

-	if (!lock_all_vcpus(dev->kvm)) {
+	if (!kvm_lock_all_vcpus(dev->kvm)) {
 		ret = -EBUSY;
 		goto out;
 	}
@@ -383,7 +345,7 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
 		break;
 	}

-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 out:
 	mutex_unlock(&dev->kvm->lock);
 	return ret;
@@ -532,7 +494,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
 		goto out;
 	}

-	if (!lock_all_vcpus(dev->kvm)) {
+	if (!kvm_lock_all_vcpus(dev->kvm)) {
 		ret = -EBUSY;
 		goto out;
 	}
@@ -582,7 +544,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
 		break;
 	}

-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 out:
 	mutex_unlock(&dev->kvm->lock);
 	return ret;
@@ -637,12 +599,12 @@ static int vgic_v3_set_attr(struct kvm_device *dev,
 	case KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES:
 		mutex_lock(&dev->kvm->lock);

-		if (!lock_all_vcpus(dev->kvm)) {
+		if (!kvm_lock_all_vcpus(dev->kvm)) {
 			mutex_unlock(&dev->kvm->lock);
 			return -EBUSY;
 		}
 		ret = vgic_v3_save_pending_tables(dev->kvm);
-		unlock_all_vcpus(dev->kvm);
+		kvm_unlock_all_vcpus(dev->kvm);
 		mutex_unlock(&dev->kvm->lock);
 		return ret;
 	}
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index dc1f3d1657ee..0511618c89f6 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -252,9 +252,6 @@ int vgic_init(struct kvm *kvm);
 void vgic_debug_init(struct kvm *kvm);
 void vgic_debug_destroy(struct kvm *kvm);

-bool lock_all_vcpus(struct kvm *kvm);
-void unlock_all_vcpus(struct kvm *kvm);
-
 static inline int vgic_v3_max_apr_idx(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *cpu_if = &vcpu->arch.vgic_cpu;
From patchwork Wed Aug 25 16:17:38 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 02/39] KVM: arm64: Add lock/unlock memslot user API
Date: Wed, 25 Aug 2021 17:17:38 +0100
Message-Id: <20210825161815.266051-3-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>

Stage 2 faults triggered by the profiling buffer attempting to write to
memory are reported by the SPE hardware by asserting a buffer management
event interrupt. Interrupts are by their nature asynchronous, which means
that the guest might have changed its stage 1 translation tables since the
attempted write. SPE reports the guest virtual address that caused the data
abort, not the IPA, which means that KVM would have to walk the guest's
stage 1 tables to find the IPA. Using the AT instruction to walk the
guest's tables in hardware is not an option because it doesn't report the
IPA in the case of a stage 2 fault on a stage 1 table walk.

Avoid both issues by pre-mapping the guest memory at stage 2. This is done
by adding a capability that allows the user to pin the memory backing a
memslot. The same capability can be used to unlock a memslot, which unpins
the pages associated with the memslot, but doesn't unmap the IPA range from
stage 2; in this case, the addresses will be unmapped from stage 2 via the
MMU notifiers when the process' address space changes.
For now, the capability doesn't actually do anything other than checking
that the usage is correct; the memory operations will be added in future
patches.

Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/api.rst   | 56 +++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  3 ++
 arch/arm64/kvm/arm.c             | 42 ++++++++++++++++--
 arch/arm64/kvm/mmu.c             | 76 ++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h         |  8 ++++
 5 files changed, 181 insertions(+), 4 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index dae68e68ca23..741327ef06b0 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6682,6 +6682,62 @@ MAP_SHARED mmap will result in an -EINVAL return.
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.

+7.29 KVM_CAP_ARM_LOCK_USER_MEMORY_REGION
+----------------------------------------
+
+:Architectures: arm64
+:Target: VM
+:Parameters: flags is one of KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_LOCK or
+             KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK
+             args[0] is the slot number
+             args[1] specifies the permissions when the memslot is locked
+             or if all memslots should be unlocked
+
+The presence of this capability indicates that KVM supports locking the
+memory associated with the memslot, and unlocking a previously locked
+memslot.
+
+The 'flags' parameter is defined as follows:
+
+7.29.1 KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_LOCK
+-------------------------------------------------
+
+:Capability: 'flags' parameter to KVM_CAP_ARM_LOCK_USER_MEMORY_REGION
+:Architectures: arm64
+:Target: VM
+:Parameters: args[0] contains the memory slot number
+             args[1] contains the permissions for the locked memory:
+             KVM_ARM_LOCK_MEM_READ (mandatory) to map it with read
+             permissions and KVM_ARM_LOCK_MEM_WRITE (optional) with write
+             permissions
+:Returns: 0 on success; negative error code on failure
+
+Enabling this capability causes the memory described by the memslot to be
+pinned in the process address space and the corresponding IPA range to be
+mapped at stage 2. The permissions specified in args[1] apply to both
+mappings. The memory pinned with this capability counts towards the max
+locked memory limit for the current process.
+
+The capability must be enabled before any VCPUs have run. The virtual
+memory range described by the memslot must be mapped in the userspace
+process without any gaps. It is considered an error if write permissions
+are specified for a memslot which logs dirty pages.
+
+7.29.2 KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK
+---------------------------------------------------
+
+:Capability: 'flags' parameter to KVM_CAP_ARM_LOCK_USER_MEMORY_REGION
+:Architectures: arm64
+:Target: VM
+:Parameters: args[0] contains the memory slot number
+             args[1] optionally contains the flag KVM_ARM_UNLOCK_MEM_ALL,
+             which unlocks all previously locked memslots.
+:Returns: 0 on success; negative error code on failure
+
+Enabling this capability causes the memory pinned when locking the memslot
+specified in args[0] to be unpinned, or, optionally, the memory associated
+with all locked memslots, to be unpinned. The IPA range is not unmapped
+from stage 2.
+
 8. Other capabilities.
 ======================
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b52c5c4b9a3d..ef079b5eb475 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -216,6 +216,9 @@ static inline void __invalidate_icache_guest_page(void *va, size_t size)
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);

+int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags);
+int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags);
+
 static inline unsigned int kvm_get_vmid_bits(void)
 {
 	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ddace63528f1..57ac97b30b3d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,16 +80,43 @@ int kvm_arch_check_processor_compat(void *opaque)
 	return 0;
 }

+static int kvm_arm_lock_memslot_supported(void)
+{
+	return 0;
+}
+
+static int kvm_lock_user_memory_region_ioctl(struct kvm *kvm,
+					     struct kvm_enable_cap *cap)
+{
+	u64 slot, flags;
+	u32 action;
+
+	if (cap->args[2] || cap->args[3])
+		return -EINVAL;
+
+	slot = cap->args[0];
+	flags = cap->args[1];
+	action = cap->flags;
+
+	switch (action) {
+	case KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_LOCK:
+		return kvm_mmu_lock_memslot(kvm, slot, flags);
+	case KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK:
+		return kvm_mmu_unlock_memslot(kvm, slot, flags);
+	default:
+		return -EINVAL;
+	}
+}
+
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
 	int r;

-	if (cap->flags)
-		return -EINVAL;
-
 	switch (cap->cap) {
 	case KVM_CAP_ARM_NISV_TO_USER:
+		if (cap->flags)
+			return -EINVAL;
 		r = 0;
 		kvm->arch.return_nisv_io_abort_to_user = true;
 		break;
@@ -99,6 +126,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		r = 0;
 		kvm->arch.mte_enabled = true;
 		break;
+	case KVM_CAP_ARM_LOCK_USER_MEMORY_REGION:
+		if (!kvm_arm_lock_memslot_supported())
+			return -EINVAL;
+		r = kvm_lock_user_memory_region_ioctl(kvm, cap);
+		break;
 	default:
 		r = -EINVAL;
 		break;
@@ -166,7 +198,6 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }

-
 /**
  * kvm_arch_destroy_vm - destroy the VM data structure
  * @kvm: pointer to the KVM struct
@@ -274,6 +305,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
 		r = system_has_full_ptr_auth();
 		break;
+	case KVM_CAP_ARM_LOCK_USER_MEMORY_REGION:
+		r = kvm_arm_lock_memslot_supported();
+		break;
 	default:
 		r = 0;
 	}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0625bf2353c2..689b24cb0f10 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1244,6 +1244,82 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	return ret;
 }

+int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
+{
+	struct kvm_memory_slot *memslot;
+	struct kvm_vcpu *vcpu;
+	int i, ret;
+
+	if (slot >= KVM_MEM_SLOTS_NUM)
+		return -EINVAL;
+
+	if (!(flags & KVM_ARM_LOCK_MEM_READ))
+		return -EINVAL;
+
+	mutex_lock(&kvm->lock);
+
+	if (!kvm_lock_all_vcpus(kvm)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (vcpu->arch.has_run_once) {
+			ret = -EBUSY;
+			goto out_unlock_vcpus;
+		}
+	}
+
+	mutex_lock(&kvm->slots_lock);
+
+	memslot = id_to_memslot(kvm_memslots(kvm), slot);
+	if (!memslot) {
+		ret = -EINVAL;
+		goto out_unlock_slots;
+	}
+	if ((flags & KVM_ARM_LOCK_MEM_WRITE) &&
+	    ((memslot->flags & KVM_MEM_READONLY) || memslot->dirty_bitmap)) {
+		ret = -EPERM;
+		goto out_unlock_slots;
+	}
+
+	ret = -EINVAL;
+
+out_unlock_slots:
+	mutex_unlock(&kvm->slots_lock);
+out_unlock_vcpus:
+	kvm_unlock_all_vcpus(kvm);
+out:
+	mutex_unlock(&kvm->lock);
+	return ret;
+}
+
+int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags)
+{
+	struct kvm_memory_slot *memslot;
+	int ret;
+
+	if (flags & KVM_ARM_UNLOCK_MEM_ALL)
+		return -EINVAL;
+
+	if (slot >= KVM_MEM_SLOTS_NUM)
+		return -EINVAL;
+
+	mutex_lock(&kvm->slots_lock);
+
+	memslot = id_to_memslot(kvm_memslots(kvm), slot);
+	if (!memslot) {
+		ret = -EINVAL;
+		goto out_unlock_slots;
+	}
+
+	ret = -EINVAL;
+
+out_unlock_slots:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	if (!kvm->arch.mmu.pgt)
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d9e4aabcb31a..bcf62c7bdd2d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1112,6 +1112,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_BINARY_STATS_FD 203
 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
 #define KVM_CAP_ARM_MTE 205
+#define KVM_CAP_ARM_LOCK_USER_MEMORY_REGION 206

 #ifdef KVM_CAP_IRQ_ROUTING
@@ -1459,6 +1460,13 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_SVM_OFF		_IO(KVMIO, 0xb3)
 #define KVM_ARM_MTE_COPY_TAGS	_IOR(KVMIO, 0xb4, struct kvm_arm_copy_mte_tags)

+/* Used by KVM_CAP_ARM_LOCK_USER_MEMORY_REGION */
+#define KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_LOCK	(1 << 0)
+#define KVM_ARM_LOCK_MEM_READ				(1 << 0)
+#define KVM_ARM_LOCK_MEM_WRITE				(1 << 1)
+#define KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK	(1 << 1)
+#define KVM_ARM_UNLOCK_MEM_ALL				(1 << 0)
+
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO, 0xe0, struct kvm_create_device)
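To make the new user API concrete, here is a sketch of how a VMM might
enable the capability from userspace. It is illustrative only, not part of
the patch: vm_fd and the slot number are assumptions, and the flag and
argument layout follows the api.rst text and uapi defines above.

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch only: lock memslot 0 for read and write. Assumes vm_fd is an
 * open VM file descriptor and slot 0 was already created with
 * KVM_SET_USER_MEMORY_REGION. Must be called before any VCPU has run.
 */
static int lock_memslot_rw(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_LOCK_USER_MEMORY_REGION,
		.flags = KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_LOCK,
		.args[0] = 0,	/* memslot id */
		.args[1] = KVM_ARM_LOCK_MEM_READ | KVM_ARM_LOCK_MEM_WRITE,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```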
From patchwork Wed Aug 25 16:17:39 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 03/39] KVM: arm64: Implement the memslot lock/unlock
 functionality
Date: Wed, 25 Aug 2021 17:17:39 +0100
Message-Id: <20210825161815.266051-4-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>

Pin memory in the process address space and map it in the stage 2 tables
as a result of userspace enabling the KVM_CAP_ARM_LOCK_USER_MEMORY_REGION
capability; and unpin it from the process address space when the
capability is used with the KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK
flag.

The current implementation has two drawbacks which will be fixed in future
patches:

- The dcache maintenance is done when the memslot is locked, which means
  that it is possible that memory changes made by userspace after the
  ioctl completes won't be visible to a guest running with the MMU off.

- Tag scrubbing is done when the memslot is locked. If the MTE capability
  is enabled after the ioctl, the guest will be able to access
  unsanitised pages. This is prevented by forbidding userspace to enable
  the MTE capability if any memslots are locked.

Only PAGE_SIZE mappings are supported at stage 2.

Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/api.rst    |   8 +-
 arch/arm64/include/asm/kvm_host.h |  11 ++
 arch/arm64/kvm/arm.c              |  22 ++++
 arch/arm64/kvm/mmu.c              | 211 +++++++++++++++++++++++++++++-
 4 files changed, 245 insertions(+), 7 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 741327ef06b0..5aa251df7077 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6717,10 +6717,10 @@ mapped at stage 2.
 The permissions specified in args[1] apply to both
 mappings. The memory pinned with this capability counts towards the max
 locked memory limit for the current process.

-The capability must be enabled before any VCPUs have run. The virtual
-memory range described by the memslot must be mapped in the userspace
-process without any gaps. It is considered an error if write permissions
-are specified for a memslot which logs dirty pages.
+The capability must be enabled before any VCPUs have run. The entire
+virtual memory range described by the memslot must be mapped by the
+userspace process. It is considered an error if write permissions are
+specified for a memslot which logs dirty pages.

 7.29.2 KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK
 ---------------------------------------------------
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 797083203603..97ff3ed5d4b7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -98,7 +98,18 @@ struct kvm_s2_mmu {
 	struct kvm_arch *arch;
 };

+#define KVM_MEMSLOT_LOCK_READ	(1 << 0)
+#define KVM_MEMSLOT_LOCK_WRITE	(1 << 1)
+#define KVM_MEMSLOT_LOCK_MASK	0x3
+
+struct kvm_memory_slot_page {
+	struct list_head list;
+	struct page *page;
+};
+
 struct kvm_arch_memory_slot {
+	struct kvm_memory_slot_page pages;
+	u32 flags;
 };

 struct kvm_arch {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 57ac97b30b3d..efb3501c6016 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -108,6 +108,25 @@ static int kvm_lock_user_memory_region_ioctl(struct kvm *kvm,
 	}
 }

+static bool kvm_arm_has_locked_memslots(struct kvm *kvm)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot;
+	bool has_locked_memslots = false;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	kvm_for_each_memslot(memslot, slots) {
+		if (memslot->arch.flags & KVM_MEMSLOT_LOCK_MASK) {
+			has_locked_memslots = true;
+			break;
+		}
+	}
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	return has_locked_memslots;
+}
+
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
@@ -123,6 +142,9 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	case KVM_CAP_ARM_MTE:
 		if (!system_supports_mte() || kvm->created_vcpus)
 			return -EINVAL;
+		if (kvm_arm_lock_memslot_supported() &&
+		    kvm_arm_has_locked_memslots(kvm))
+			return -EPERM;
 		r = 0;
 		kvm->arch.mte_enabled = true;
 		break;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 689b24cb0f10..59c2bfef2fd1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -72,6 +72,11 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 	return memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY);
 }

+static bool memslot_is_locked(struct kvm_memory_slot *memslot)
+{
+	return memslot->arch.flags & KVM_MEMSLOT_LOCK_MASK;
+}
+
 /**
  * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
  * @kvm:	pointer to kvm structure.
@@ -722,6 +727,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	if (map_size == PAGE_SIZE)
 		return true;

+	/* Allow only PAGE_SIZE mappings for locked memslots */
+	if (memslot_is_locked(memslot))
+		return false;
+
 	size = memslot->npages * PAGE_SIZE;

 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
@@ -1244,6 +1253,159 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	return ret;
 }

+static int try_rlimit_memlock(unsigned long npages)
+{
+	unsigned long lock_limit;
+	bool has_lock_cap;
+	int ret = 0;
+
+	has_lock_cap = capable(CAP_IPC_LOCK);
+	if (has_lock_cap)
+		goto out;
+
+	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+
+	mmap_read_lock(current->mm);
+	if (npages + current->mm->locked_vm > lock_limit)
+		ret = -ENOMEM;
+	mmap_read_unlock(current->mm);
+
+out:
+	return ret;
+}
+
+static void unpin_memslot_pages(struct kvm_memory_slot *memslot, bool writable)
+{
+	struct kvm_memory_slot_page *entry, *tmp;
+
+	list_for_each_entry_safe(entry, tmp, &memslot->arch.pages.list, list) {
+		if (writable)
+			set_page_dirty_lock(entry->page);
+		unpin_user_page(entry->page);
+		kfree(entry);
+	}
+}
+
+static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
+			u64 flags)
+{
+	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
+	struct kvm_memory_slot_page *page_entry;
+	bool writable = flags & KVM_ARM_LOCK_MEM_WRITE;
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
+	struct vm_area_struct *vma;
+	unsigned long npages = memslot->npages;
+	unsigned int pin_flags = FOLL_LONGTERM;
+	unsigned long i, hva, ipa, mmu_seq;
+	int ret;
+
+	ret = try_rlimit_memlock(npages);
+	if (ret)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&memslot->arch.pages.list);
+
+	if (writable) {
+		prot |= KVM_PGTABLE_PROT_W;
+		pin_flags |= FOLL_WRITE;
+	}
+
+	hva = memslot->userspace_addr;
+	ipa = memslot->base_gfn << PAGE_SHIFT;
+
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();
+
+	for (i = 0; i < npages; i++) {
+		page_entry = kzalloc(sizeof(*page_entry), GFP_KERNEL);
+		if (!page_entry) {
+			unpin_memslot_pages(memslot, writable);
+			ret = -ENOMEM;
+			goto out_err;
+		}
+
+		mmap_read_lock(current->mm);
+		ret = pin_user_pages(hva, 1, pin_flags, &page_entry->page, &vma);
+		if (ret != 1) {
+			mmap_read_unlock(current->mm);
+			unpin_memslot_pages(memslot, writable);
+			ret = -ENOMEM;
+			goto out_err;
+		}
+		if (kvm_has_mte(kvm)) {
+			if (vma->vm_flags & VM_SHARED) {
+				ret = -EFAULT;
+			} else {
+				ret = sanitise_mte_tags(kvm,
+						page_to_pfn(page_entry->page),
+						PAGE_SIZE);
+			}
+			if (ret) {
+				mmap_read_unlock(current->mm);
+				goto out_err;
+			}
+		}
+		mmap_read_unlock(current->mm);
+
+		ret = kvm_mmu_topup_memory_cache(&cache, kvm_mmu_cache_min_pages(kvm));
+		if (ret) {
+			unpin_memslot_pages(memslot, writable);
+			goto out_err;
+		}
+
+		spin_lock(&kvm->mmu_lock);
+		if (mmu_notifier_retry(kvm, mmu_seq)) {
+			spin_unlock(&kvm->mmu_lock);
+			unpin_memslot_pages(memslot, writable);
+			ret = -EAGAIN;
+			goto out_err;
+		}
+
+		ret = kvm_pgtable_stage2_map(pgt, ipa, PAGE_SIZE,
+					     page_to_phys(page_entry->page),
+					     prot, &cache);
+		spin_unlock(&kvm->mmu_lock);
+
+		if (ret) {
+			kvm_pgtable_stage2_unmap(pgt, memslot->base_gfn << PAGE_SHIFT,
+						 i << PAGE_SHIFT);
+			unpin_memslot_pages(memslot, writable);
+			goto out_err;
+		}
+		list_add(&page_entry->list, &memslot->arch.pages.list);
+
+		hva += PAGE_SIZE;
+		ipa += PAGE_SIZE;
+	}
+
+	/*
+	 * Even though we've checked the limit at the start, we can still exceed
+	 * it if userspace locked other pages in the meantime or if the
+	 * CAP_IPC_LOCK capability has been revoked.
+	 */
+	ret = account_locked_vm(current->mm, npages, true);
+	if (ret) {
+		kvm_pgtable_stage2_unmap(pgt, memslot->base_gfn << PAGE_SHIFT,
+					 npages << PAGE_SHIFT);
+		unpin_memslot_pages(memslot, writable);
+		goto out_err;
+	}
+
+	memslot->arch.flags = KVM_MEMSLOT_LOCK_READ;
+	if (writable)
+		memslot->arch.flags |= KVM_MEMSLOT_LOCK_WRITE;
+
+	kvm_mmu_free_memory_cache(&cache);
+
+	return 0;
+
+out_err:
+	kvm_mmu_free_memory_cache(&cache);
+	return ret;
+}
+
 int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
 {
 	struct kvm_memory_slot *memslot;
@@ -1283,7 +1445,12 @@ int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
 		goto out_unlock_slots;
 	}

-	ret = -EINVAL;
+	if (memslot_is_locked(memslot)) {
+		ret = -EBUSY;
+		goto out_unlock_slots;
+	}
+
+	ret = lock_memslot(kvm, memslot, flags);

 out_unlock_slots:
 	mutex_unlock(&kvm->slots_lock);
@@ -1294,13 +1461,51 @@ int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags)
 	return ret;
 }

+static int unlock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	bool writable = memslot->arch.flags & KVM_MEMSLOT_LOCK_WRITE;
+	unsigned long npages = memslot->npages;
+
+	unpin_memslot_pages(memslot, writable);
+	account_locked_vm(current->mm, npages, false);
+
+	memslot->arch.flags &= ~KVM_MEMSLOT_LOCK_MASK;
+
+	return 0;
+}
+
+static int unlock_all_memslots(struct kvm *kvm)
+{
+	struct kvm_memory_slot *memslot;
+	int err, ret;
+
+	mutex_lock(&kvm->slots_lock);
+
+	ret = 0;
+	kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
+		if (!memslot_is_locked(memslot))
+			continue;
+		err = unlock_memslot(kvm, memslot);
+		if (err) {
+			kvm_err("error unlocking memslot %u: %d\n",
+				memslot->id, err);
+			/* Continue to try unlocking the rest of the slots */
+			ret = err;
+		}
+	}
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags)
 {
 	struct kvm_memory_slot *memslot;
 	int ret;

 	if (flags & KVM_ARM_UNLOCK_MEM_ALL)
-		return -EINVAL;
+		return unlock_all_memslots(kvm);

 	if (slot >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
@@ -1313,7 +1518,7 @@ int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags)
 		goto out_unlock_slots;
 	}

-	ret = -EINVAL;
+	ret = unlock_memslot(kvm, memslot);

 out_unlock_slots:
 	mutex_unlock(&kvm->slots_lock);
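The unlock side has a matching userspace sketch, again illustrative rather
than part of the patch (vm_fd is an assumption, and the includes match the
earlier lock_memslot_rw() example). Note that the implementation above
handles KVM_ARM_UNLOCK_MEM_ALL before validating the slot number, so
args[0] does not matter in that case.

```c
/*
 * Sketch only: unpin every locked memslot, e.g. at VM teardown.
 * The IPA ranges stay mapped at stage 2; the MMU notifiers tear them
 * down when the process address space changes.
 */
static int unlock_all_memslots_user(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_LOCK_USER_MEMORY_REGION,
		.flags = KVM_ARM_LOCK_USER_MEMORY_REGION_FLAGS_UNLOCK,
		.args[1] = KVM_ARM_UNLOCK_MEM_ALL,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```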
From patchwork Wed Aug 25 16:17:40 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 04/39] KVM: arm64: Defer CMOs for locked memslots
 until a VCPU is run
Date: Wed, 25 Aug 2021 17:17:40 +0100
Message-Id: <20210825161815.266051-5-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>

KVM relies on doing dcache maintenance on stage 2 faults to present to a
guest running with the MMU off the same view of memory as userspace. For
locked memslots, KVM so far has done the dcache maintenance when a memslot
is locked, but that leaves KVM in a rather awkward position: what userspace
writes to guest memory after the memslot is locked, but before a VCPU is
run, might not be visible to the guest.

Fix this by deferring the dcache maintenance until the first VCPU is run.
Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  7 ++++
 arch/arm64/include/asm/kvm_mmu.h  |  5 +++
 arch/arm64/kvm/arm.c              |  3 ++
 arch/arm64/kvm/mmu.c              | 56 ++++++++++++++++++++++++++++---
 4 files changed, 67 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 97ff3ed5d4b7..ed67f914d169 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -112,6 +112,10 @@ struct kvm_arch_memory_slot {
 	u32 flags;
 };

+/* kvm->arch.mmu_pending_ops flags */
+#define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
+#define KVM_MAX_MMU_PENDING_OPS		1
+
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;

@@ -135,6 +139,9 @@ struct kvm_arch {
 	 */
 	bool return_nisv_io_abort_to_user;

+	/* Defer MMU operations until a VCPU is run. */
+	unsigned long mmu_pending_ops;
+
 	/*
 	 * VM-wide PMU filter, implemented as a bitmap and big enough for
 	 * up to 2^10 events (ARMv8.0) or 2^16 events (ARMv8.1+).
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ef079b5eb475..525c223e769f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -219,6 +219,11 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 int kvm_mmu_lock_memslot(struct kvm *kvm, u64 slot, u64 flags);
 int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags);

+#define kvm_mmu_has_pending_ops(kvm)	\
+	(!bitmap_empty(&(kvm)->arch.mmu_pending_ops, KVM_MAX_MMU_PENDING_OPS))
+
+void kvm_mmu_perform_pending_ops(struct kvm *kvm);
+
 static inline unsigned int kvm_get_vmid_bits(void)
 {
 	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index efb3501c6016..144c982912d8 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -829,6 +829,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	if (unlikely(!kvm_vcpu_initialized(vcpu)))
 		return -ENOEXEC;

+	if (unlikely(kvm_mmu_has_pending_ops(vcpu->kvm)))
+		kvm_mmu_perform_pending_ops(vcpu->kvm);
+
 	ret = kvm_vcpu_first_run_init(vcpu);
 	if (ret)
 		return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 59c2bfef2fd1..94fa08f3d9d3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1253,6 +1253,41 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	return ret;
 }

+/*
+ * It's safe to do the CMOs when the first VCPU is run because:
+ * - VCPUs cannot run until mmu_pending_ops is cleared.
+ * - Memslots cannot be modified because we hold the kvm->slots_lock.
+ *
+ * It's safe to periodically release the mmu_lock because:
+ * - VCPUs cannot run.
+ * - Any changes to the stage 2 tables triggered by the MMU notifiers also take
+ *   the mmu_lock, which means accesses will be serialized.
+ * - Stage 2 tables cannot be freed from under us as long as at least one VCPU
+ *   is live, which means that the VM will be live.
+ */
+void kvm_mmu_perform_pending_ops(struct kvm *kvm)
+{
+	struct kvm_memory_slot *memslot;
+
+	mutex_lock(&kvm->slots_lock);
+	if (!kvm_mmu_has_pending_ops(kvm))
+		goto out_unlock;
+
+	if (test_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE, &kvm->arch.mmu_pending_ops)) {
+		kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
+			if (!memslot_is_locked(memslot))
+				continue;
+			stage2_flush_memslot(kvm, memslot);
+		}
+	}
+
+	bitmap_zero(&kvm->arch.mmu_pending_ops, KVM_MAX_MMU_PENDING_OPS);
+
+out_unlock:
+	mutex_unlock(&kvm->slots_lock);
+	return;
+}
+
 static int try_rlimit_memlock(unsigned long npages)
 {
 	unsigned long lock_limit;
@@ -1293,7 +1328,8 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	struct kvm_memory_slot_page *page_entry;
 	bool writable = flags & KVM_ARM_LOCK_MEM_WRITE;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
-	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
+	struct kvm_pgtable pgt;
+	struct kvm_pgtable_mm_ops mm_ops;
 	struct vm_area_struct *vma;
 	unsigned long npages = memslot->npages;
 	unsigned int pin_flags = FOLL_LONGTERM;
@@ -1311,6 +1347,16 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		pin_flags |= FOLL_WRITE;
 	}

+	/*
+	 * Make a copy of the stage 2 translation table struct to remove the
+	 * dcache callback so we can postpone the cache maintenance operations
+	 * until the first VCPU is run.
+	 */
+	mm_ops = *kvm->arch.mmu.pgt->mm_ops;
+	mm_ops.dcache_clean_inval_poc = NULL;
+	pgt = *kvm->arch.mmu.pgt;
+	pgt.mm_ops = &mm_ops;
+
 	hva = memslot->userspace_addr;
 	ipa = memslot->base_gfn << PAGE_SHIFT;

@@ -1362,13 +1408,13 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		goto out_err;
 	}

-	ret = kvm_pgtable_stage2_map(pgt, ipa, PAGE_SIZE,
+	ret = kvm_pgtable_stage2_map(&pgt, ipa, PAGE_SIZE,
 				     page_to_phys(page_entry->page),
 				     prot, &cache);
 	spin_unlock(&kvm->mmu_lock);

 	if (ret) {
-		kvm_pgtable_stage2_unmap(pgt, memslot->base_gfn << PAGE_SHIFT,
+		kvm_pgtable_stage2_unmap(&pgt, memslot->base_gfn << PAGE_SHIFT,
 					 i << PAGE_SHIFT);
 		unpin_memslot_pages(memslot, writable);
 		goto out_err;
@@ -1387,7 +1433,7 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	 */
 	ret = account_locked_vm(current->mm, npages, true);
 	if (ret) {
-		kvm_pgtable_stage2_unmap(pgt, memslot->base_gfn << PAGE_SHIFT,
+		kvm_pgtable_stage2_unmap(&pgt, memslot->base_gfn << PAGE_SHIFT,
 					 npages << PAGE_SHIFT);
 		unpin_memslot_pages(memslot, writable);
 		goto out_err;
@@ -1397,6 +1443,8 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	if (writable)
 		memslot->arch.flags |= KVM_MEMSLOT_LOCK_WRITE;

+	set_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE, &kvm->arch.mmu_pending_ops);
+
 	kvm_mmu_free_memory_cache(&cache);

 	return 0;
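To illustrate the window this patch closes, here is a sketch of a VMM boot
sequence that now works even when the guest image is written after locking.
It is an assumption-laden illustration, not part of the patch: vm_fd,
vcpu_fd, mem and the image parameters are made up, and lock_memslot_rw()
is the sketch from the patch 02 example.

```c
/*
 * Sketch only. Before this patch, the memcpy() below could be invisible
 * to a guest running with the MMU off, because the dcache was cleaned
 * at lock time. Now the clean is deferred to the first KVM_RUN.
 */
static void boot_vm(int vm_fd, int vcpu_fd, void *mem,
		    const void *image, size_t size)
{
	lock_memslot_rw(vm_fd);		/* pins and maps the memslot */
	memcpy(mem, image, size);	/* written after locking */
	ioctl(vcpu_fd, KVM_RUN, 0);	/* deferred dcache clean runs here */
}
```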
From patchwork Wed Aug 25 16:17:41 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 05/39] KVM: arm64: Perform CMOs on locked memslots
 when userspace resets VCPUs
Date: Wed, 25 Aug 2021 17:17:41 +0100
Message-Id: <20210825161815.266051-6-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>

Userspace resets a VCPU that has already run by means of a
KVM_ARM_VCPU_INIT ioctl. This is usually done after a VM shutdown and
before the same VM is rebooted, and during this interval the VM memory can
be modified by userspace (for example, to copy the original guest kernel
image).
In this situation, KVM unmaps the entire stage 2 to trigger stage 2 faults,
which ensures that the guest has the same view of memory as the host's
userspace.

Unmapping stage 2 is not an option for locked memslots, so instead do the
cache maintenance the first time a VCPU is run, similar to what KVM does
when a memslot is locked.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  3 ++-
 arch/arm64/kvm/mmu.c              | 13 ++++++++++++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ed67f914d169..68905bd47f85 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -114,7 +114,8 @@ struct kvm_arch_memory_slot {

 /* kvm->arch.mmu_pending_ops flags */
 #define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
-#define KVM_MAX_MMU_PENDING_OPS		1
+#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE	1
+#define KVM_MAX_MMU_PENDING_OPS		2

 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 94fa08f3d9d3..f1f8a87550d1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -560,8 +560,16 @@ void stage2_unmap_vm(struct kvm *kvm)
 	spin_lock(&kvm->mmu_lock);

 	slots = kvm_memslots(kvm);
-	kvm_for_each_memslot(memslot, slots)
+	kvm_for_each_memslot(memslot, slots) {
+		if (memslot_is_locked(memslot)) {
+			set_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE,
+				&kvm->arch.mmu_pending_ops);
+			set_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE,
+				&kvm->arch.mmu_pending_ops);
+			continue;
+		}
 		stage2_unmap_memslot(kvm, memslot);
+	}

 	spin_unlock(&kvm->mmu_lock);
 	mmap_read_unlock(current->mm);
@@ -1281,6 +1289,9 @@ void kvm_mmu_perform_pending_ops(struct kvm *kvm)
 		}
 	}

+	if (test_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE, &kvm->arch.mmu_pending_ops))
+		icache_inval_all_pou();
+
 	bitmap_zero(&kvm->arch.mmu_pending_ops, KVM_MAX_MMU_PENDING_OPS);

 out_unlock:
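The reboot path the commit message describes can be sketched the same way
as the earlier boot example; again, this is an illustration with assumed
names (vcpu_fd, mem, image) rather than part of the patch. The point is
that after a reset, the next KVM_RUN performs the pending dcache flush and
icache invalidation for locked memslots instead of relying on stage 2
faults.

```c
/* Sketch only: reset a VCPU, reload the guest image, and rerun. */
static void reboot_vm(int vcpu_fd, struct kvm_vcpu_init *init,
		      void *mem, const void *image, size_t size)
{
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);	/* sets the pending ops */
	memcpy(mem, image, size);			/* userspace rewrites memory */
	ioctl(vcpu_fd, KVM_RUN, 0);			/* pending CMOs run here */
}
```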
Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=XSsT/DL88HA0mfwAqfOwogwsP5AUigrjLYRHCnADzDs=; b=Hgr+W4Mg6os5Rl QroFSnLhe+WutfJjaD/7GvgiHDD3SjCqjsQ9uwQ81tMSlnaBcAjYYlwU8cNCP6lknh1S1hKbP8HWA ObN2OddNEVA1m1Ri7O+LXyBTZfRiMapH3LRknGb74l0g+gCv43/qd8N4RoQ4qwHl0eckiWBAJz2/9 MtgkPD0W2ie8pjTnscBD99JFYn81inkBUbxD4kGcTgcBx3yWpaP0CsBeM7/sDKs/mkFvXI/Cfj5S3 PNIO/3wevtr55k2IrE2/UQZ87Z6Mgm7KmNaWCTTe1f8DTOloGsTak0D1jcRsbm2oUedSQ2d/sPo+e G1BTtqplzufRV2WtTaWw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mIvcL-007hKL-A4; Wed, 25 Aug 2021 16:19:09 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mIvac-007gTr-Uj for linux-arm-kernel@lists.infradead.org; Wed, 25 Aug 2021 16:17:25 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CAD86101E; Wed, 25 Aug 2021 09:17:21 -0700 (PDT) Received: from monolith.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7BB583F66F; Wed, 25 Aug 2021 09:17:20 -0700 (PDT) From: Alexandru Elisei To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH v4 06/39] KVM: arm64: Delay tag scrubbing for locked memslots until a VCPU runs Date: Wed, 25 Aug 2021 17:17:42 +0100 Message-Id: <20210825161815.266051-7-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com> References: <20210825161815.266051-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210825_091723_144071_1BE6E0BA X-CRM114-Status: GOOD ( 23.77 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When an MTE-enabled guest first accesses a physical page, that page must be scrubbed for tags. This is normally done by KVM on a translation fault, but with locked memslots we will not get translation faults. So far, this has been handled by forbidding userspace to enable the MTE capability after locking a memslot. Remove this constraint by deferring tag cleaning until the first VCPU is run, similar to how KVM handles cache maintenance operations. When userspace resets a VCPU, KVM again performs cache maintenance operations on locked memslots because userspace might have modified the guest memory. Clean the tags the next time a VCPU is run for the same reason. 
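To make the deferral concrete, the pending-ops pattern can be modelled in a
few lines of standalone C. This is illustrative only: the names, bit values
and printf() placeholders below are not the kernel's, and the real code walks
memslots under the appropriate locks.

#include <stdio.h>

#define OP_FLUSH_DCACHE   (1UL << 0)	/* illustrative stand-ins for the */
#define OP_INVAL_ICACHE   (1UL << 1)	/* KVM_LOCKED_MEMSLOT_* flags     */
#define OP_SANITISE_TAGS  (1UL << 2)

static unsigned long pending_ops;

static void vm_reset(int mte_enabled)
{
	/* Model of stage2_unmap_vm() skipping a locked memslot. */
	pending_ops |= OP_FLUSH_DCACHE | OP_INVAL_ICACHE;
	if (mte_enabled)
		pending_ops |= OP_SANITISE_TAGS;
}

static void vcpu_run(void)
{
	/* Model of kvm_mmu_perform_pending_ops() at VCPU entry. */
	if (pending_ops & OP_SANITISE_TAGS)
		printf("scrub MTE tags on locked memslots\n");
	if (pending_ops & OP_FLUSH_DCACHE)
		printf("clean+invalidate D-cache for locked memslots\n");
	if (pending_ops & OP_INVAL_ICACHE)
		printf("invalidate I-cache to PoU\n");
	pending_ops = 0;
}

int main(void)
{
	vm_reset(1);
	vcpu_run();	/* performs all three operations once */
	vcpu_run();	/* nothing pending, does no work */
	return 0;
}

The point of the pattern is visible in main(): all deferred maintenance runs
exactly once at the next VCPU entry, and subsequent entries pay nothing.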
Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  7 ++-
 arch/arm64/include/asm/kvm_mmu.h  |  2 +-
 arch/arm64/kvm/arm.c              | 29 ++--------
 arch/arm64/kvm/mmu.c              | 92 ++++++++++++++++++++++++++-----
 4 files changed, 87 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 68905bd47f85..a57f33368a3e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -113,9 +113,10 @@ struct kvm_arch_memory_slot {
 };
 
 /* kvm->arch.mmu_pending_ops flags */
-#define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE	0
-#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE	1
-#define KVM_MAX_MMU_PENDING_OPS		2
+#define KVM_LOCKED_MEMSLOT_FLUSH_DCACHE		0
+#define KVM_LOCKED_MEMSLOT_INVAL_ICACHE		1
+#define KVM_LOCKED_MEMSLOT_SANITISE_TAGS	2
+#define KVM_MAX_MMU_PENDING_OPS			3
 
 struct kvm_arch {
 	struct kvm_s2_mmu mmu;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 525c223e769f..9fcdd2580f6e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -222,7 +222,7 @@ int kvm_mmu_unlock_memslot(struct kvm *kvm, u64 slot, u64 flags);
 #define kvm_mmu_has_pending_ops(kvm)	\
 	(!bitmap_empty(&(kvm)->arch.mmu_pending_ops, KVM_MAX_MMU_PENDING_OPS))
 
-void kvm_mmu_perform_pending_ops(struct kvm *kvm);
+int kvm_mmu_perform_pending_ops(struct kvm *kvm);
 
 static inline unsigned int kvm_get_vmid_bits(void)
 {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 144c982912d8..c47e96ae4f7c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -108,25 +108,6 @@ static int kvm_lock_user_memory_region_ioctl(struct kvm *kvm,
 	}
 }
 
-static bool kvm_arm_has_locked_memslots(struct kvm *kvm)
-{
-	struct kvm_memslots *slots = kvm_memslots(kvm);
-	struct kvm_memory_slot *memslot;
-	bool has_locked_memslots = false;
-	int idx;
-
-	idx = srcu_read_lock(&kvm->srcu);
-	kvm_for_each_memslot(memslot, slots) {
-		if (memslot->arch.flags & KVM_MEMSLOT_LOCK_MASK) {
-			has_locked_memslots = true;
-			break;
-		}
-	}
-	srcu_read_unlock(&kvm->srcu, idx);
-
-	return has_locked_memslots;
-}
-
 int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 			    struct kvm_enable_cap *cap)
 {
@@ -142,9 +123,6 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	case KVM_CAP_ARM_MTE:
 		if (!system_supports_mte() || kvm->created_vcpus)
 			return -EINVAL;
-		if (kvm_arm_lock_memslot_supported() &&
-		    kvm_arm_has_locked_memslots(kvm))
-			return -EPERM;
 		r = 0;
 		kvm->arch.mte_enabled = true;
 		break;
@@ -829,8 +807,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	if (unlikely(!kvm_vcpu_initialized(vcpu)))
 		return -ENOEXEC;
 
-	if (unlikely(kvm_mmu_has_pending_ops(vcpu->kvm)))
-		kvm_mmu_perform_pending_ops(vcpu->kvm);
+	if (unlikely(kvm_mmu_has_pending_ops(vcpu->kvm))) {
+		ret = kvm_mmu_perform_pending_ops(vcpu->kvm);
+		if (ret)
+			return ret;
+	}
 
 	ret = kvm_vcpu_first_run_init(vcpu);
 	if (ret)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f1f8a87550d1..cd44b6f2c53e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -566,6 +566,10 @@ void stage2_unmap_vm(struct kvm *kvm)
 				&kvm->arch.mmu_pending_ops);
 			set_bit(KVM_LOCKED_MEMSLOT_INVAL_ICACHE,
 				&kvm->arch.mmu_pending_ops);
+			if (kvm_has_mte(kvm)) {
+				set_bit(KVM_LOCKED_MEMSLOT_SANITISE_TAGS,
+					&kvm->arch.mmu_pending_ops);
+			}
 			continue;
 		}
 		stage2_unmap_memslot(kvm, memslot);
@@ -909,6 +913,58 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	return 0;
 }
 
+static int sanitise_mte_tags_memslot(struct kvm *kvm,
+				     struct kvm_memory_slot *memslot)
+{
+	unsigned long hva, slot_size, slot_end;
+	struct kvm_memory_slot_page *entry;
+	struct page *page;
+	int ret = 0;
+
+	if (!kvm_has_mte(kvm))
+		return 0;
+
+	hva = memslot->userspace_addr;
+	slot_size = memslot->npages << PAGE_SHIFT;
+	slot_end = hva + slot_size;
+
+	/* First check that the VMAs spanning the memslot are not shared... */
+	do {
+		struct vm_area_struct *vma;
+
+		vma = find_vma_intersection(current->mm, hva, slot_end);
+		/* The VMAs spanning the memslot must be contiguous. */
+		if (!vma) {
+			ret = -EFAULT;
+			goto out;
+		}
+		/*
+		 * VM_SHARED mappings are not allowed with MTE to avoid races
+		 * when updating the PG_mte_tagged page flag, see
+		 * sanitise_mte_tags for more details.
+		 */
+		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+			ret = -EFAULT;
+			goto out;
+		}
+		hva = min(slot_end, vma->vm_end);
+	} while (hva < slot_end);
+
+	/* ... then clear the tags. */
+	list_for_each_entry(entry, &memslot->arch.pages.list, list) {
+		page = entry->page;
+		if (!test_bit(PG_mte_tagged, &page->flags)) {
+			mte_clear_page_tags(page_address(page));
+			set_bit(PG_mte_tagged, &page->flags);
+		}
+	}
+
+out:
+	mmap_read_unlock(current->mm);
+
+	return ret;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1273,14 +1329,28 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
  * - Stage 2 tables cannot be freed from under us as long as at least one VCPU
  *   is live, which means that the VM will be live.
  */
-void kvm_mmu_perform_pending_ops(struct kvm *kvm)
+int kvm_mmu_perform_pending_ops(struct kvm *kvm)
 {
 	struct kvm_memory_slot *memslot;
+	int ret = 0;
 
 	mutex_lock(&kvm->slots_lock);
 	if (!kvm_mmu_has_pending_ops(kvm))
 		goto out_unlock;
 
+	if (test_bit(KVM_LOCKED_MEMSLOT_SANITISE_TAGS, &kvm->arch.mmu_pending_ops) &&
+	    kvm_has_mte(kvm)) {
+		kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
+			if (!memslot_is_locked(memslot))
+				continue;
+			mmap_read_lock(current->mm);
+			ret = sanitise_mte_tags_memslot(kvm, memslot);
+			mmap_read_unlock(current->mm);
+			if (ret)
+				goto out_unlock;
+		}
+	}
+
 	if (test_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE, &kvm->arch.mmu_pending_ops)) {
 		kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
 			if (!memslot_is_locked(memslot))
@@ -1296,7 +1366,7 @@ void kvm_mmu_perform_pending_ops(struct kvm *kvm)
 
 out_unlock:
 	mutex_unlock(&kvm->slots_lock);
-	return;
+	return ret;
 }
 
 static int try_rlimit_memlock(unsigned long npages)
@@ -1390,19 +1460,6 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		ret = -ENOMEM;
 		goto out_err;
 	}
-	if (kvm_has_mte(kvm)) {
-		if (vma->vm_flags & VM_SHARED) {
-			ret = -EFAULT;
-		} else {
-			ret = sanitise_mte_tags(kvm,
-					page_to_pfn(page_entry->page),
-					PAGE_SIZE);
-		}
-		if (ret) {
-			mmap_read_unlock(current->mm);
-			goto out_err;
-		}
-	}
 	mmap_read_unlock(current->mm);
 
 	ret = kvm_mmu_topup_memory_cache(&cache, kvm_mmu_cache_min_pages(kvm));
@@ -1455,6 +1512,11 @@ static int lock_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	memslot->arch.flags |= KVM_MEMSLOT_LOCK_WRITE;
 
 	set_bit(KVM_LOCKED_MEMSLOT_FLUSH_DCACHE, &kvm->arch.mmu_pending_ops);
+	/*
+	 * MTE might be enabled after we lock the memslot, set it here
+	 * unconditionally.
+	 */
+	set_bit(KVM_LOCKED_MEMSLOT_SANITISE_TAGS, &kvm->arch.mmu_pending_ops);
 
 	kvm_mmu_free_memory_cache(&cache);

From patchwork Wed Aug 25 16:17:43 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458245
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 07/39] KVM: arm64: Unlock memslots after stage 2 tables are freed
Date: Wed, 25 Aug 2021 17:17:43 +0100
Message-Id: <20210825161815.266051-8-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
Unpin the backing pages mapped at stage 2 after the stage 2 translation
tables are destroyed.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cd44b6f2c53e..27b7befd4fa9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1907,6 +1907,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_free_stage2_pgd(&kvm->arch.mmu);
+	unlock_all_memslots(kvm);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,

From patchwork Wed Aug 25 16:17:44 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458243
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 08/39] KVM: arm64: Deny changes to locked memslots
Date: Wed, 25 Aug 2021 17:17:44 +0100
Message-Id: <20210825161815.266051-9-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Forbid userspace from making changes to a locked memslot. If userspace wants
to modify a locked memslot, it must unlock the memslot first. One special
case is allowed: memslots locked for read, but not for write, can have dirty
page logging turned on.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/mmu.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 27b7befd4fa9..3ab8eba808ae 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1842,8 +1842,23 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 {
 	hva_t hva = mem->userspace_addr;
 	hva_t reg_end = hva + mem->memory_size;
+	struct kvm_memory_slot *old;
 	int ret = 0;
 
+	/*
+	 * Forbid all changes to locked memslots with the exception of turning
+	 * on dirty page logging for memslots locked only for reads.
+	 */
+	old = id_to_memslot(kvm_memslots(kvm), memslot->id);
+	if (old && memslot_is_locked(old)) {
+		if (change == KVM_MR_FLAGS_ONLY &&
+		    memslot_is_logging(memslot) &&
+		    !(old->arch.flags & KVM_MEMSLOT_LOCK_WRITE))
+			memcpy(&memslot->arch, &old->arch, sizeof(old->arch));
+		else
+			return -EBUSY;
+	}
+
 	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
 	    change != KVM_MR_FLAGS_ONLY)
 		return 0;

From patchwork Wed Aug 25 16:17:45 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458273
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 09/39] KVM: Add kvm_warn{,_ratelimited} macros
Date: Wed, 25 Aug 2021 17:17:45 +0100
Message-Id: <20210825161815.266051-10-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Add the kvm_warn() and kvm_warn_ratelimited() macros to print a kernel
warning.

Signed-off-by: Alexandru Elisei
---
 include/linux/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae7735b490b4..ada5019ad93d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -605,6 +605,10 @@ struct kvm {
 
 #define kvm_err(fmt, ...) \
 	pr_err("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
+#define kvm_warn(fmt, ...) \
+	pr_warn("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
+#define kvm_warn_ratelimited(fmt, ...) \
+	pr_warn_ratelimited("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
 #define kvm_info(fmt, ...) \
 	pr_info("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
 #define kvm_debug(fmt, ...) \

From patchwork Wed Aug 25 16:17:46 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458247
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 10/39] KVM: arm64: Print a warning for unexpected faults on locked memslots
Date: Wed, 25 Aug 2021 17:17:46 +0100
Message-Id: <20210825161815.266051-11-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

When userspace unmaps a VMA backing a memslot, the corresponding stage 2
address range gets unmapped via the MMU notifiers. This makes it possible to
get stage 2 faults on a locked memslot, which might not be what userspace
wants, because the purpose of locking a memslot is to avoid stage 2 faults in
the first place.

Addresses can also be unmapped from stage 2 for other reasons, such as bugs
in the implementation of the lock memslot API, however unlikely that might
seem. Let's try to make debugging easier by printing a warning when this
happens.
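The scenario in the first paragraph reduces to the following hypothetical
userspace fragment. All KVM boilerplate (creating the VM and VCPUs,
registering the memslot, locking it via KVM_ENABLE_CAP as introduced earlier
in the series) is elided; the names and the 2 MiB size are illustrative.

#include <stddef.h>
#include <sys/mman.h>

#define GUEST_MEM_SIZE	(2UL * 1024 * 1024)

static void *guest_mem;

static void map_guest_memory(void)
{
	guest_mem = mmap(NULL, GUEST_MEM_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* ... KVM_SET_USER_MEMORY_REGION, then lock the memslot ... */
}

static void unmap_guest_memory(void)
{
	/*
	 * The MMU notifiers unmap the stage 2 range backed by this VMA
	 * even though the memslot is locked; the next guest access to
	 * the range faults at stage 2 and triggers the warning below.
	 */
	munmap(guest_mem, GUEST_MEM_SIZE);
}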
Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/mmu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3ab8eba808ae..d66d89c18045 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1298,6 +1298,27 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	/* Userspace should not be able to register out-of-bounds IPAs */
 	VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm));
 
+	if (memslot_is_locked(memslot)) {
+		const char *fault_type_str;
+
+		if (kvm_vcpu_trap_is_exec_fault(vcpu))
+			goto handle_fault;
+
+		if (fault_status == FSC_ACCESS)
+			fault_type_str = "access";
+		else if (write_fault && (memslot->arch.flags & KVM_MEMSLOT_LOCK_WRITE))
+			fault_type_str = "write";
+		else if (!write_fault)
+			fault_type_str = "read";
+		else
+			goto handle_fault;
+
+		kvm_warn_ratelimited("Unexpected L2 %s fault on locked memslot %d: IPA=%#llx, ESR_EL2=%#08x\n",
+				     fault_type_str, memslot->id, fault_ipa,
+				     kvm_vcpu_get_esr(vcpu));
+	}
+
+handle_fault:
 	if (fault_status == FSC_ACCESS) {
 		handle_access_fault(vcpu, fault_ipa);
 		ret = 1;

From patchwork Wed Aug 25 16:17:47 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458271
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 11/39] KVM: arm64: Allow userspace to lock and unlock memslots
Date: Wed, 25 Aug 2021 17:17:47 +0100
Message-Id: <20210825161815.266051-12-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Now that the ioctls have been implemented, allow userspace to lock and unlock
memslots.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/arm.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c47e96ae4f7c..4bd4b8b082a4 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -80,11 +80,6 @@ int kvm_arch_check_processor_compat(void *opaque)
 	return 0;
 }
 
-static int kvm_arm_lock_memslot_supported(void)
-{
-	return 0;
-}
-
 static int kvm_lock_user_memory_region_ioctl(struct kvm *kvm,
 					     struct kvm_enable_cap *cap)
 {
@@ -127,8 +122,6 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		kvm->arch.mte_enabled = true;
 		break;
 	case KVM_CAP_ARM_LOCK_USER_MEMORY_REGION:
-		if (!kvm_arm_lock_memslot_supported())
-			return -EINVAL;
 		r = kvm_lock_user_memory_region_ioctl(kvm, cap);
 		break;
 	default:
@@ -306,7 +299,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = system_has_full_ptr_auth();
 		break;
 	case KVM_CAP_ARM_LOCK_USER_MEMORY_REGION:
-		r = kvm_arm_lock_memslot_supported();
+		r = 1;
 		break;
 	default:
 		r = 0;

From patchwork Wed Aug 25 16:17:48 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458275
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 12/39] KVM: arm64: Add the KVM_ARM_VCPU_SUPPORTED_CPUS VCPU ioctl
Date: Wed, 25 Aug 2021 17:17:48 +0100
Message-Id: <20210825161815.266051-13-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

The ioctl is used to specify a list of physical CPUs on which the VCPU is
allowed to run. The ioctl introduces no constraints on the VCPU scheduling,
and userspace is expected to manage the VCPU affinity. Attempting to run the
VCPU on a CPU not present in the list will result in KVM_RUN returning
-ENOEXEC.

The expectation is that this ioctl will be used by KVM to prevent errors,
like accesses to undefined registers, when emulating VCPU features for which
hardware support is present only on a subset of the CPUs present in the
system.
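A minimal usage sketch, assuming the uapi additions from this patch are
available in the installed headers; error handling and the parsing of the
cpulist string into a real affinity mask are trimmed, and restrict_vcpu() is
a hypothetical helper, not part of the series.

#define _GNU_SOURCE
#include <linux/kvm.h>
#include <sched.h>
#include <sys/ioctl.h>

static int restrict_vcpu(int vcpu_fd, const char *cpulist)
{
	cpu_set_t set;

	/* Tell KVM which physical CPUs this VCPU supports, e.g. "0-3,8". */
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_SUPPORTED_CPUS, cpulist) < 0)
		return -1;

	/*
	 * KVM only checks the list at KVM_RUN; scheduling remains
	 * userspace's job, so mirror the list with an affinity mask
	 * (hardcoded to CPU 0 here for brevity).
	 */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	return sched_setaffinity(0, sizeof(set), &set);
}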
Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/api.rst    | 22 ++++++++++++++++++++--
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/arm.c              | 31 +++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h          |  4 ++++
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 5aa251df7077..994faa24690a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -396,8 +396,10 @@ Errors:
 
   =======    ==============================================================
   EINTR      an unmasked signal is pending
-  ENOEXEC    the vcpu hasn't been initialized or the guest tried to execute
-             instructions from device memory (arm64)
+  ENOEXEC    the vcpu hasn't been initialized, the guest tried to execute
+             instructions from device memory (arm64) or the vcpu has been
+             scheduled on a cpu not in the list specified by
+             KVM_ARM_VCPU_SUPPORTED_CPUS (arm64).
   ENOSYS     data abort outside memslots with no syndrome info and
              KVM_CAP_ARM_NISV_TO_USER not enabled (arm64)
   EPERM      SVE feature set but not finalized (arm64)
@@ -5293,6 +5295,22 @@ the trailing ``'\0'``, is indicated by ``name_size`` in the header.
 The Stats Data block contains an array of 64-bit values in the same order
 as the descriptors in Descriptors block.
 
+4.134 KVM_ARM_VCPU_SUPPORTED_CPUS
+---------------------------------
+
+:Capability: KVM_CAP_ARM_SUPPORTED_CPUS
+:Architectures: arm64
+:Type: vcpu ioctl
+:Parameters: const char * representing a range of supported CPUs
+:Returns: 0 on success, < 0 on error
+
+Specifies a list of physical CPUs on which the VCPU can run. KVM will not make
+any attempts to prevent the VCPU from being scheduled on a CPU which is not
+present in the list; when that happens, KVM_RUN will return -ENOEXEC.
+
+The format for the range of supported CPUs is specified in the comment for
+the function lib/bitmap.c::bitmap_parselist().
+
 5. The kvm_run structure
 ========================
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a57f33368a3e..1f3b46a6df81 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -396,6 +396,9 @@ struct kvm_vcpu_arch {
 	 * see kvm_vcpu_load_sysregs_vhe and kvm_vcpu_put_sysregs_vhe.
 	 */
 	bool sysregs_loaded_on_cpu;
 
+	cpumask_t supported_cpus;
+	bool cpu_not_supported;
+
 	/* Guest PV state */
 	struct {
 		u64 last_steal;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4bd4b8b082a4..e8a7c0c3a086 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -301,6 +301,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_LOCK_USER_MEMORY_REGION:
 		r = 1;
 		break;
+	case KVM_CAP_ARM_VCPU_SUPPORTED_CPUS:
+		r = 1;
+		break;
 	default:
 		r = 0;
 	}
@@ -456,6 +459,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (vcpu_has_ptrauth(vcpu))
 		vcpu_ptrauth_disable(vcpu);
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
+
+	if (!cpumask_empty(&vcpu->arch.supported_cpus) &&
+	    !cpumask_test_cpu(smp_processor_id(), &vcpu->arch.supported_cpus))
+		vcpu->arch.cpu_not_supported = true;
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@@ -844,6 +851,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 */
 	preempt_disable();
 
+	if (unlikely(vcpu->arch.cpu_not_supported)) {
+		vcpu->arch.cpu_not_supported = false;
+		ret = -ENOEXEC;
+		preempt_enable();
+		continue;
+	}
+
 	kvm_pmu_flush_hwstate(vcpu);
 
 	local_irq_disable();
@@ -1361,6 +1375,23 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
 		return kvm_arm_vcpu_set_events(vcpu, &events);
 	}
+	case KVM_ARM_VCPU_SUPPORTED_CPUS: {
+		char *cpulist;
+
+		r = -ENOEXEC;
+		if (unlikely(vcpu->arch.has_run_once))
+			break;
+
+		cpulist = strndup_user((const char __user *)argp, PAGE_SIZE);
+		if (IS_ERR(cpulist)) {
+			r = PTR_ERR(cpulist);
+			break;
+		}
+
+		r = cpulist_parse(cpulist, &vcpu->arch.supported_cpus);
+		kfree(cpulist);
+		break;
+	}
 	case KVM_ARM_VCPU_FINALIZE: {
 		int what;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index bcf62c7bdd2d..e5acc925c528 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1113,6 +1113,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
 #define KVM_CAP_ARM_MTE 205
 #define KVM_CAP_ARM_LOCK_USER_MEMORY_REGION 206
+#define KVM_CAP_ARM_VCPU_SUPPORTED_CPUS 207
 
 #ifdef KVM_CAP_IRQ_ROUTING
@@ -1594,6 +1595,9 @@ struct kvm_enc_region {
 #define KVM_S390_NORMAL_RESET	_IO(KVMIO, 0xc3)
 #define KVM_S390_CLEAR_RESET	_IO(KVMIO, 0xc4)
 
+/* Available with KVM_CAP_ARM_VCPU_SUPPORTED_CPUS */
+#define KVM_ARM_VCPU_SUPPORTED_CPUS	_IOW(KVMIO, 0xc5, const char *)
+
 struct kvm_s390_pv_sec_parm {
 	__u64 origin;
 	__u64 length;

From patchwork Wed Aug 25 16:17:49 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458277
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 13/39] KVM: arm64: Add CONFIG_KVM_ARM_SPE Kconfig option
Date: Wed, 25 Aug 2021 17:17:49 +0100
Message-Id: <20210825161815.266051-14-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Add a new configuration option that will be used for KVM SPE emulation.
CONFIG_KVM_ARM_SPE depends on the SPE driver being builtin because:

1. The cpumask of physical CPUs that support SPE will be used by KVM to
   emulate SPE on heterogeneous systems.

2. KVM will rely on the SPE driver enabling the SPE interrupt at the GIC
   level.
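For reference, a .config fragment that satisfies the new dependency could
look like the following (illustrative only; CONFIG_ARM_SPE_PMU is the
existing SPE driver option and must be built in, not a module):

CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_ARM_SPE_PMU=y
CONFIG_KVM_ARM_SPE=y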
Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/Kconfig | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a4eba0908bfa..c6ad5a05efb3 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -46,6 +46,14 @@ if KVM
 
 source "virt/kvm/Kconfig"
 
+config KVM_ARM_SPE
+	bool "Virtual Statistical Profiling Extension (SPE) support"
+	depends on ARM_SPE_PMU=y
+	default y
+	help
+	  Adds support for Statistical Profiling Extension (SPE) in virtual
+	  machines.
+
 endif # KVM
 
 endif # VIRTUALIZATION

From patchwork Wed Aug 25 16:17:50 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458279
From: Alexandru Elisei
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 14/39] KVM: arm64: Add SPE capability and VCPU feature
Date: Wed, 25 Aug 2021 17:17:50 +0100
Message-Id: <20210825161815.266051-15-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Add the KVM_CAP_ARM_SPE capability which allows userspace to discover if SPE
emulation is available. Add the KVM_ARM_VCPU_SPE feature which enables the
emulation for a VCPU. Both are disabled for now.

Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/api.rst    | 9 +++++++++
 arch/arm64/include/uapi/asm/kvm.h | 1 +
 arch/arm64/kvm/arm.c              | 3 +++
 include/uapi/linux/kvm.h          | 1 +
 4 files changed, 14 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 994faa24690a..68fada258b80 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7315,3 +7315,12 @@ The argument to KVM_ENABLE_CAP is also a bitmask, and must be a subset
 of the result of KVM_CHECK_EXTENSION.  KVM will forward to userspace
 the hypercalls whose corresponding bit is in the argument, and return
 ENOSYS for the others.
+
+8.35 KVM_CAP_ARM_SPE
+--------------------
+
+:Architectures: arm64
+
+This capability indicates that Statistical Profiling Extension (SPE) emulation
+is available in KVM. SPE emulation is enabled for each VCPU which has the
+feature bit KVM_ARM_VCPU_SPE set.
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index b3edde68bc3e..9f0a8ea50ea9 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
 #define KVM_ARM_VCPU_PTRAUTH_ADDRESS	5 /* VCPU uses address authentication */
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC	6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_SPE		7 /* enable SPE for this CPU */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e8a7c0c3a086..22544eb367f3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -304,6 +304,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_VCPU_SUPPORTED_CPUS:
 		r = 1;
 		break;
+	case KVM_CAP_ARM_SPE:
+		r = 0;
+		break;
 	default:
 		r = 0;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index e5acc925c528..930ef91f7916 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1114,6 +1114,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_MTE 205
 #define KVM_CAP_ARM_LOCK_USER_MEMORY_REGION 206
 #define KVM_CAP_ARM_VCPU_SUPPORTED_CPUS 207
+#define KVM_CAP_ARM_SPE 208
 
 #ifdef KVM_CAP_IRQ_ROUTING

From patchwork Wed Aug 25 16:17:51 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458281
From patchwork Wed Aug 25 16:17:51 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 15/39] drivers/perf: Expose the cpumask of CPUs that support SPE
Date: Wed, 25 Aug 2021 17:17:51 +0100
Message-Id: <20210825161815.266051-16-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

KVM SPE emulation will require the list of CPUs that have hardware support
for SPE. Add a function that returns the cpumask of supported CPUs
discovered by the driver during probing.
CC: Will Deacon <will@kernel.org>
Signed-off-by: Alexandru Elisei
---
 drivers/perf/arm_spe_pmu.c   | 30 ++++++++++++++++++------------
 include/linux/perf/arm_pmu.h |  7 +++++++
 2 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
index d44bcc29d99c..40749665f102 100644
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -50,7 +50,6 @@ struct arm_spe_pmu_buf {
 struct arm_spe_pmu {
 	struct pmu				pmu;
 	struct platform_device			*pdev;
-	cpumask_t				supported_cpus;
 	struct hlist_node			hotplug_node;
 
 	int					irq; /* PPI */
@@ -72,6 +71,8 @@ struct arm_spe_pmu {
 	struct perf_output_handle __percpu	*handle;
 };
 
+static cpumask_t supported_cpus;
+
 #define to_spe_pmu(p) (container_of(p, struct arm_spe_pmu, pmu))
 
 /* Convert a free-running index from perf into an SPE buffer offset */
@@ -234,9 +235,7 @@ static const struct attribute_group arm_spe_pmu_format_group = {
 static ssize_t cpumask_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
-	struct arm_spe_pmu *spe_pmu = dev_get_drvdata(dev);
-
-	return cpumap_print_to_pagebuf(true, buf, &spe_pmu->supported_cpus);
+	return cpumap_print_to_pagebuf(true, buf, &supported_cpus);
 }
 static DEVICE_ATTR_RO(cpumask);
 
@@ -677,7 +676,7 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
 		return -ENOENT;
 
 	if (event->cpu >= 0 &&
-	    !cpumask_test_cpu(event->cpu, &spe_pmu->supported_cpus))
+	    !cpumask_test_cpu(event->cpu, &supported_cpus))
 		return -ENOENT;
 
 	if (arm_spe_event_to_pmsevfr(event) & arm_spe_pmsevfr_res0(spe_pmu->pmsver))
@@ -797,11 +796,10 @@ static void arm_spe_pmu_stop(struct perf_event *event, int flags)
 static int arm_spe_pmu_add(struct perf_event *event, int flags)
 {
 	int ret = 0;
-	struct arm_spe_pmu *spe_pmu = to_spe_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
 	int cpu = event->cpu == -1 ?
		  smp_processor_id() : event->cpu;
 
-	if (!cpumask_test_cpu(cpu, &spe_pmu->supported_cpus))
+	if (!cpumask_test_cpu(cpu, &supported_cpus))
 		return -ENOENT;
 
 	hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
@@ -883,6 +881,13 @@ static void arm_spe_pmu_free_aux(void *aux)
 	kfree(buf);
 }
 
+#if IS_BUILTIN(CONFIG_ARM_SPE_PMU)
+const cpumask_t *arm_spe_pmu_supported_cpus(void)
+{
+	return &supported_cpus;
+}
+#endif
+
 /* Initialisation and teardown functions */
 static int arm_spe_pmu_perf_init(struct arm_spe_pmu *spe_pmu)
 {
@@ -1039,7 +1044,7 @@ static void __arm_spe_pmu_dev_probe(void *info)
 
 	dev_info(dev,
 		 "probed for CPUs %*pbl [max_record_sz %u, align %u, features 0x%llx]\n",
-		 cpumask_pr_args(&spe_pmu->supported_cpus),
+		 cpumask_pr_args(&supported_cpus),
 		 spe_pmu->max_record_sz, spe_pmu->align, spe_pmu->features);
 
 	spe_pmu->features |= SPE_PMU_FEAT_DEV_PROBED;
@@ -1083,7 +1088,7 @@ static int arm_spe_pmu_cpu_startup(unsigned int cpu, struct hlist_node *node)
 	struct arm_spe_pmu *spe_pmu;
 
 	spe_pmu = hlist_entry_safe(node, struct arm_spe_pmu, hotplug_node);
-	if (!cpumask_test_cpu(cpu, &spe_pmu->supported_cpus))
+	if (!cpumask_test_cpu(cpu, &supported_cpus))
 		return 0;
 
 	__arm_spe_pmu_setup_one(spe_pmu);
@@ -1095,7 +1100,7 @@ static int arm_spe_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
 	struct arm_spe_pmu *spe_pmu;
 
 	spe_pmu = hlist_entry_safe(node, struct arm_spe_pmu, hotplug_node);
-	if (!cpumask_test_cpu(cpu, &spe_pmu->supported_cpus))
+	if (!cpumask_test_cpu(cpu, &supported_cpus))
 		return 0;
 
 	__arm_spe_pmu_stop_one(spe_pmu);
@@ -1105,7 +1110,7 @@ static int arm_spe_pmu_cpu_teardown(unsigned int cpu, struct hlist_node *node)
 static int arm_spe_pmu_dev_init(struct arm_spe_pmu *spe_pmu)
 {
 	int ret;
-	cpumask_t *mask = &spe_pmu->supported_cpus;
+	cpumask_t *mask = &supported_cpus;
 
 	/* Make sure we probe the hardware on a relevant CPU */
 	ret = smp_call_function_any(mask, __arm_spe_pmu_dev_probe, spe_pmu, 1);
@@ -1151,7 +1156,7 @@ static int arm_spe_pmu_irq_probe(struct arm_spe_pmu *spe_pmu)
 		return -EINVAL;
 	}
 
-	if (irq_get_percpu_devid_partition(irq, &spe_pmu->supported_cpus)) {
+	if (irq_get_percpu_devid_partition(irq, &supported_cpus)) {
 		dev_err(&pdev->dev, "failed to get PPI partition (%d)\n", irq);
 		return -EINVAL;
 	}
@@ -1216,6 +1221,7 @@ static int arm_spe_pmu_device_probe(struct platform_device *pdev)
 	arm_spe_pmu_dev_teardown(spe_pmu);
 out_free_handle:
 	free_percpu(spe_pmu->handle);
+	cpumask_clear(&supported_cpus);
 	return ret;
 }
 
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 505480217cf1..3121d95c575b 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -8,6 +8,7 @@
 #ifndef __ARM_PMU_H__
 #define __ARM_PMU_H__
 
+#include <linux/cpumask.h>
 #include <linux/interrupt.h>
 #include <linux/perf_event.h>
 #include <linux/sysfs.h>
@@ -177,4 +178,10 @@ void armpmu_free_irq(int irq, int cpu);
 
 #define ARMV8_SPE_PDEV_NAME "arm,spe-v1"
 
+#if IS_BUILTIN(CONFIG_ARM_SPE_PMU)
+extern const cpumask_t *arm_spe_pmu_supported_cpus(void);
+#else
+static inline const cpumask_t *arm_spe_pmu_supported_cpus(void) { return NULL; }
+#endif
+
 #endif /* __ARM_PMU_H__ */
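The driver already publishes this mask to userspace through the PMU's
cpumask attribute (cpumask_show() above); a small sketch of reading it.
The sysfs path assumes the usual perf event_source naming for the SPE PMU
instance (arm_spe_0), which may differ on a given system:

	#include <stdio.h>

	/* Read the list of SPE-capable CPUs published by the driver. */
	int main(void)
	{
		/* Assumed path; the PMU instance name is system-dependent. */
		FILE *f = fopen("/sys/bus/event_source/devices/arm_spe_0/cpumask", "r");
		char buf[256];

		if (!f)
			return 1;
		if (fgets(buf, sizeof(buf), f))
			printf("SPE-capable CPUs: %s", buf);
		fclose(f);
		return 0;
	}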
From patchwork Wed Aug 25 16:17:52 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 16/39] KVM: arm64: Make SPE available when at least one CPU supports it
Date: Wed, 25 Aug 2021 17:17:52 +0100
Message-Id: <20210825161815.266051-17-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

KVM SPE emulation requires at least one physical CPU to support SPE.
Initialize the cpumask of PEs that support SPE the first time
KVM_CAP_ARM_SPE is queried or when the first virtual machine is created,
whichever comes first.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_spe.h | 26 ++++++++++++++++++++++++++
 arch/arm64/kvm/Makefile          |  1 +
 arch/arm64/kvm/arm.c             |  4 ++++
 arch/arm64/kvm/spe.c             | 32 ++++++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+)
 create mode 100644 arch/arm64/include/asm/kvm_spe.h
 create mode 100644 arch/arm64/kvm/spe.c

diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
new file mode 100644
index 000000000000..328115ce0b48
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 - ARM Ltd
+ */
+
+#ifndef __ARM64_KVM_SPE_H__
+#define __ARM64_KVM_SPE_H__
+
+#ifdef CONFIG_KVM_ARM_SPE
+DECLARE_STATIC_KEY_FALSE(kvm_spe_available);
+
+static __always_inline bool kvm_supports_spe(void)
+{
+	return static_branch_likely(&kvm_spe_available);
+}
+
+void kvm_spe_init_supported_cpus(void);
+void kvm_spe_vm_init(struct kvm *kvm);
+#else
+#define kvm_supports_spe()	(false)
+
+static inline void kvm_spe_init_supported_cpus(void) {}
+static inline void kvm_spe_vm_init(struct kvm *kvm) {}
+#endif /* CONFIG_KVM_ARM_SPE */
+
+#endif /* __ARM64_KVM_SPE_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..86092a0f8367 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -25,3 +25,4 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o
 
 kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o
+kvm-$(CONFIG_KVM_ARM_SPE) += spe.o
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 22544eb367f3..82cb7b5b3b45 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -35,6 +35,7 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_spe.h>
 #include <asm/kvm_emulate.h>
 #include <asm/sections.h>
@@ -180,6 +181,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	set_default_spectre(kvm);
 
+	kvm_spe_vm_init(kvm);
+
 	return ret;
 out_free_stage2_pgd:
 	kvm_free_stage2_pgd(&kvm->arch.mmu);
@@ -305,6 +308,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = 1;
 		break;
 	case KVM_CAP_ARM_SPE:
+		kvm_spe_init_supported_cpus();
 		r = 0;
 		break;
 	default:
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
new file mode 100644
index 000000000000..83f92245f881
--- /dev/null
+++ b/arch/arm64/kvm/spe.c
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 - ARM Ltd
+ */
+
+#include <linux/cpumask.h>
+#include <linux/kvm_host.h>
+#include <linux/perf/arm_pmu.h>
+
+#include <asm/kvm_spe.h>
+
+DEFINE_STATIC_KEY_FALSE(kvm_spe_available);
+
+static const cpumask_t *supported_cpus;
+
+void kvm_spe_init_supported_cpus(void)
+{
+	if (likely(supported_cpus))
+		return;
+
+	supported_cpus = arm_spe_pmu_supported_cpus();
+	BUG_ON(!supported_cpus);
+
+	if (!cpumask_empty(supported_cpus))
+		static_branch_enable(&kvm_spe_available);
+}
+
+void kvm_spe_vm_init(struct kvm *kvm)
+{
+	/* Set supported_cpus if it isn't already initialized. */
+	kvm_spe_init_supported_cpus();
+}
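The static key means the common kvm_supports_spe() check compiles down to
a runtime-patched branch rather than a load and compare. As a minimal,
hypothetical sketch of the intended call pattern on a hot path (not code
from this series; the function name is illustrative):

	/*
	 * Hypothetical caller: skip all SPE context-switch work unless at
	 * least one CPU advertised SPE, at near-zero cost when none did.
	 */
	static void kvm_arch_vcpu_ctxswitch_spe(struct kvm_vcpu *vcpu)
	{
		if (!kvm_supports_spe())	/* static branch, patched at runtime */
			return;

		/* ... save/restore SPE registers for @vcpu here ... */
	}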
From patchwork Wed Aug 25 16:17:53 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 17/39] KVM: arm64: Set the VCPU SPE feature bit when SPE is available
Date: Wed, 25 Aug 2021 17:17:53 +0100
Message-Id: <20210825161815.266051-18-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Check that KVM SPE emulation is supported before allowing userspace to set
the KVM_ARM_VCPU_SPE feature. According to ARM DDI 0487G.a, page D9-2946,
the Profiling Buffer is disabled if the owning Exception level is 32 bit,
so reject the SPE feature if the VCPU's execution state at EL1 is AArch32.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/reset.c            | 23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1f3b46a6df81..948adb152104 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -807,6 +807,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_vcpu_has_pmu(vcpu)				\
 	(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
 
+#define kvm_vcpu_has_spe(vcpu)				\
+	(test_bit(KVM_ARM_VCPU_SPE, (vcpu)->arch.features))
+
 int kvm_trng_call(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_KVM
 extern phys_addr_t hyp_mem_base;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cba7872d69a8..17b9f1b29c24 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -27,6 +27,7 @@
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
+#include <asm/kvm_spe.h>
 #include <asm/virt.h>
 
 /* Maximum phys_shift supported for any VM on this host */
@@ -189,6 +190,21 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static int kvm_vcpu_enable_spe(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_supports_spe())
+		return -EINVAL;
+
+	/*
+	 * The Profiling Buffer is disabled if the owning Exception level is
+	 * aarch32.
+	 */
+	if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT))
+		return -EINVAL;
+
+	return 0;
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
@@ -245,6 +261,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
+	if (kvm_vcpu_has_spe(vcpu)) {
+		if (kvm_vcpu_enable_spe(vcpu)) {
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
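In userspace terms, the new check means a KVM_ARM_VCPU_INIT that asks for
both features is rejected. A sketch, under the same fd assumptions as
before:

	#include <linux/kvm.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Hypothetical: a 32-bit EL1 guest cannot also request SPE. */
	static int try_32bit_spe_vcpu(int vm_fd, int vcpu_fd)
	{
		struct kvm_vcpu_init init;

		memset(&init, 0, sizeof(init));
		ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
		init.features[0] |= (1 << KVM_ARM_VCPU_EL1_32BIT) |
				    (1 << KVM_ARM_VCPU_SPE);

		/* kvm_vcpu_enable_spe() rejects this combination: expect EINVAL. */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
	}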
From patchwork Wed Aug 25 16:17:54 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 18/39] KVM: arm64: Expose SPE version to guests
Date: Wed, 25 Aug 2021 17:17:54 +0100
Message-Id: <20210825161815.266051-19-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

Set the ID_AA64DFR0_EL1.PMSVer field to a non-zero value if the VCPU SPE
feature is set. The SPE version is capped at FEAT_SPEv1p1 because KVM
doesn't yet implement freezing of PMU event counters on an SPE buffer
management event.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/sys_regs.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f6f126eb6ac1..ab7370b7a44b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1070,8 +1070,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		val = cpuid_feature_cap_perfmon_field(val,
 						      ID_AA64DFR0_PMUVER_SHIFT,
 						      kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0);
-		/* Hide SPE from guests */
-		val &= ~FEATURE(ID_AA64DFR0_PMSVER);
+		/* Limit guests to SPE for ARMv8.3 */
+		val = cpuid_feature_cap_perfmon_field(val,
+						      ID_AA64DFR0_PMSVER_SHIFT,
+						      kvm_vcpu_has_spe(vcpu) ? ID_AA64DFR0_PMSVER_8_3 : 0);
 		break;
 	case SYS_ID_DFR0_EL1:
 		/* Limit guests to PMUv3 for ARMv8.4 */
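For reference, PMSVer lives in ID_AA64DFR0_EL1 bits [35:32]. A guest could
check the advertised SPE version with something like the following sketch
(the field encoding follows the Arm ARM, where 0b0010 corresponds to
FEAT_SPEv1p1; reading the ID register from EL0 relies on the kernel's MRS
emulation):

	#include <stdint.h>

	/* Read ID_AA64DFR0_EL1 and extract PMSVer (bits [35:32]). */
	static inline unsigned int spe_version(void)
	{
		uint64_t dfr0;

		asm volatile("mrs %0, ID_AA64DFR0_EL1" : "=r"(dfr0));
		return (dfr0 >> 32) & 0xf;	/* 0 = no SPE, 1 = SPE, 2 = SPEv1p1 */
	}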
From patchwork Wed Aug 25 16:17:55 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 19/39] KVM: arm64: Do not emulate SPE on CPUs which don't have SPE
Date: Wed, 25 Aug 2021 17:17:55 +0100
Message-Id: <20210825161815.266051-20-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

The kernel allows heterogeneous systems where FEAT_SPE is not present on
all CPUs. This presents a challenge for KVM, as it will have to touch the
SPE registers when emulating SPE for a guest, and those accesses will
cause an undefined exception if SPE is not present on the CPU. Avoid this
situation by comparing the cpumask of the physical CPUs that support SPE
with the cpu list provided by userspace via the
KVM_ARM_VCPU_SUPPORTED_CPUS ioctl and refusing to run the VCPU if there is
a mismatch.
Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_spe.h |  2 ++
 arch/arm64/kvm/arm.c             |  3 +++
 arch/arm64/kvm/spe.c             | 12 ++++++++++++
 3 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index 328115ce0b48..ed67ddbf8132 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -16,11 +16,13 @@ static __always_inline bool kvm_supports_spe(void)
 
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
+int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu);
 #else
 #define kvm_supports_spe()	(false)
 
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
+static inline int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
 #endif /* CONFIG_KVM_ARM_SPE */
 
 #endif /* __ARM64_KVM_SPE_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 82cb7b5b3b45..8f7025f2e4a0 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -633,6 +633,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (!kvm_arm_vcpu_is_finalized(vcpu))
 		return -EPERM;
 
+	if (kvm_vcpu_has_spe(vcpu) && kvm_spe_check_supported_cpus(vcpu))
+		return -EPERM;
+
 	vcpu->arch.has_run_once = true;
 
 	kvm_arm_vcpu_init_debug(vcpu);
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 83f92245f881..8d2afc137151 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -30,3 +30,15 @@ void kvm_spe_vm_init(struct kvm *kvm)
 	/* Set supported_cpus if it isn't already initialized. */
 	kvm_spe_init_supported_cpus();
 }
+
+int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
+{
+	/* SPE is supported on all CPUs, we don't care about the VCPU mask */
+	if (cpumask_equal(supported_cpus, cpu_possible_mask))
+		return 0;
+
+	if (!cpumask_subset(&vcpu->arch.supported_cpus, supported_cpus))
+		return -ENOEXEC;
+
+	return 0;
+}
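On such a heterogeneous system the VMM therefore has to confine each vCPU
thread to SPE-capable CPUs before running it. A hypothetical sketch using
standard affinity calls; the CPU numbers are placeholders that would come
from the driver's cpumask attribute shown earlier:

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>

	/* Hypothetical: pin the calling vCPU thread to two SPE-capable CPUs. */
	static int pin_vcpu_thread_to_spe_cpus(void)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(0, &set);	/* assumed SPE-capable, per the sysfs cpumask */
		CPU_SET(1, &set);

		return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
	}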
From patchwork Wed Aug 25 16:17:56 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Cc: Sudeep Holla
Subject: [RFC PATCH v4 20/39] KVM: arm64: Add a new VCPU device control group for SPE
Date: Wed, 25 Aug 2021 17:17:56 +0100
Message-Id: <20210825161815.266051-21-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

From: Sudeep Holla

Add a new VCPU device control group to control various aspects of KVM's
SPE emulation. Functionality will be added in later patches.

[ Alexandru E: Rewrote patch ]

Signed-off-by: Sudeep Holla
Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/devices/vcpu.rst |  5 +++++
 arch/arm64/include/asm/kvm_spe.h        | 20 ++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h       |  1 +
 arch/arm64/kvm/guest.c                  | 10 ++++++++++
 arch/arm64/kvm/spe.c                    | 15 +++++++++++++++
 5 files changed, 51 insertions(+)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index 2acec3b9ef65..85399c005197 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -161,3 +161,8 @@ Specifies the base address of the stolen time structure for this VCPU. The
 base address must be 64 byte aligned and exist within a valid guest memory
 region. See Documentation/virt/kvm/arm/pvtime.rst for more information
 including the layout of the stolen time structure.
+
+4. GROUP: KVM_ARM_VCPU_SPE_CTRL
+===============================
+
+:Architectures: ARM64
diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index ed67ddbf8132..ce0f5b3f2027 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -17,12 +17,32 @@ static __always_inline bool kvm_supports_spe(void)
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
 int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu);
+
+int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
+int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
+int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 #else
 #define kvm_supports_spe()	(false)
 
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
 static inline int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
+
+static inline int kvm_spe_set_attr(struct kvm_vcpu *vcpu,
+				   struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_spe_get_attr(struct kvm_vcpu *vcpu,
+				   struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_spe_has_attr(struct kvm_vcpu *vcpu,
+				   struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
 #endif /* CONFIG_KVM_ARM_SPE */
 
 #endif /* __ARM64_KVM_SPE_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 9f0a8ea50ea9..7159a1e23da2 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -368,6 +368,7 @@ struct kvm_arm_copy_mte_tags {
 #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER	1
 #define KVM_ARM_VCPU_PVTIME_CTRL	2
 #define   KVM_ARM_VCPU_PVTIME_IPA	0
+#define KVM_ARM_VCPU_SPE_CTRL		3
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_VCPU2_SHIFT		28
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 1dfb83578277..316110b5dd95 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -24,6 +24,7 @@
 #include <asm/fpsimd.h>
 #include <asm/kvm.h>
 #include <asm/kvm_emulate.h>
+#include <asm/kvm_spe.h>
 #include <asm/sigcontext.h>
 
 #include "trace.h"
@@ -962,6 +963,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
 	case KVM_ARM_VCPU_PVTIME_CTRL:
 		ret = kvm_arm_pvtime_set_attr(vcpu, attr);
 		break;
+	case KVM_ARM_VCPU_SPE_CTRL:
+		ret = kvm_spe_set_attr(vcpu, attr);
+		break;
 	default:
 		ret = -ENXIO;
 		break;
@@ -985,6 +989,9 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
 	case KVM_ARM_VCPU_PVTIME_CTRL:
 		ret = kvm_arm_pvtime_get_attr(vcpu, attr);
 		break;
+	case KVM_ARM_VCPU_SPE_CTRL:
+		ret = kvm_spe_get_attr(vcpu, attr);
+		break;
 	default:
 		ret = -ENXIO;
 		break;
@@ -1008,6 +1015,9 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 	case KVM_ARM_VCPU_PVTIME_CTRL:
 		ret = kvm_arm_pvtime_has_attr(vcpu, attr);
 		break;
+	case KVM_ARM_VCPU_SPE_CTRL:
+		ret = kvm_spe_has_attr(vcpu, attr);
+		break;
 	default:
 		ret = -ENXIO;
 		break;
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 8d2afc137151..56a3fdb35623 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -42,3 +42,18 @@ int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
 
 	return 0;
 }
+
+int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+
+int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+
+int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
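From userspace, the new group is driven with the generic VCPU device-attr
ioctls (KVM_HAS_DEVICE_ATTR / KVM_SET_DEVICE_ATTR / KVM_GET_DEVICE_ATTR on
the VCPU fd). A minimal sketch of probing it; the attribute constants come
from the following patches, and vcpu_fd is assumed to be an initialized
VCPU:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Probe whether this VCPU exposes a given SPE control-group attribute. */
	static int vcpu_has_spe_ctrl(int vcpu_fd, __u64 which)
	{
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_SPE_CTRL,
			.attr  = which,
		};

		return ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr) == 0;
	}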
From patchwork Wed Aug 25 16:17:57 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Cc: Sudeep Holla
Subject: [RFC PATCH v4 21/39] KVM: arm64: Add SPE VCPU device attribute to set the interrupt number
Date: Wed, 25 Aug 2021 17:17:57 +0100
Message-Id: <20210825161815.266051-22-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
From: Sudeep Holla

Add KVM_ARM_VCPU_SPE_CTRL(KVM_ARM_VCPU_SPE_IRQ) to allow the user to set
the interrupt number for the buffer management interrupt.

[ Alexandru E: Split from "KVM: arm64: Add a new VCPU device control group
  for SPE" ]

Signed-off-by: Sudeep Holla
Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/devices/vcpu.rst | 19 ++++++
 arch/arm64/include/asm/kvm_host.h       |  2 +
 arch/arm64/include/asm/kvm_spe.h        | 10 +++
 arch/arm64/include/uapi/asm/kvm.h       |  1 +
 arch/arm64/kvm/spe.c                    | 86 +++++++++++++++++++++++++
 5 files changed, 118 insertions(+)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index 85399c005197..05821d40849f 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -166,3 +166,22 @@
 ===============================
 
 :Architectures: ARM64
+
+4.1 ATTRIBUTE: KVM_ARM_VCPU_SPE_IRQ
+-----------------------------------
+
+:Parameters: in kvm_device_attr.addr the address for the Profiling Buffer
+             management interrupt number as a pointer to an int
+
+Returns:
+
+	 ======= ======================================================
+	 -EBUSY  Interrupt number already set for this VCPU
+	 -EFAULT Error accessing the buffer management interrupt number
+	 -EINVAL Invalid interrupt number
+	 -ENXIO  SPE not supported or not properly configured
+	 ======= ======================================================
+
+Specifies the Profiling Buffer management interrupt number. The interrupt
+number must be a PPI and the interrupt number must be the same for each
+VCPU. SPE emulation requires an in-kernel vGIC implementation.
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 948adb152104..7b957e439b3d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -26,6 +26,7 @@
 #include <asm/fpsimd.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
+#include <asm/kvm_spe.h>
 #include <asm/thread_info.h>
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -353,6 +354,7 @@ struct kvm_vcpu_arch {
 	struct vgic_cpu vgic_cpu;
 	struct arch_timer_cpu timer_cpu;
 	struct kvm_pmu pmu;
+	struct kvm_vcpu_spe spe;
 
 	/*
 	 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index ce0f5b3f2027..2fe11868719d 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -6,6 +6,8 @@
 #ifndef __ARM64_KVM_SPE_H__
 #define __ARM64_KVM_SPE_H__
 
+#include <linux/types.h>
+
 #ifdef CONFIG_KVM_ARM_SPE
 DECLARE_STATIC_KEY_FALSE(kvm_spe_available);
 
@@ -14,6 +16,11 @@ static __always_inline bool kvm_supports_spe(void)
 	return static_branch_likely(&kvm_spe_available);
 }
 
+struct kvm_vcpu_spe {
+	bool initialized;	/* SPE initialized for the VCPU */
+	int irq_num;		/* Buffer management interrupt number */
+};
+
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
 int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu);
@@ -24,6 +31,9 @@ int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 #else
 #define kvm_supports_spe()	(false)
 
+struct kvm_vcpu_spe {
+};
+
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
 static inline int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 7159a1e23da2..c55d94a1a8f5 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -369,6 +369,7 @@ struct kvm_arm_copy_mte_tags {
 #define KVM_ARM_VCPU_PVTIME_CTRL	2
 #define   KVM_ARM_VCPU_PVTIME_IPA	0
 #define KVM_ARM_VCPU_SPE_CTRL		3
+#define   KVM_ARM_VCPU_SPE_IRQ		0
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_VCPU2_SHIFT		28
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 56a3fdb35623..2fdb42e27675 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -43,17 +43,103 @@ int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static bool kvm_vcpu_supports_spe(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_supports_spe())
+		return false;
+
+	if (!kvm_vcpu_has_spe(vcpu))
+		return false;
+
+	if (!irqchip_in_kernel(vcpu->kvm))
+		return false;
+
+	return true;
+}
+
+static bool kvm_spe_irq_is_valid(struct kvm *kvm, int irq)
+{
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	if (!irq_is_ppi(irq))
+		return false;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!vcpu->arch.spe.irq_num)
+			continue;
+
+		if (vcpu->arch.spe.irq_num != irq)
+			return false;
+	}
+
+	return true;
+}
+
 int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 {
+	if (!kvm_vcpu_supports_spe(vcpu))
+		return -ENXIO;
+
+	if (vcpu->arch.spe.initialized)
+		return -EBUSY;
+
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_SPE_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int irq;
+
+		if (vcpu->arch.spe.irq_num)
+			return -EBUSY;
+
+		if (get_user(irq, uaddr))
+			return -EFAULT;
+
+		if (!kvm_spe_irq_is_valid(vcpu->kvm, irq))
+			return -EINVAL;
+
+		kvm_debug("Set KVM_ARM_VCPU_SPE_IRQ: %d\n", irq);
+		vcpu->arch.spe.irq_num = irq;
+		return 0;
+	}
+	}
+
 	return -ENXIO;
 }
 
 int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 {
+	if (!kvm_vcpu_supports_spe(vcpu))
+		return -ENXIO;
+
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_SPE_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int irq;
+
+		if (!vcpu->arch.spe.irq_num)
+			return -ENXIO;
+
+		irq = vcpu->arch.spe.irq_num;
+		if (put_user(irq, uaddr))
+			return -EFAULT;
+
+		return 0;
+	}
+	}
+
 	return -ENXIO;
 }
 
 int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 {
+	if (!kvm_vcpu_supports_spe(vcpu))
+		return -ENXIO;
+
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_SPE_IRQ:
+		return 0;
+	}
+
 	return -ENXIO;
 }
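Putting the attribute to use, a hypothetical userspace sequence for
assigning the buffer management PPI; the interrupt number below is only an
example, and per the documentation above it must be a PPI and identical on
every VCPU:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Hypothetical: tell KVM which PPI to inject for buffer management events. */
	static int vcpu_set_spe_irq(int vcpu_fd)
	{
		int irq = 21;	/* example PPI; the real number is a VMM policy choice */
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_SPE_CTRL,
			.attr  = KVM_ARM_VCPU_SPE_IRQ,
			.addr  = (__u64)(unsigned long)&irq,
		};

		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}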
From patchwork Wed Aug 25 16:17:58 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Cc: Sudeep Holla
Subject: [RFC PATCH v4 22/39] KVM: arm64: Add SPE VCPU device attribute to initialize SPE
Date: Wed, 25 Aug 2021 17:17:58 +0100
Message-Id: <20210825161815.266051-23-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>

From: Sudeep Holla

Add the KVM_ARM_VCPU_SPE_CTRL(KVM_ARM_VCPU_SPE_INIT) VCPU ioctl to
initialize SPE. Initialization can only be done once for a VCPU. If the
feature bit is set, then SPE must be initialized before the VCPU can be
run.

[ Alexandru E: Split from "KVM: arm64: Add a new VCPU device control group
  for SPE" ]

Signed-off-by: Sudeep Holla
Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/devices/vcpu.rst | 16 ++++++++++++++
 arch/arm64/include/asm/kvm_spe.h        |  4 ++--
 arch/arm64/include/uapi/asm/kvm.h       |  1 +
 arch/arm64/kvm/arm.c                    |  7 ++++--
 arch/arm64/kvm/spe.c                    | 29 ++++++++++++++++++++++++-
 5 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index 05821d40849f..c275c320e500 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -185,3 +185,19 @@ Specifies the Profiling Buffer management interrupt number. The interrupt
 number must be a PPI and the interrupt number must be the same for each
 VCPU. SPE emulation requires an in-kernel vGIC implementation.
+
+4.2 ATTRIBUTE: KVM_ARM_VCPU_SPE_INIT
+------------------------------------
+
+:Parameters: no additional parameter in kvm_device_attr.addr
+
+Returns:
+
+	 ======= ============================================
+	 -EBUSY  SPE already initialized for this VCPU
+	 -ENXIO  SPE not supported or not properly configured
+	 ======= ============================================
+
+Request initialization of the Statistical Profiling Extension for this
+VCPU. Must be done after initializing the in-kernel irqchip and after
+setting the Profiling Buffer management interrupt number for the VCPU.
diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index 2fe11868719d..2217b821ab37 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -23,7 +23,7 @@ struct kvm_vcpu_spe {
 
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
-int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu);
+int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu);
 
 int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
@@ -36,7 +36,7 @@ struct kvm_vcpu_spe {
 
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
-static inline int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
+static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
 
 static inline int kvm_spe_set_attr(struct kvm_vcpu *vcpu,
 				   struct kvm_device_attr *attr)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index c55d94a1a8f5..d4c0b53a5fb2 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -370,6 +370,7 @@ struct kvm_arm_copy_mte_tags {
 #define   KVM_ARM_VCPU_PVTIME_IPA	0
 #define KVM_ARM_VCPU_SPE_CTRL		3
 #define   KVM_ARM_VCPU_SPE_IRQ		0
+#define   KVM_ARM_VCPU_SPE_INIT		1
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_VCPU2_SHIFT		28
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8f7025f2e4a0..6af7ef26d2c1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -633,8 +633,11 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (!kvm_arm_vcpu_is_finalized(vcpu))
 		return -EPERM;
 
-	if (kvm_vcpu_has_spe(vcpu) && kvm_spe_check_supported_cpus(vcpu))
-		return -EPERM;
+	if (kvm_vcpu_has_spe(vcpu)) {
+		ret = kvm_spe_vcpu_first_run_init(vcpu);
+		if (ret)
+			return ret;
+	}
 
 	vcpu->arch.has_run_once = true;
 
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 2fdb42e27675..801ceb66a3d0 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -31,7 +31,7 @@ void kvm_spe_vm_init(struct kvm *kvm)
 	kvm_spe_init_supported_cpus();
 }
 
-int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
+static int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
 {
 	/* SPE is supported on all CPUs, we don't care about the VCPU mask */
 	if (cpumask_equal(supported_cpus, cpu_possible_mask))
@@ -43,6 +43,20 @@ int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	ret = kvm_spe_check_supported_cpus(vcpu);
+	if (ret)
+		return ret;
+
+	if (!vcpu->arch.spe.initialized)
+		return -EPERM;
+
+	return 0;
+}
+
 static bool kvm_vcpu_supports_spe(struct kvm_vcpu *vcpu)
 {
 	if (!kvm_supports_spe())
@@ -102,6 +116,18 @@ int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		vcpu->arch.spe.irq_num = irq;
 		return 0;
 	}
+	case KVM_ARM_VCPU_SPE_INIT:
+		if (!vcpu->arch.spe.irq_num)
+			return -ENXIO;
+
+		if (!vgic_initialized(vcpu->kvm))
+			return -ENXIO;
+
+		if (kvm_vgic_set_owner(vcpu, vcpu->arch.spe.irq_num, &vcpu->arch.spe))
+			return -ENXIO;
+
+		vcpu->arch.spe.initialized = true;
+		return 0;
 	}
 
 	return -ENXIO;
@@ -138,6 +164,7 @@ int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_SPE_IRQ:
+	case KVM_ARM_VCPU_SPE_INIT:
 		return 0;
 	}
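The resulting bring-up order, sketched from userspace: set the PPI first,
then INIT; both must happen before KVM_RUN, and the vGIC must already be
initialized. The helper names are hypothetical, reusing the earlier
vcpu_set_spe_irq() sketch:

	/* Hypothetical bring-up order for an SPE-enabled VCPU. */
	static int vcpu_spe_setup(int vcpu_fd)
	{
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_SPE_CTRL,
			.attr  = KVM_ARM_VCPU_SPE_INIT,
		};

		if (vcpu_set_spe_irq(vcpu_fd))		/* from the sketch above */
			return -1;

		/* No payload: addr is ignored for KVM_ARM_VCPU_SPE_INIT. */
		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}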
1.0
From: Alexandru Elisei
Cc: Sudeep Holla
Subject: [RFC PATCH v4 23/39] KVM: arm64: VHE: Clear MDCR_EL2.E2PB in vcpu_put()
Date: Wed, 25 Aug 2021 17:17:59 +0100
Message-Id: <20210825161815.266051-24-alexandru.elisei@arm.com>
From: Sudeep Holla

On VHE systems, the kernel executes at EL2 and configures the profiling buffer to use the EL2&0 translation regime and to trap accesses from the guest by clearing MDCR_EL2.E2PB. In vcpu_put(), KVM includes the E2PB field in the mask of bits it preserves when writing MDCR_EL2, so the field's value is carried over. This has been correct so far, since MDCR_EL2.E2PB has the same value (0b00) for all VMs, as set by kvm_arm_setup_mdcr_el2(). However, this will change when KVM enables support for SPE in guests: for such guests KVM will configure the profiling buffer to use the EL1&0 translation regime, a setting that is obviously undesirable to preserve for the host running at EL2. Avoid this situation by explicitly clearing E2PB in vcpu_put().

[ Alexandru E: Reworded commit ]

Signed-off-by: Sudeep Holla
Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/hyp/vhe/switch.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index b3229924d243..86d4c8c33f3e 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -95,9 +95,7 @@ void deactivate_traps_vhe_put(void) { u64 mdcr_el2 = read_sysreg(mdcr_el2); - mdcr_el2 &= MDCR_EL2_HPMN_MASK | - MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT | - MDCR_EL2_TPMS; + mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS; write_sysreg(mdcr_el2, mdcr_el2);
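[ Editor's note: a stand-alone sketch of the masking difference, using the constant values from asm/kvm_arm.h as shown later in this series; illustration only, not code from the patch. ]

#include <stdint.h>

#define MDCR_EL2_HPMN_MASK  0x1fULL
#define MDCR_EL2_TPMS       (1ULL << 14)
#define MDCR_EL2_E2PB_MASK  0x3ULL
#define MDCR_EL2_E2PB_SHIFT 12

/* Old behaviour: E2PB is in the keep-mask, so a guest value (0b10,
 * EL1&0 owning regime) would survive vcpu_put() into the host's view. */
static uint64_t vcpu_put_old(uint64_t mdcr_el2)
{
	return mdcr_el2 & (MDCR_EL2_HPMN_MASK |
			   MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
			   MDCR_EL2_TPMS);
}

/* New behaviour: E2PB is cleared to 0b00, handing the profiling buffer
 * back to the EL2&0 translation regime for the host. */
static uint64_t vcpu_put_new(uint64_t mdcr_el2)
{
	return mdcr_el2 & (MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS);
}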
From patchwork Wed Aug 25 16:18:00 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 24/39] KVM: arm64: debug: Configure MDCR_EL2 when a VCPU has SPE
Date: Wed, 25 Aug 2021 17:18:00 +0100
Message-Id: <20210825161815.266051-25-alexandru.elisei@arm.com>

Allow the guest running at EL1 to use SPE when that feature is enabled for the VCPU by setting the profiling buffer owning translation regime to EL1&0 and disabling traps to the profiling control registers. Keep trapping accesses to the buffer control registers because that's needed to emulate the buffer management interrupt.
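[ Editor's note: the relevant MDCR_EL2 encodings, summarized as a sketch; field meanings are taken from the Arm ARM and the defines in the hunk below, and are not part of the patch text. ]

/*
 * MDCR_EL2 profiling controls:
 *
 *   E2PB = 0b00 : buffer uses the EL2&0 translation regime; EL1 accesses
 *                 to the buffer control registers trap to EL2
 *   E2PB = 0b10 : buffer uses the EL1&0 translation regime; EL1 accesses
 *                 to the buffer control registers still trap to EL2
 *   TPMS = 1    : EL1 accesses to the profiling control registers trap
 *
 * VCPU with SPE:    E2PB = 0b10 (MDCR_EL2_E2PB_EL1_TRAP), TPMS = 0
 * VCPU without SPE: E2PB = 0b00, TPMS = 1
 */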
Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/kvm_arm.h | 1 + arch/arm64/kvm/debug.c | 23 +++++++++++++++++++---- 2 files changed, 20 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index d436831dd706..d939da6f54dc 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -285,6 +285,7 @@ #define MDCR_EL2_TPMS (1 << 14) #define MDCR_EL2_E2PB_MASK (UL(0x3)) #define MDCR_EL2_E2PB_SHIFT (UL(12)) +#define MDCR_EL2_E2PB_EL1_TRAP (UL(2)) #define MDCR_EL2_TDRA (1 << 11) #define MDCR_EL2_TDOSA (1 << 10) #define MDCR_EL2_TDA (1 << 9) diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index d5e79d7ee6e9..64e8211366b6 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -77,24 +77,39 @@ void kvm_arm_init_debug(void) * - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR) * - Debug ROM Address (MDCR_EL2_TDRA) * - OS related registers (MDCR_EL2_TDOSA) - * - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB) * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF) * - Self-hosted Trace (MDCR_EL2_TTRF/MDCR_EL2_E2TB) */ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) { /* - * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK - * to disable guest access to the profiling and trace buffers + * This also clears MDCR_EL2_E2TB_MASK to disable guest access to the + * trace buffers. */ vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM | - MDCR_EL2_TPMS | MDCR_EL2_TTRF | MDCR_EL2_TPMCR | MDCR_EL2_TDRA | MDCR_EL2_TDOSA); + if (kvm_supports_spe() && kvm_vcpu_has_spe(vcpu)) { + /* + * Use EL1&0 for the profiling buffer translation regime and + * trap accesses to the buffer control registers; leave + * MDCR_EL2.TPMS unset and do not trap accesses to the profiling + * control registers. + */ + vcpu->arch.mdcr_el2 |= MDCR_EL2_E2PB_EL1_TRAP << MDCR_EL2_E2PB_SHIFT; + } else { + /* + * Trap accesses to the profiling control registers; leave + * MDCR_EL2.E2PB unset and use the EL2&0 translation regime for + * the profiling buffer. + */ + vcpu->arch.mdcr_el2 |= MDCR_EL2_TPMS; + } + /* Is the VM being debugged by userspace? 
*/ if (vcpu->guest_debug) /* Route all software debug exceptions to EL2 */

From patchwork Wed Aug 25 16:18:01 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 25/39] KVM: arm64: Move the write to MDCR_EL2 out of __activate_traps_common()
Date: Wed, 25 Aug 2021 17:18:01 +0100
Message-Id: <20210825161815.266051-26-alexandru.elisei@arm.com>
To run a guest with SPE, MDCR_EL2 must be configured such that the buffer owning regime is EL1&0. With VHE enabled, the host runs at EL2, and changing the owning regime to EL1&0 as early as vcpu_load() would create an extended profiling blackout window for the host. Move the MDCR_EL2 write out of __activate_traps_common() to prepare for executing it later in the run loop in the VHE case. This also makes __activate_traps_common() look more like __deactivate_traps_common(), which does not touch the register.

No functional change intended.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 1 -
 arch/arm64/kvm/hyp/nvhe/switch.c | 2 ++
 arch/arm64/kvm/hyp/vhe/switch.c | 2 ++
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index e4a2f295a394..5084a54d012e 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -92,7 +92,6 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(0, pmselr_el0); write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); } - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); } static inline void __deactivate_traps_common(void)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index f7af9688c1f7..0c70d897a493 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -41,6 +41,8 @@ static void __activate_traps(struct kvm_vcpu *vcpu) ___activate_traps(vcpu); __activate_traps_common(vcpu); + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + val = CPTR_EL2_DEFAULT; val |= CPTR_EL2_TTA | CPTR_EL2_TAM; if (!update_fp_enabled(vcpu)) {
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 86d4c8c33f3e..983ba1570d72 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -89,6 +89,8 @@ NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { __activate_traps_common(vcpu); + + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); } void deactivate_traps_vhe_put(void)

From patchwork Wed Aug 25 16:18:02 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 26/39] KVM: arm64: VHE: Change MDCR_EL2 at world switch if VCPU has SPE
Date: Wed, 25 Aug 2021 17:18:02 +0100
Message-Id: <20210825161815.266051-27-alexandru.elisei@arm.com>

When a VCPU has the SPE feature, MDCR_EL2 sets the buffer owning regime to EL1&0. Write the guest's MDCR_EL2 value as late as possible and restore the host's value as soon as possible at each world switch to make the profiling blackout window as small as possible for the host.
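[ Editor's note: the intended ordering on VHE after this patch and the previous one, sketched for illustration; not code from the series. ]

/*
 * VHE run loop with an SPE-enabled VCPU:
 *
 *   vcpu_load()          -> host MDCR_EL2 kept, host profiling still runs
 *   __activate_traps()   -> guest MDCR_EL2 written  (blackout starts)
 *   __guest_enter() ... __guest_exit()
 *   __deactivate_traps() -> host MDCR_EL2 restored  (blackout ends)
 *   vcpu_put()
 *
 * Without the SPE feature, the write keeps happening at
 * vcpu_load()/vcpu_put(), as before.
 */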
Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/debug.c | 14 +++++++++++-- arch/arm64/kvm/hyp/vhe/switch.c | 33 +++++++++++++++++++++++------- arch/arm64/kvm/hyp/vhe/sysreg-sr.c | 2 +- 4 files changed, 40 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 9d60b3006efc..657d0c94cf82 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -95,7 +95,7 @@ void __sve_restore_state(void *sve_pffr, u32 *fpsr); #ifndef __KVM_NVHE_HYPERVISOR__ void activate_traps_vhe_load(struct kvm_vcpu *vcpu); -void deactivate_traps_vhe_put(void); +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu); #endif u64 __guest_enter(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index 64e8211366b6..70712cd85f32 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -249,9 +249,19 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; /* Write mdcr_el2 changes since vcpu_load on VHE systems */ - if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2) - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + if (has_vhe()) { + /* + * MDCR_EL2 can modify the SPE buffer owning regime, defer the + * write until the VCPU is run. + */ + if (kvm_vcpu_has_spe(vcpu)) + goto out; + + if (orig_mdcr_el2 != vcpu->arch.mdcr_el2) + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + } +out: trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1)); } diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 983ba1570d72..ec4e179d56ae 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -31,12 +31,29 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data); DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); +static void __restore_host_mdcr_el2(struct kvm_vcpu *vcpu) +{ + u64 mdcr_el2; + + mdcr_el2 = read_sysreg(mdcr_el2); + mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS; + write_sysreg(mdcr_el2, mdcr_el2); +} + +static void __restore_guest_mdcr_el2(struct kvm_vcpu *vcpu) +{ + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); +} + static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; ___activate_traps(vcpu); + if (kvm_vcpu_has_spe(vcpu)) + __restore_guest_mdcr_el2(vcpu); + val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; val &= ~CPACR_EL1_ZEN; @@ -81,7 +98,11 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu) */ asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT)); + if (kvm_vcpu_has_spe(vcpu)) + __restore_host_mdcr_el2(vcpu); + write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); + write_sysreg(vectors, vbar_el1); } NOKPROBE_SYMBOL(__deactivate_traps); @@ -90,16 +111,14 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { __activate_traps_common(vcpu); - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); + if (!kvm_vcpu_has_spe(vcpu)) + __restore_guest_mdcr_el2(vcpu); } -void deactivate_traps_vhe_put(void) +void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu) { - u64 mdcr_el2 = read_sysreg(mdcr_el2); - - mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS; - - write_sysreg(mdcr_el2, mdcr_el2); + if (!kvm_vcpu_has_spe(vcpu)) + __restore_host_mdcr_el2(vcpu); __deactivate_traps_common(); } diff --git a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c index 2a0b8c88d74f..007a12dd4351 100644 --- a/arch/arm64/kvm/hyp/vhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/vhe/sysreg-sr.c @@ 
-101,7 +101,7 @@ void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) struct kvm_cpu_context *host_ctxt; host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; - deactivate_traps_vhe_put(); + deactivate_traps_vhe_put(vcpu); __sysreg_save_el1_state(guest_ctxt); __sysreg_save_user_state(guest_ctxt);

From patchwork Wed Aug 25 16:18:03 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 27/39] KVM: arm64: Add SPE system registers to VCPU context
Date: Wed, 25 Aug 2021 17:18:03 +0100
Message-Id:
<20210825161815.266051-28-alexandru.elisei@arm.com>

Add the SPE system registers to the VCPU context. Omitted are PMBIDR_EL1, which cannot be trapped, and PMSIDR_EL1, which is a read-only register. The registers that KVM traps are stored in the sys_regs array on a write, and returned on a read; complete emulation and save/restore for all the registers on world switch will be added in future patches.

KVM exposes FEAT_SPEv1p1 to guests in the ID_AA64DFR0_EL1 register and doesn't trap accesses to the profiling control registers. If the hardware supports FEAT_SPEv1p2, the guest will be able to access the PMSNEVFR_EL1 register, which is UNDEFINED for FEAT_SPEv1p1. However, that inconsistency is somewhat consistent with the architecture, because PMBIDR_EL1 behaves similarly: the register is UNDEFINED if SPE is missing, but a VCPU without the SPE feature can still read the register because there is no (easy) way for KVM to trap accesses to it.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h | 12 +++++++
 arch/arm64/include/asm/kvm_spe.h | 7 ++++
 arch/arm64/kvm/spe.c | 10 ++++++
 arch/arm64/kvm/sys_regs.c | 54 ++++++++++++++++++++++++-------
 4 files changed, 71 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7b957e439b3d..4c0d3d5ba285 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -237,6 +237,18 @@ enum vcpu_sysreg { TFSR_EL1, /* Tag Fault Status Register (EL1) */ TFSRE0_EL1, /* Tag Fault Status Register (EL0) */ + /* Statistical Profiling Extension Registers. */ + PMSCR_EL1, /* Statistical Profiling Control Register */ + PMSICR_EL1, /* Sampling Interval Counter Register */ + PMSIRR_EL1, /* Sampling Interval Reload Register */ + PMSFCR_EL1, /* Sampling Filter Control Register */ + PMSEVFR_EL1, /* Sampling Event Filter Register */ + PMSLATFR_EL1, /* Sampling Latency Filter Register */ + PMBLIMITR_EL1, /* Profiling Buffer Limit Address Register */ + PMBPTR_EL1, /* Profiling Buffer Write Pointer Register */ + PMBSR_EL1, /* Profiling Buffer Status/syndrome Register */ + PMSCR_EL2, /* Statistical Profiling Control Register, EL2 */ + /* 32bit specific registers.
Keep them at the end of the range */ DACR32_EL2, /* Domain Access Control Register */ IFSR32_EL2, /* Instruction Fault Status Register */ diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h index 2217b821ab37..934eedb0de46 100644 --- a/arch/arm64/include/asm/kvm_spe.h +++ b/arch/arm64/include/asm/kvm_spe.h @@ -25,9 +25,13 @@ void kvm_spe_init_supported_cpus(void); void kvm_spe_vm_init(struct kvm *kvm); int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu); +void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val); +u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg); + int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); + #else #define kvm_supports_spe() (false) @@ -38,6 +42,9 @@ static inline void kvm_spe_init_supported_cpus(void) {} static inline void kvm_spe_vm_init(struct kvm *kvm) {} static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -ENOEXEC; } +static inline void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val) {} +static inline u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) { return 0; } + static inline int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr) { diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c index 801ceb66a3d0..f760ccd8306a 100644 --- a/arch/arm64/kvm/spe.c +++ b/arch/arm64/kvm/spe.c @@ -57,6 +57,16 @@ int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) return 0; } +void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val) +{ + __vcpu_sys_reg(vcpu, reg) = val; +} + +u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) +{ + return __vcpu_sys_reg(vcpu, reg); +} + static bool kvm_vcpu_supports_spe(struct kvm_vcpu *vcpu) { if (!kvm_supports_spe()) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index ab7370b7a44b..843822be5695 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -594,6 +594,33 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1); } +static unsigned int spe_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *r) +{ + if (kvm_vcpu_has_spe(vcpu)) + return 0; + + return REG_HIDDEN; +} + +static bool access_spe_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ int reg = r->reg; + u64 val = p->regval; + + if (reg < PMBLIMITR_EL1) { + print_sys_reg_msg(p, "Unsupported guest SPE register access at: %lx [%08lx]\n", + *vcpu_pc(vcpu), *vcpu_cpsr(vcpu)); + } + + if (p->is_write) + kvm_spe_write_sysreg(vcpu, reg, val); + else + p->regval = kvm_spe_read_sysreg(vcpu, reg); + + return true; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -956,6 +983,10 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, { PMU_SYS_REG(SYS_PMEVTYPERn_EL0(n)), \ .access = access_pmu_evtyper, .reg = (PMEVTYPER0_EL0 + n), } +#define SPE_SYS_REG(r) \ + SYS_DESC(r), .access = access_spe_reg, .reset = reset_val, \ + .val = 0, .visibility = spe_visibility + static bool undef_access(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { @@ -1530,18 +1561,17 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 }, { SYS_DESC(SYS_PAR_EL1), NULL, 
reset_unknown, PAR_EL1 }, - { SYS_DESC(SYS_PMSCR_EL1), undef_access }, - { SYS_DESC(SYS_PMSNEVFR_EL1), undef_access }, - { SYS_DESC(SYS_PMSICR_EL1), undef_access }, - { SYS_DESC(SYS_PMSIRR_EL1), undef_access }, - { SYS_DESC(SYS_PMSFCR_EL1), undef_access }, - { SYS_DESC(SYS_PMSEVFR_EL1), undef_access }, - { SYS_DESC(SYS_PMSLATFR_EL1), undef_access }, - { SYS_DESC(SYS_PMSIDR_EL1), undef_access }, - { SYS_DESC(SYS_PMBLIMITR_EL1), undef_access }, - { SYS_DESC(SYS_PMBPTR_EL1), undef_access }, - { SYS_DESC(SYS_PMBSR_EL1), undef_access }, - /* PMBIDR_EL1 is not trapped */ + { SPE_SYS_REG(SYS_PMSCR_EL1), .reg = PMSCR_EL1 }, + { SPE_SYS_REG(SYS_PMSICR_EL1), .reg = PMSICR_EL1 }, + { SPE_SYS_REG(SYS_PMSIRR_EL1), .reg = PMSIRR_EL1 }, + { SPE_SYS_REG(SYS_PMSFCR_EL1), .reg = PMSFCR_EL1 }, + { SPE_SYS_REG(SYS_PMSEVFR_EL1), .reg = PMSEVFR_EL1 }, + { SPE_SYS_REG(SYS_PMSLATFR_EL1), .reg = PMSLATFR_EL1 }, + { SPE_SYS_REG(SYS_PMSIDR_EL1), .reset = NULL }, + { SPE_SYS_REG(SYS_PMBLIMITR_EL1), .reg = PMBLIMITR_EL1 }, + { SPE_SYS_REG(SYS_PMBPTR_EL1), .reg = PMBPTR_EL1 }, + { SPE_SYS_REG(SYS_PMBSR_EL1), .reg = PMBSR_EL1 }, + /* PMBIDR_EL1 and PMSCR_EL2 are not trapped */ { PMU_SYS_REG(SYS_PMINTENSET_EL1), .access = access_pminten, .reg = PMINTENSET_EL1 },
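[ Editor's note: how a trapped buffer-register access flows through the new handler, sketched for illustration; the guest traps because patch 24 set E2PB to 0b10. ]

/*
 *   guest: msr PMBSR_EL1, x0        // traps to EL2
 *   KVM:   access_spe_reg() -> kvm_spe_write_sysreg()
 *              __vcpu_sys_reg(vcpu, PMBSR_EL1) = x0
 *
 *   guest: mrs x0, PMBSR_EL1        // traps to EL2
 *   KVM:   access_spe_reg() -> kvm_spe_read_sysreg()
 *              x0 = __vcpu_sys_reg(vcpu, PMBSR_EL1)
 */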
From patchwork Wed Aug 25 16:18:04 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 28/39] KVM: arm64: nVHE: Save PMSCR_EL1 to the host context
Date: Wed, 25 Aug 2021 17:18:04 +0100
Message-Id: <20210825161815.266051-29-alexandru.elisei@arm.com>

The SPE registers are now part of the KVM register context; use the host context to save the value of PMSCR_EL1 instead of a dedicated field in host_debug_state.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h | 2 --
 arch/arm64/include/asm/kvm_hyp.h | 6 ++++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 10 ++++++----
 arch/arm64/kvm/hyp/nvhe/switch.c | 4 ++--
 4 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4c0d3d5ba285..351b77dc7732 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -356,8 +356,6 @@ struct kvm_vcpu_arch { struct { /* {Break,watch}point registers */ struct kvm_guest_debug_arch regs; - /* Statistical profiling extension */ - u64 pmscr_el1; /* Self-hosted trace */ u64 trfcr_el1; } host_debug_state;
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 657d0c94cf82..48619c2c0dc6 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -84,8 +84,10 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu); void __debug_switch_to_host(struct kvm_vcpu *vcpu); #ifdef __KVM_NVHE_HYPERVISOR__ -void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu); -void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu); +void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt); +void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt); #endif void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 7d3f25868cae..6db58722f045 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -81,11 +81,12 @@ static void __debug_restore_trace(u64 trfcr_el1) write_sysreg_s(trfcr_el1, SYS_TRFCR_EL1); } -void
__debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu) +void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt) { /* Disable and flush SPE data generation */ if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) - __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); + __debug_save_spe(__ctxt_sys_reg(host_ctxt, PMSCR_EL1)); /* Disable and flush Self-Hosted Trace generation */ if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1); @@ -96,10 +97,11 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) __debug_switch_to_guest_common(vcpu); } -void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu) +void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt) { if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) - __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); + __debug_restore_spe(ctxt_sys_reg(host_ctxt, PMSCR_EL1)); if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1); }
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 0c70d897a493..04d654e71a6e 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -200,7 +200,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and * before we load guest Stage1. */ - __debug_save_host_buffers_nvhe(vcpu); + __debug_save_host_buffers_nvhe(vcpu, host_ctxt); __kvm_adjust_pc(vcpu); @@ -248,7 +248,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) * This must come after restoring the host sysregs, since a non-VHE * system may enable SPE here and make use of the TTBRs. */ - __debug_restore_host_buffers_nvhe(vcpu); + __debug_restore_host_buffers_nvhe(vcpu, host_ctxt); if (pmu_switch_needed) __pmu_switch_to_host(host_ctxt);

From patchwork Wed Aug 25 16:18:05 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 29/39] KVM: arm64: Rename DEBUG_STATE_SAVE_SPE -> DEBUG_SAVE_SPE_BUFFER flags
Date: Wed, 25 Aug 2021 17:18:05 +0100
Message-Id: <20210825161815.266051-30-alexandru.elisei@arm.com>

Setting the KVM_ARM64_DEBUG_STATE_SAVE_SPE flag makes KVM stop profiling in order to drain the buffer when switching to the guest, if the buffer was enabled, and then re-enable profiling on the return to the host. Rename it to KVM_ARM64_DEBUG_SAVE_SPE_BUFFER to avoid any confusion with what an SPE-enabled VCPU will do, which is to save and restore the full SPE state on every world switch, not just part of it some of the time. The new name also matches the name of the function __debug_save_host_buffers_nvhe(), which uses the flag to decide if the buffer should be drained.

KVM_ARM64_DEBUG_STATE_SAVE_TRBE received the same treatment and was renamed to KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER, for consistency and to better reflect what it does.
CC: Suzuki K Poulose Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/kvm_host.h | 24 ++++++++++++------------ arch/arm64/kvm/debug.c | 11 ++++++----- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 8 ++++---- 3 files changed, 22 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 351b77dc7732..e704847a7645 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -439,18 +439,18 @@ struct kvm_vcpu_arch { }) /* vcpu_arch flags field values: */ -#define KVM_ARM64_DEBUG_DIRTY (1 << 0) -#define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ -#define KVM_ARM64_FP_HOST (1 << 2) /* host FP regs loaded */ -#define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ -#define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ -#define KVM_ARM64_GUEST_HAS_SVE (1 << 5) /* SVE exposed to guest */ -#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */ -#define KVM_ARM64_GUEST_HAS_PTRAUTH (1 << 7) /* PTRAUTH exposed to guest */ -#define KVM_ARM64_PENDING_EXCEPTION (1 << 8) /* Exception pending */ -#define KVM_ARM64_EXCEPT_MASK (7 << 9) /* Target EL/MODE */ -#define KVM_ARM64_DEBUG_STATE_SAVE_SPE (1 << 12) /* Save SPE context if active */ -#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE (1 << 13) /* Save TRBE context if active */ +#define KVM_ARM64_DEBUG_DIRTY (1 << 0) +#define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ +#define KVM_ARM64_FP_HOST (1 << 2) /* host FP regs loaded */ +#define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ +#define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ +#define KVM_ARM64_GUEST_HAS_SVE (1 << 5) /* SVE exposed to guest */ +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */ +#define KVM_ARM64_GUEST_HAS_PTRAUTH (1 << 7) /* PTRAUTH exposed to guest */ +#define KVM_ARM64_PENDING_EXCEPTION (1 << 8) /* Exception pending */ +#define KVM_ARM64_EXCEPT_MASK (7 << 9) /* Target EL/MODE */ +#define KVM_ARM64_DEBUG_SAVE_SPE_BUFFER (1 << 12) /* Save SPE buffer if active */ +#define KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER (1 << 13) /* Save TRBE buffer if active */ #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index 70712cd85f32..6e5fc1887215 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -299,22 +299,23 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu) return; dfr0 = read_sysreg(id_aa64dfr0_el1); + /* * If SPE is present on this CPU and is available at current EL, - * we may need to check if the host state needs to be saved. + * we may need to check if the host buffer needs to be drained. 
*/ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) && !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT))) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE; + vcpu->arch.flags |= KVM_ARM64_DEBUG_SAVE_SPE_BUFFER; /* Check if we have TRBE implemented and available at the host */ if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) && !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG)) - vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE; + vcpu->arch.flags |= KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER; } void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu) { - vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE | - KVM_ARM64_DEBUG_STATE_SAVE_TRBE); + vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_SAVE_SPE_BUFFER | + KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER); }
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 6db58722f045..186b90b5fd20 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -85,10 +85,10 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt) { /* Disable and flush SPE data generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER) __debug_save_spe(__ctxt_sys_reg(host_ctxt, PMSCR_EL1)); /* Disable and flush Self-Hosted Trace generation */ - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER) __debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1); } @@ -100,9 +100,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt) { - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE) + if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER) __debug_restore_spe(ctxt_sys_reg(host_ctxt, PMSCR_EL1)); - if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE) + if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER) __debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1); }

From patchwork Wed Aug 25 16:18:06 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 30/39] KVM: arm64: nVHE: Context switch SPE state if VCPU has SPE
Date: Wed, 25 Aug 2021 17:18:06 +0100
Message-Id: <20210825161815.266051-31-alexandru.elisei@arm.com>

For non-VHE systems, make the SPE register state part of the context that is saved and restored at each world switch. The SPE buffer management interrupt will be handled in a later patch.
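[ Editor's note: the delicate part of the save path is the drain that must precede reading the buffer pointer back. Condensed, the ordering implemented by __spe_save_host_state_nvhe()/__spe_save_guest_state_nvhe() in the diff below is: ]

/*
 * if (PMBLIMITR_EL1.E) {            // buffer enabled?
 *         psb csync;                // complete outstanding sample writes
 *         dsb nsh;                  // make the buffer writes visible
 *         isb;                      // PMBPTR_EL1 is updated by indirect
 *                                   // writes; synchronize before reading
 * }
 * save PMB*_EL1 and PMSCR_EL1 (plus PMSCR_EL2 for the host);
 * __spe_save_common_state();        // the PMS* sampling controls
 */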
Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/kvm_hyp.h | 19 ++++++ arch/arm64/kvm/hyp/include/hyp/spe-sr.h | 32 +++++++++ arch/arm64/kvm/hyp/nvhe/Makefile | 1 + arch/arm64/kvm/hyp/nvhe/debug-sr.c | 6 +- arch/arm64/kvm/hyp/nvhe/spe-sr.c | 87 +++++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/switch.c | 29 +++++++-- 6 files changed, 165 insertions(+), 9 deletions(-) create mode 100644 arch/arm64/kvm/hyp/include/hyp/spe-sr.h create mode 100644 arch/arm64/kvm/hyp/nvhe/spe-sr.c diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 48619c2c0dc6..06e77a739458 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -88,6 +88,25 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); +#ifdef CONFIG_KVM_ARM_SPE +void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt); +void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *guest_ctxt); +void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt); +void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *guest_ctxt); +#else +static inline void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt) {} +static inline void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *guest_ctxt) {} +static inline void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *host_ctxt) {} +static inline void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu, + struct kvm_cpu_context *guest_ctxt) {} +#endif #endif void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); diff --git a/arch/arm64/kvm/hyp/include/hyp/spe-sr.h b/arch/arm64/kvm/hyp/include/hyp/spe-sr.h new file mode 100644 index 000000000000..d5f8f3ffc7d4 --- /dev/null +++ b/arch/arm64/kvm/hyp/include/hyp/spe-sr.h @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2021 - ARM Ltd + * Author: Alexandru Elisei + */ + +#ifndef __ARM64_KVM_HYP_SPE_SR_H__ +#define __ARM64_KVM_HYP_SPE_SR_H__ + +#include + +#include + +static inline void __spe_save_common_state(struct kvm_cpu_context *ctxt) +{ + ctxt_sys_reg(ctxt, PMSICR_EL1) = read_sysreg_s(SYS_PMSICR_EL1); + ctxt_sys_reg(ctxt, PMSIRR_EL1) = read_sysreg_s(SYS_PMSIRR_EL1); + ctxt_sys_reg(ctxt, PMSFCR_EL1) = read_sysreg_s(SYS_PMSFCR_EL1); + ctxt_sys_reg(ctxt, PMSEVFR_EL1) = read_sysreg_s(SYS_PMSEVFR_EL1); + ctxt_sys_reg(ctxt, PMSLATFR_EL1) = read_sysreg_s(SYS_PMSLATFR_EL1); +} + +static inline void __spe_restore_common_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg_s(ctxt_sys_reg(ctxt, PMSICR_EL1), SYS_PMSICR_EL1); + write_sysreg_s(ctxt_sys_reg(ctxt, PMSIRR_EL1), SYS_PMSIRR_EL1); + write_sysreg_s(ctxt_sys_reg(ctxt, PMSFCR_EL1), SYS_PMSFCR_EL1); + write_sysreg_s(ctxt_sys_reg(ctxt, PMSEVFR_EL1), SYS_PMSEVFR_EL1); + write_sysreg_s(ctxt_sys_reg(ctxt, PMSLATFR_EL1), SYS_PMSLATFR_EL1); +} + +#endif /* __ARM64_KVM_HYP_SPE_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 5df6193fc430..37dca45d85d5 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -15,6 +15,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs)) obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \ hyp-main.o hyp-smp.o psci-relay.o 
	 early_alloc.o stub.o page_alloc.o \
	 cache.o setup.o mm.o mem_protect.o
+obj-$(CONFIG_KVM_ARM_SPE) += spe-sr.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 186b90b5fd20..1622615954b2 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -85,7 +85,8 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu,
				    struct kvm_cpu_context *host_ctxt)
 {
	/* Disable and flush SPE data generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
+	if (!kvm_vcpu_has_spe(vcpu) &&
+	    vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
		__debug_save_spe(__ctxt_sys_reg(host_ctxt, PMSCR_EL1));
	/* Disable and flush Self-Hosted Trace generation */
	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
@@ -100,7 +101,8 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu,
				       struct kvm_cpu_context *host_ctxt)
 {
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
+	if (!kvm_vcpu_has_spe(vcpu) &&
+	    vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
		__debug_restore_spe(ctxt_sys_reg(host_ctxt, PMSCR_EL1));
	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
		__debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
diff --git a/arch/arm64/kvm/hyp/nvhe/spe-sr.c b/arch/arm64/kvm/hyp/nvhe/spe-sr.c
new file mode 100644
index 000000000000..46e47c9fd08f
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/spe-sr.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 - ARM Ltd
+ * Author: Alexandru Elisei
+ */
+
+#include
+
+#include
+
+#include
+
+/*
+ * The owning exception level remains unchanged from EL1 during the world
+ * switch, which means that profiling is disabled for as long as we execute
+ * at EL2. KVM does not need to explicitly disable profiling, like it does
+ * when the VCPU does not have SPE and we change the buffer owning exception
+ * level, nor does it need to do any synchronization around sysreg
+ * save/restore.
+ */
+
+void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu,
+				struct kvm_cpu_context *host_ctxt)
+{
+	u64 pmblimitr;
+
+	pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (pmblimitr & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
+		psb_csync();
+		dsb(nsh);
+		/*
+		 * The buffer performs indirect writes to system registers, a
+		 * context synchronization event is needed before the new
+		 * PMBPTR_EL1 value is visible to subsequent direct reads.
+		 */
+		isb();
+	}
+
+	ctxt_sys_reg(host_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1);
+	ctxt_sys_reg(host_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1);
+	ctxt_sys_reg(host_ctxt, PMBLIMITR_EL1) = pmblimitr;
+	ctxt_sys_reg(host_ctxt, PMSCR_EL1) = read_sysreg_s(SYS_PMSCR_EL1);
+	ctxt_sys_reg(host_ctxt, PMSCR_EL2) = read_sysreg_el2(SYS_PMSCR);
+
+	__spe_save_common_state(host_ctxt);
+}
+
+void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu,
+				 struct kvm_cpu_context *guest_ctxt)
+{
+	if (read_sysreg_s(SYS_PMBLIMITR_EL1) & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
+		psb_csync();
+		dsb(nsh);
+		/* Ensure hardware updates to PMBPTR_EL1 are visible. */
+		isb();
+	}
+
+	ctxt_sys_reg(guest_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1);
+	ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1);
+	/* PMBLIMITR_EL1 is updated only on a trapped write.
+	 */
+	ctxt_sys_reg(guest_ctxt, PMSCR_EL1) = read_sysreg_s(SYS_PMSCR_EL1);
+
+	__spe_save_common_state(guest_ctxt);
+}
+
+void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu,
+				   struct kvm_cpu_context *host_ctxt)
+{
+	__spe_restore_common_state(host_ctxt);
+
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBSR_EL1), SYS_PMBSR_EL1);
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1);
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMSCR_EL1), SYS_PMSCR_EL1);
+	write_sysreg_el2(ctxt_sys_reg(host_ctxt, PMSCR_EL2), SYS_PMSCR);
+}
+
+void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu,
+				    struct kvm_cpu_context *guest_ctxt)
+{
+	__spe_restore_common_state(guest_ctxt);
+
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBSR_EL1), SYS_PMBSR_EL1);
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1);
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMSCR_EL1), SYS_PMSCR_EL1);
+	write_sysreg_el2(0, SYS_PMSCR);
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 04d654e71a6e..62ef2a5789ba 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -194,12 +194,16 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__sysreg_save_state_nvhe(host_ctxt);

 	/*
-	 * We must flush and disable the SPE buffer for nVHE, as
-	 * the translation regime(EL1&0) is going to be loaded with
-	 * that of the guest. And we must do this before we change the
-	 * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
-	 * before we load guest Stage1.
+	 * If the VCPU has the SPE feature bit set, then we save the host's
+	 * SPE context.
+	 *
+	 * Otherwise, we only flush and disable the SPE buffer for nVHE, as
+	 * the translation regime (EL1&0) is going to be loaded with that of
+	 * the guest. And we must do this before we change the translation
+	 * regime to EL2 (via MDCR_EL2_E2PB == 0) and before we load guest
+	 * Stage1.
 	 */
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_save_host_state_nvhe(vcpu, host_ctxt);
 	__debug_save_host_buffers_nvhe(vcpu, host_ctxt);

 	__kvm_adjust_pc(vcpu);
@@ -218,6 +222,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 	__load_guest_stage2(kern_hyp_va(vcpu->arch.hw_mmu));
 	__activate_traps(vcpu);

+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_restore_guest_state_nvhe(vcpu, guest_ctxt);
+
 	__hyp_vgic_restore_state(vcpu);
 	__timer_enable_traps(vcpu);

@@ -232,6 +239,10 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)

 	__sysreg_save_state_nvhe(guest_ctxt);
 	__sysreg32_save_state(vcpu);
+
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_save_guest_state_nvhe(vcpu, guest_ctxt);
+
 	__timer_disable_traps(vcpu);
 	__hyp_vgic_save_state(vcpu);

@@ -244,10 +255,14 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		__fpsimd_save_fpexc32(vcpu);

 	__debug_switch_to_host(vcpu);
+
 	/*
-	 * This must come after restoring the host sysregs, since a non-VHE
-	 * system may enable SPE here and make use of the TTBRs.
+	 * Restoring the host context must come after restoring the host
+	 * sysregs, since a non-VHE system may enable SPE here and make use
+	 * of the TTBRs.
	 */
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_restore_host_state_nvhe(vcpu, host_ctxt);
 	__debug_restore_host_buffers_nvhe(vcpu, host_ctxt);

 	if (pmu_switch_needed
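The drain sequence both save paths rely on (PSB CSYNC, then DSB, then ISB
before reading PMBPTR_EL1) can be pictured with this toy, where the barriers
are no-op stand-ins that only print what the real instructions guarantee:

	#include <stdio.h>

	/* No-op stand-ins for the barriers used by the hyp code. */
	#define psb_csync() puts("psb csync: complete in-flight SPE writes")
	#define dsb_nsh()   puts("dsb nsh:   make buffer writes observable")
	#define isb()       puts("isb:       sync indirect PMBPTR_EL1 update")

	/* Fake register read; the real code uses read_sysreg_s(). */
	static unsigned long long read_pmbptr(void) { return 0x1000; }

	int main(void)
	{
		int buffer_enabled = 1;	/* PMBLIMITR_EL1.E in the real code */

		if (buffer_enabled) {
			psb_csync();	/* drain profiling data */
			dsb_nsh();	/* order the memory writes */
			isb();		/* PMBPTR_EL1 is written indirectly */
		}
		printf("PMBPTR_EL1 snapshot: %#llx\n", read_pmbptr());
		return 0;
	}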
From patchwork Wed Aug 25 16:18:07 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 31/39] KVM: arm64: VHE: Context switch SPE state if VCPU has SPE
Date: Wed, 25 Aug 2021 17:18:07 +0100
Message-Id: <20210825161815.266051-32-alexandru.elisei@arm.com>

Similar to the non-VHE case, save and restore the SPE register state at
each world switch for VHE enabled systems if the VCPU has the SPE feature.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_hyp.h |  24 +++++-
 arch/arm64/include/asm/sysreg.h  |   2 +
 arch/arm64/kvm/hyp/vhe/Makefile  |   1 +
 arch/arm64/kvm/hyp/vhe/spe-sr.c  | 128 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c  |   8 ++
 5 files changed, 161 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/vhe/spe-sr.c

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 06e77a739458..03bc51049996 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -106,8 +106,28 @@ static inline void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu,
						 struct kvm_cpu_context *host_ctxt) {}
 static inline void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu,
						  struct kvm_cpu_context *guest_ctxt) {}
-#endif
-#endif
+#endif /* CONFIG_KVM_ARM_SPE */
+#else
+#ifdef CONFIG_KVM_ARM_SPE
+void __spe_save_host_state_vhe(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt);
+void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu,
+				struct kvm_cpu_context *guest_ctxt);
+void __spe_restore_host_state_vhe(struct kvm_vcpu *vcpu,
+				  struct kvm_cpu_context *host_ctxt);
+void __spe_restore_guest_state_vhe(struct kvm_vcpu *vcpu,
+				   struct kvm_cpu_context *guest_ctxt);
+#else
+static inline void __spe_save_host_state_vhe(struct kvm_vcpu *vcpu,
+					     struct kvm_cpu_context *host_ctxt) {}
+static inline void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu,
+					      struct kvm_cpu_context *guest_ctxt) {}
+static inline void __spe_restore_host_state_vhe(struct kvm_vcpu *vcpu,
+						struct kvm_cpu_context *host_ctxt) {}
+static inline void __spe_restore_guest_state_vhe(struct kvm_vcpu *vcpu,
+						 struct kvm_cpu_context *guest_ctxt) {}
+#endif /* CONFIG_KVM_ARM_SPE */
+#endif /* __KVM_NVHE_HYPERVISOR__ */

 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7b9c3acba684..b2d691bc1049 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -267,6 +267,8 @@
 #define SYS_PMSCR_EL1_TS_SHIFT		5
 #define SYS_PMSCR_EL1_PCT_SHIFT		6

+#define SYS_PMSCR_EL12			sys_reg(3, 5, 9, 9, 0)
+
 #define SYS_PMSCR_EL2			sys_reg(3, 4, 9, 9, 0)
 #define SYS_PMSCR_EL2_E0HSPE_SHIFT	0
 #define SYS_PMSCR_EL2_E2SPE_SHIFT	1
diff --git a/arch/arm64/kvm/hyp/vhe/Makefile b/arch/arm64/kvm/hyp/vhe/Makefile
index 96bec0ecf9dd..7cb4a9e5ceb0 100644
--- a/arch/arm64/kvm/hyp/vhe/Makefile
+++ b/arch/arm64/kvm/hyp/vhe/Makefile
@@ -7,5 +7,6 @@ asflags-y := -D__KVM_VHE_HYPERVISOR__
 ccflags-y := -D__KVM_VHE_HYPERVISOR__

 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o
+obj-$(CONFIG_KVM_ARM_SPE) += spe-sr.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o
diff --git a/arch/arm64/kvm/hyp/vhe/spe-sr.c b/arch/arm64/kvm/hyp/vhe/spe-sr.c
new file mode 100644
index 000000000000..00eab9e2ec60
--- /dev/null
+++ b/arch/arm64/kvm/hyp/vhe/spe-sr.c
@@ -0,0 +1,128 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 - ARM Ltd
+ */
+
+#include
+
+#include
+#include
+
+#include
+
+/*
+ * Disable host profiling, drain the buffer and save the host SPE context.
+ * Extra care must be taken because profiling might be in progress.
+ */
+void __spe_save_host_state_vhe(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt)
+{
+	u64 pmblimitr, pmscr_el2;
+
+	/* Disable profiling while the SPE context is being switched. */
+	pmscr_el2 = read_sysreg_el2(SYS_PMSCR);
+	write_sysreg_el2(0, SYS_PMSCR);
+	isb();
+
+	pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (pmblimitr & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
+		psb_csync();
+		dsb(nsh);
+		/* Ensure hardware updates to PMBPTR_EL1 are visible. */
+		isb();
+	}
+
+	ctxt_sys_reg(host_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1);
+	ctxt_sys_reg(host_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1);
+	ctxt_sys_reg(host_ctxt, PMBLIMITR_EL1) = pmblimitr;
+	ctxt_sys_reg(host_ctxt, PMSCR_EL2) = pmscr_el2;
+
+	__spe_save_common_state(host_ctxt);
+}
+NOKPROBE_SYMBOL(__spe_save_host_state_vhe);
+
+/*
+ * Drain the guest's buffer and save the SPE state. Profiling is disabled
+ * because we're at EL2 and the buffer owning exception level is EL1.
+ */
+void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu,
+				struct kvm_cpu_context *guest_ctxt)
+{
+	u64 pmblimitr;
+
+	/*
+	 * We're at EL2 and the buffer owning regime is EL1, which means that
+	 * profiling is disabled. After we disable traps and restore the
+	 * host's MDCR_EL2, profiling will remain disabled because we've
+	 * disabled it via PMSCR_EL2 when we saved the host's SPE state. All
+	 * that's needed here is to drain the buffer.
+	 */
+	pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (pmblimitr & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
+		psb_csync();
+		dsb(nsh);
+		/* Ensure hardware updates to PMBPTR_EL1 are visible. */
+		isb();
+	}
+
+	ctxt_sys_reg(guest_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1);
+	ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1);
+	/* PMBLIMITR_EL1 is updated only on a trapped write. */
+	ctxt_sys_reg(guest_ctxt, PMSCR_EL1) = read_sysreg_el1(SYS_PMSCR);
+
+	__spe_save_common_state(guest_ctxt);
+}
+NOKPROBE_SYMBOL(__spe_save_guest_state_vhe);
+
+/*
+ * Restore the host SPE context. Special care must be taken because we're
+ * potentially resuming a profiling session which was stopped when we saved
+ * the host SPE register state.
+ */
+void __spe_restore_host_state_vhe(struct kvm_vcpu *vcpu,
+				  struct kvm_cpu_context *host_ctxt)
+{
+	__spe_restore_common_state(host_ctxt);
+
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1);
+	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBSR_EL1), SYS_PMBSR_EL1);
+
+	/*
+	 * Make sure buffer pointer and limit is updated first, so we don't
+	 * end up in a situation where profiling is enabled and the buffer
+	 * uses the values programmed by the guest.
+	 *
+	 * This also serves to make sure the write to MDCR_EL2 which changes
+	 * the buffer owning Exception level is visible.
+	 *
+	 * After the synchronization, profiling is still disabled at EL2,
+	 * because we cleared PMSCR_EL2 when we saved the host context.
+	 */
+	isb();
+
+	write_sysreg_el2(ctxt_sys_reg(host_ctxt, PMSCR_EL2), SYS_PMSCR);
+}
+NOKPROBE_SYMBOL(__spe_restore_host_state_vhe);
+
+/*
+ * Restore the guest SPE context while profiling is disabled at EL2.
+ */
+void __spe_restore_guest_state_vhe(struct kvm_vcpu *vcpu,
+				   struct kvm_cpu_context *guest_ctxt)
+{
+	__spe_restore_common_state(guest_ctxt);
+
+	/*
+	 * No synchronization needed here. Profiling is disabled at EL2
+	 * because PMSCR_EL2 has been cleared when saving the host's context,
+	 * and the buffer has already been drained.
+	 */
+
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBSR_EL1), SYS_PMBSR_EL1);
+	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1);
+	write_sysreg_el1(ctxt_sys_reg(guest_ctxt, PMSCR_EL1), SYS_PMSCR);
+	/* PMSCR_EL2 has been cleared when saving the host state. */
+}
+NOKPROBE_SYMBOL(__spe_restore_guest_state_vhe);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index ec4e179d56ae..46da018f4a5a 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -135,6 +135,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	guest_ctxt = &vcpu->arch.ctxt;

 	sysreg_save_host_state_vhe(host_ctxt);
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_save_host_state_vhe(vcpu, host_ctxt);

 	/*
	 * ARM erratum 1165522 requires us to configure both stage 1 and
@@ -153,6 +155,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	__kvm_adjust_pc(vcpu);

 	sysreg_restore_guest_state_vhe(guest_ctxt);
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_restore_guest_state_vhe(vcpu, guest_ctxt);
 	__debug_switch_to_guest(vcpu);

 	do {
@@ -163,10 +167,14 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	} while (fixup_guest_exit(vcpu, &exit_code));

 	sysreg_save_guest_state_vhe(guest_ctxt);
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_save_guest_state_vhe(vcpu, guest_ctxt);

 	__deactivate_traps(vcpu);

 	sysreg_restore_host_state_vhe(host_ctxt);
+	if (kvm_vcpu_has_spe(vcpu))
+		__spe_restore_host_state_vhe(vcpu, host_ctxt);

 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
 		__fpsimd_save_fpexc32(vcpu);
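A condensed model of the VHE host save path, with stand-in stubs for the
barriers and a plain variable for PMSCR_EL2; the point it illustrates is that
the host's live session is stopped and synchronized before any buffer
register is touched:

	#include <stdint.h>
	#include <stdio.h>

	static uint64_t pmscr_el2 = 0x3;  /* pretend host profiling is on */

	static void isb_stub(void)     { puts("isb"); }
	static void drain_buffer(void) { puts("psb csync; dsb nsh; isb"); }

	/*
	 * Toy version of __spe_save_host_state_vhe(): stop the host session
	 * first, synchronize, then drain and snapshot the buffer registers.
	 */
	static uint64_t save_host_spe_vhe(void)
	{
		uint64_t saved = pmscr_el2;

		pmscr_el2 = 0;	/* profiling must be off before touching PMB* */
		isb_stub();
		drain_buffer();
		return saved;	/* restored (after another isb) on the way back */
	}

	int main(void)
	{
		uint64_t saved = save_host_spe_vhe();
		printf("saved PMSCR_EL2=%#llx, live=%#llx\n",
		       (unsigned long long)saved, (unsigned long long)pmscr_el2);
		return 0;
	}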
From patchwork Wed Aug 25 16:18:08 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 32/39] KVM: arm64: Save/restore PMSNEVFR_EL1 on VCPU put/load
Date: Wed, 25 Aug 2021 17:18:08 +0100
Message-Id: <20210825161815.266051-33-alexandru.elisei@arm.com>

FEAT_SPEv1p2 introduced a new register, PMSNEVFR_EL1. The SPE driver is not
using the register, so save the register to the guest context on vcpu_put()
and restore it on vcpu_load() since it will not be touched by the host, and
the value programmed by the guest doesn't affect the host.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_spe.h  |  6 ++++++
 arch/arm64/include/asm/sysreg.h   |  1 +
 arch/arm64/kvm/arm.c              |  2 ++
 arch/arm64/kvm/spe.c              | 29 +++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         |  1 +
 6 files changed, 40 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e704847a7645..66f0b999cb5f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -239,6 +239,7 @@ enum vcpu_sysreg {
 	/* Statistical Profiling Extension Registers.
*/ PMSCR_EL1, /* Statistical Profiling Control Register */ + PMSNEVFR_EL1, /* Sampling Inverted Event Filter Register */ PMSICR_EL1, /* Sampling Interval Counter Register */ PMSIRR_EL1, /* Sampling Interval Reload Register */ PMSFCR_EL1, /* Sampling Filter Control Register */ diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h index 934eedb0de46..6b8d4cf2cd37 100644 --- a/arch/arm64/include/asm/kvm_spe.h +++ b/arch/arm64/include/asm/kvm_spe.h @@ -28,6 +28,9 @@ int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu); void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val); u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg); +void kvm_spe_vcpu_load(struct kvm_vcpu *vcpu); +void kvm_spe_vcpu_put(struct kvm_vcpu *vcpu); + int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); @@ -45,6 +48,9 @@ static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -E static inline void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val) {} static inline u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) { return 0; } +static inline void kvm_spe_vcpu_load(struct kvm_vcpu *vcpu) {} +static inline void kvm_spe_vcpu_put(struct kvm_vcpu *vcpu) {} + static inline int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr) { diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index b2d691bc1049..cedab4494a71 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -921,6 +921,7 @@ #define ID_AA64DFR0_PMSVER_8_2 0x1 #define ID_AA64DFR0_PMSVER_8_3 0x2 +#define ID_AA64DFR0_PMSVER_8_7 0x3 #define ID_DFR0_PERFMON_SHIFT 24 diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 6af7ef26d2c1..cd64894b286e 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -466,6 +466,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) if (vcpu_has_ptrauth(vcpu)) vcpu_ptrauth_disable(vcpu); kvm_arch_vcpu_load_debug_state_flags(vcpu); + kvm_spe_vcpu_load(vcpu); if (!cpumask_empty(&vcpu->arch.supported_cpus) && !cpumask_test_cpu(smp_processor_id(), &vcpu->arch.supported_cpus)) @@ -474,6 +475,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + kvm_spe_vcpu_put(vcpu); kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c index f760ccd8306a..711736c49f63 100644 --- a/arch/arm64/kvm/spe.c +++ b/arch/arm64/kvm/spe.c @@ -67,6 +67,35 @@ u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) return __vcpu_sys_reg(vcpu, reg); } +static unsigned int kvm_spe_get_pmsver(void) +{ + u64 dfr0 = read_sysreg(id_aa64dfr0_el1); + + return cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT); +} + +void kvm_spe_vcpu_load(struct kvm_vcpu *vcpu) +{ + if (!kvm_vcpu_has_spe(vcpu)) + return; + + if (kvm_spe_get_pmsver() < ID_AA64DFR0_PMSVER_8_7) + return; + + write_sysreg_s(__vcpu_sys_reg(vcpu, PMSNEVFR_EL1), SYS_PMSNEVFR_EL1); +} + +void kvm_spe_vcpu_put(struct kvm_vcpu *vcpu) +{ + if (!kvm_vcpu_has_spe(vcpu)) + return; + + if (kvm_spe_get_pmsver() < ID_AA64DFR0_PMSVER_8_7) + return; + + __vcpu_sys_reg(vcpu, PMSNEVFR_EL1) = read_sysreg_s(SYS_PMSNEVFR_EL1); +} + static bool kvm_vcpu_supports_spe(struct kvm_vcpu *vcpu) { if (!kvm_supports_spe()) diff --git 
a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 843822be5695..064742cee425 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1562,6 +1562,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 },

 	{ SPE_SYS_REG(SYS_PMSCR_EL1), .reg = PMSCR_EL1 },
+	{ SPE_SYS_REG(SYS_PMSNEVFR_EL1), .reg = PMSNEVFR_EL1 },
 	{ SPE_SYS_REG(SYS_PMSICR_EL1), .reg = PMSICR_EL1 },
 	{ SPE_SYS_REG(SYS_PMSIRR_EL1), .reg = PMSIRR_EL1 },
 	{ SPE_SYS_REG(SYS_PMSFCR_EL1), .reg = PMSFCR_EL1 },
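A minimal sketch of the lazy put/load scheme above, with a plain variable
standing in for PMSNEVFR_EL1 and a hardcoded PMSVer value; this is the shape
of the logic, not the kernel code:

	#include <stdint.h>
	#include <stdio.h>

	#define PMSVER_8_7 0x3	/* ID_AA64DFR0_EL1.PMSVer for FEAT_SPEv1p2 */

	static uint64_t hw_pmsnevfr;		/* stands in for the sysreg */
	static unsigned int host_pmsver = 0x3;	/* pretend SPEv1p2 is present */

	struct vcpu { int has_spe; uint64_t shadow_pmsnevfr; };

	/* Mirrors kvm_spe_vcpu_load(): install the guest value on load. */
	static void spe_vcpu_load(struct vcpu *v)
	{
		if (!v->has_spe || host_pmsver < PMSVER_8_7)
			return;
		hw_pmsnevfr = v->shadow_pmsnevfr;
	}

	/* Mirrors kvm_spe_vcpu_put(): snapshot the register back on put. */
	static void spe_vcpu_put(struct vcpu *v)
	{
		if (!v->has_spe || host_pmsver < PMSVER_8_7)
			return;
		v->shadow_pmsnevfr = hw_pmsnevfr;
	}

	int main(void)
	{
		struct vcpu v = { .has_spe = 1, .shadow_pmsnevfr = 0xff };

		spe_vcpu_load(&v);
		hw_pmsnevfr |= 1;	/* guest writes the register while running */
		spe_vcpu_put(&v);
		printf("shadow=%#llx\n", (unsigned long long)v.shadow_pmsnevfr);
		return 0;
	}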
From patchwork Wed Aug 25 16:18:09 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 33/39] KVM: arm64: Allow guest to use physical timestamps if perfmon_capable()
Date: Wed, 25 Aug 2021 17:18:09 +0100
Message-Id: <20210825161815.266051-34-alexandru.elisei@arm.com>

The SPE driver allows userspace to use physical timestamps for records only
if the process is perfmon_capable(). Do the same for a virtual machine with
the SPE feature.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/include/asm/kvm_spe.h  |  7 +++++++
 arch/arm64/kvm/hyp/nvhe/spe-sr.c  |  2 +-
 arch/arm64/kvm/hyp/vhe/spe-sr.c   |  2 +-
 arch/arm64/kvm/spe.c              | 14 ++++++++++++++
 5 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 66f0b999cb5f..cc46f1406196 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -157,6 +157,8 @@ struct kvm_arch {

 	/* Memory Tagging Extension enabled for the guest */
 	bool mte_enabled;
+
+	struct kvm_spe spe;
 };

 struct kvm_vcpu_fault_info {
diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index 934eedb0de46..272f1eec64f2 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -21,6 +21,10 @@ struct kvm_vcpu_spe {
 	int irq_num;		/* Buffer management interrut number */
 };

+struct kvm_spe {
+	bool perfmon_capable;	/* Is the VM perfmon_capable()?
*/ +}; + void kvm_spe_init_supported_cpus(void); void kvm_spe_vm_init(struct kvm *kvm); int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu); @@ -41,6 +45,9 @@ int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); struct kvm_vcpu_spe { }; +struct kvm_spe { +}; + static inline void kvm_spe_init_supported_cpus(void) {} static inline void kvm_spe_vm_init(struct kvm *kvm) {} static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -ENOEXEC; } diff --git a/arch/arm64/kvm/hyp/nvhe/spe-sr.c b/arch/arm64/kvm/hyp/nvhe/spe-sr.c index 46e47c9fd08f..4f6579daddb5 100644 --- a/arch/arm64/kvm/hyp/nvhe/spe-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/spe-sr.c @@ -83,5 +83,5 @@ void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu, write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBSR_EL1), SYS_PMBSR_EL1); write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1); write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMSCR_EL1), SYS_PMSCR_EL1); - write_sysreg_el2(0, SYS_PMSCR); + write_sysreg_el2(ctxt_sys_reg(guest_ctxt, PMSCR_EL2), SYS_PMSCR); } diff --git a/arch/arm64/kvm/hyp/vhe/spe-sr.c b/arch/arm64/kvm/hyp/vhe/spe-sr.c index 00eab9e2ec60..f557ac64a1cc 100644 --- a/arch/arm64/kvm/hyp/vhe/spe-sr.c +++ b/arch/arm64/kvm/hyp/vhe/spe-sr.c @@ -21,7 +21,7 @@ void __spe_save_host_state_vhe(struct kvm_vcpu *vcpu, /* Disable profiling while the SPE context is being switched. */ pmscr_el2 = read_sysreg_el2(SYS_PMSCR); - write_sysreg_el2(0, SYS_PMSCR); + write_sysreg_el2(__vcpu_sys_reg(vcpu, PMSCR_EL2), SYS_PMSCR); isb(); pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1); diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c index 711736c49f63..054bb16bbd79 100644 --- a/arch/arm64/kvm/spe.c +++ b/arch/arm64/kvm/spe.c @@ -3,6 +3,7 @@ * Copyright (C) 2021 - ARM Ltd */ +#include #include #include #include @@ -29,6 +30,16 @@ void kvm_spe_vm_init(struct kvm *kvm) { /* Set supported_cpus if it isn't already initialized. */ kvm_spe_init_supported_cpus(); + + /* + * Allow the guest to use the physical timer for timestamps only if the + * VMM is perfmon_capable(), similar to what the SPE driver allows. + * + * CAP_PERFMON can be changed during the lifetime of the VM, so record + * its value when the VM is created to avoid situations where only some + * VCPUs allow physical timer timestamps, while others don't. 
+	 */
+	kvm->arch.spe.perfmon_capable = perfmon_capable();
 }

 static int kvm_spe_check_supported_cpus(struct kvm_vcpu *vcpu)
@@ -54,6 +65,9 @@ int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (!vcpu->arch.spe.initialized)
 		return -EPERM;

+	if (vcpu->kvm->arch.spe.perfmon_capable)
+		__vcpu_sys_reg(vcpu, PMSCR_EL2) = BIT(SYS_PMSCR_EL1_PCT_SHIFT);
+
 	return 0;
 }
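The CAP_PERFMON decision latched at VM creation can be modelled like this;
perfmon_capable_stub() and the field names are stand-ins for the kernel's
helpers, so read it as a sketch of the gate, nothing more:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PMSCR_PCT (1ULL << 6)	/* SYS_PMSCR_EL1_PCT_SHIFT above */

	/* Stand-in for perfmon_capable(); the kernel checks CAP_PERFMON. */
	static bool perfmon_capable_stub(void) { return true; }

	struct kvm { bool spe_perfmon_capable; };

	/* Latched once at VM creation, mirroring kvm_spe_vm_init(). */
	static void spe_vm_init(struct kvm *kvm)
	{
		kvm->spe_perfmon_capable = perfmon_capable_stub();
	}

	/*
	 * Mirrors kvm_spe_vcpu_first_run_init(): only a capable VMM gets
	 * physical-timestamp records via the PCT bit.
	 */
	static uint64_t first_run_pmscr_el2(const struct kvm *kvm)
	{
		return kvm->spe_perfmon_capable ? PMSCR_PCT : 0;
	}

	int main(void)
	{
		struct kvm kvm;

		spe_vm_init(&kvm);
		printf("PMSCR_EL2 seed: %#llx\n",
		       (unsigned long long)first_run_pmscr_el2(&kvm));
		return 0;
	}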
From patchwork Wed Aug 25 16:18:10 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 34/39] KVM: arm64: Emulate SPE buffer management interrupt
Date: Wed, 25 Aug 2021 17:18:10 +0100
Message-Id: <20210825161815.266051-35-alexandru.elisei@arm.com>

A profiling buffer management interrupt is asserted when the buffer fills,
on a fault or on an external abort. The service bit, PMBSR_EL1.S, is set as
long as SPE asserts this interrupt. The interrupt can also be asserted
following a direct write to PMBSR_EL1 that sets the bit. The SPE hardware
stops asserting the interrupt only when the service bit is cleared.

KVM emulates the interrupt by reading the value of the service bit on each
guest exit to determine if the SPE hardware asserted the interrupt (for
example, if the buffer was full). Writes to the buffer registers are
trapped, to determine when the interrupt should be cleared or when the
guest wants to explicitly assert the interrupt by setting the service bit.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_spe.h |  4 ++
 arch/arm64/kvm/arm.c             |  3 ++
 arch/arm64/kvm/hyp/nvhe/spe-sr.c | 28 +++++++++++--
 arch/arm64/kvm/hyp/vhe/spe-sr.c  | 17 ++++--
 arch/arm64/kvm/spe.c             | 72 ++++++++++++++++++++++++++++++++
 5 files changed, 117 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index 272f1eec64f2..d7d7b9e243de 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -19,6 +19,8 @@ static __always_inline bool kvm_supports_spe(void)
 struct kvm_vcpu_spe {
 	bool initialized;	/* SPE initialized for the VCPU */
 	int irq_num;		/* Buffer management interrut number */
+	bool irq_level;		/* 'true' if the interrupt is asserted at the VGIC */
+	bool hwirq_level;	/* 'true' if the SPE hardware is asserting the interrupt */
 };

 struct kvm_spe {
@@ -28,6 +30,7 @@ struct kvm_spe {
 void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
 int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu);
+void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu);

 void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val);
 u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg);
@@ -51,6 +54,7 @@ struct kvm_spe {
 static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
 static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
+static inline void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val) {}
 static inline u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) { return 0; }
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index cd64894b286e..ec449bc5f811 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -949,6 +949,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
		 */
		kvm_pmu_sync_hwstate(vcpu);

+		if (kvm_supports_spe() && kvm_vcpu_has_spe(vcpu))
+			kvm_spe_sync_hwstate(vcpu);
+
		/*
		 * Sync the vgic state before syncing the timer state because
		 * the timer
code needs to know if the virtual timer diff --git a/arch/arm64/kvm/hyp/nvhe/spe-sr.c b/arch/arm64/kvm/hyp/nvhe/spe-sr.c index 4f6579daddb5..b74131486a75 100644 --- a/arch/arm64/kvm/hyp/nvhe/spe-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/spe-sr.c @@ -47,6 +47,8 @@ void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu, void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *guest_ctxt) { + u64 pmbsr; + if (read_sysreg_s(SYS_PMBLIMITR_EL1) & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) { psb_csync(); dsb(nsh); @@ -55,7 +57,22 @@ void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu, } ctxt_sys_reg(guest_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1); - ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1); + /* + * We need to differentiate between the hardware asserting the interrupt + * and the guest setting the service bit as a result of a direct + * register write, hence the extra field in the spe struct. + * + * The PMBSR_EL1 register is not directly accessed by the guest, KVM + * needs to update the in-memory copy when the hardware asserts the + * interrupt as that's the only case when KVM will show the guest a + * value which is different from what the guest last wrote to the + * register. + */ + pmbsr = read_sysreg_s(SYS_PMBSR_EL1); + if (pmbsr & BIT(SYS_PMBSR_EL1_S_SHIFT)) { + ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = pmbsr; + vcpu->arch.spe.hwirq_level = true; + } /* PMBLIMITR_EL1 is updated only on a trapped write. */ ctxt_sys_reg(guest_ctxt, PMSCR_EL1) = read_sysreg_s(SYS_PMSCR_EL1); @@ -80,8 +97,13 @@ void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu, __spe_restore_common_state(guest_ctxt); write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1); - write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBSR_EL1), SYS_PMBSR_EL1); - write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1); + /* The buffer management interrupt is virtual. */ + write_sysreg_s(0, SYS_PMBSR_EL1); + /* The buffer is disabled when the interrupt is asserted. */ + if (vcpu->arch.spe.irq_level) + write_sysreg_s(0, SYS_PMBLIMITR_EL1); + else + write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1); write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMSCR_EL1), SYS_PMSCR_EL1); write_sysreg_el2(ctxt_sys_reg(guest_ctxt, PMSCR_EL2), SYS_PMSCR); } diff --git a/arch/arm64/kvm/hyp/vhe/spe-sr.c b/arch/arm64/kvm/hyp/vhe/spe-sr.c index f557ac64a1cc..ea4b3b69bb32 100644 --- a/arch/arm64/kvm/hyp/vhe/spe-sr.c +++ b/arch/arm64/kvm/hyp/vhe/spe-sr.c @@ -48,7 +48,7 @@ NOKPROBE_SYMBOL(__spe_save_host_state_vhe); void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu, struct kvm_cpu_context *guest_ctxt) { - u64 pmblimitr; + u64 pmblimitr, pmbsr; /* * We're at EL2 and the buffer owning regime is EL1, which means that @@ -66,7 +66,11 @@ void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu, } ctxt_sys_reg(guest_ctxt, PMBPTR_EL1) = read_sysreg_s(SYS_PMBPTR_EL1); - ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = read_sysreg_s(SYS_PMBSR_EL1); + pmbsr = read_sysreg_s(SYS_PMBSR_EL1); + if (pmbsr & BIT(SYS_PMBSR_EL1_S_SHIFT)) { + ctxt_sys_reg(guest_ctxt, PMBSR_EL1) = pmbsr; + vcpu->arch.spe.hwirq_level = true; + } /* PMBLIMITR_EL1 is updated only on a trapped write. 
*/ ctxt_sys_reg(guest_ctxt, PMSCR_EL1) = read_sysreg_el1(SYS_PMSCR); @@ -120,8 +124,13 @@ void __spe_restore_guest_state_vhe(struct kvm_vcpu *vcpu, */ write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1); - write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBSR_EL1), SYS_PMBSR_EL1); - write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1); + /* The buffer management interrupt is virtual. */ + write_sysreg_s(0, SYS_PMBSR_EL1); + /* The buffer is disabled when the interrupt is asserted. */ + if (vcpu->arch.spe.irq_level) + write_sysreg_s(0, SYS_PMBLIMITR_EL1); + else + write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBLIMITR_EL1), SYS_PMBLIMITR_EL1); write_sysreg_el1(ctxt_sys_reg(guest_ctxt, PMSCR_EL1), SYS_PMSCR); /* PMSCR_EL2 has been cleared when saving the host state. */ } diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c index 054bb16bbd79..5b69501dc3da 100644 --- a/arch/arm64/kvm/spe.c +++ b/arch/arm64/kvm/spe.c @@ -71,9 +71,81 @@ int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) return 0; } +static void kvm_spe_update_irq(struct kvm_vcpu *vcpu, bool level) +{ + struct kvm_vcpu_spe *spe = &vcpu->arch.spe; + int ret; + + if (spe->irq_level == level) + return; + + spe->irq_level = level; + ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, spe->irq_num, + level, spe); + WARN_ON(ret); +} + +static __printf(2, 3) +void print_buf_warn(struct kvm_vcpu *vcpu, char *fmt, ...) +{ + va_list va; + + va_start(va, fmt); + kvm_warn_ratelimited("%pV [PMBSR=0x%016llx, PMBPTR=0x%016llx, PMBLIMITR=0x%016llx]\n", + &(struct va_format){ fmt, &va }, + __vcpu_sys_reg(vcpu, PMBSR_EL1), + __vcpu_sys_reg(vcpu, PMBPTR_EL1), + __vcpu_sys_reg(vcpu, PMBLIMITR_EL1)); + va_end(va); +} + +static void kvm_spe_inject_ext_abt(struct kvm_vcpu *vcpu) +{ + __vcpu_sys_reg(vcpu, PMBSR_EL1) = BIT(SYS_PMBSR_EL1_EA_SHIFT) | + BIT(SYS_PMBSR_EL1_S_SHIFT); + __vcpu_sys_reg(vcpu, PMBSR_EL1) |= SYS_PMBSR_EL1_EC_FAULT_S1; + /* Synchronous External Abort, not on translation table walk. */ + __vcpu_sys_reg(vcpu, PMBSR_EL1) |= 0x10 << SYS_PMBSR_EL1_FAULT_FSC_SHIFT; +} + +void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu) +{ + struct kvm_vcpu_spe *spe = &vcpu->arch.spe; + u64 pmbsr, pmbsr_ec; + + if (!spe->hwirq_level) + return; + spe->hwirq_level = false; + + pmbsr = __vcpu_sys_reg(vcpu, PMBSR_EL1); + pmbsr_ec = pmbsr & (SYS_PMBSR_EL1_EC_MASK << SYS_PMBSR_EL1_EC_SHIFT); + + switch (pmbsr_ec) { + case SYS_PMBSR_EL1_EC_FAULT_S2: + print_buf_warn(vcpu, "SPE stage 2 data abort"); + kvm_spe_inject_ext_abt(vcpu); + break; + case SYS_PMBSR_EL1_EC_FAULT_S1: + case SYS_PMBSR_EL1_EC_BUF: + /* + * These two exception syndromes are entirely up to the guest to + * figure out, leave PMBSR_EL1 unchanged. 
+		 */
+		break;
+	default:
+		print_buf_warn(vcpu, "SPE unknown buffer syndrome");
+		kvm_spe_inject_ext_abt(vcpu);
+	}
+
+	kvm_spe_update_irq(vcpu, true);
+}
+
 void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val)
 {
 	__vcpu_sys_reg(vcpu, reg) = val;
+
+	if (reg == PMBSR_EL1)
+		kvm_spe_update_irq(vcpu, val & BIT(SYS_PMBSR_EL1_S_SHIFT));
 }

 u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg)
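The exit-time emulation reduces to a small state machine; this standalone
sketch mirrors the decision logic of kvm_spe_sync_hwstate() with simplified
syndrome values (the enum names are invented for the model, not kernel
identifiers):

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified syndrome classes standing in for PMBSR_EL1.EC. */
	enum pmbsr_ec { EC_BUF, EC_FAULT_S1, EC_FAULT_S2, EC_OTHER };

	struct vcpu_spe { bool hwirq_level, irq_level; };

	/*
	 * Toy version of kvm_spe_sync_hwstate(): turn a hardware service
	 * bit into a virtual interrupt, injecting an external abort for
	 * syndromes the guest cannot be expected to handle.
	 */
	static void spe_sync_hwstate(struct vcpu_spe *spe, enum pmbsr_ec ec)
	{
		if (!spe->hwirq_level)
			return;
		spe->hwirq_level = false;

		switch (ec) {
		case EC_FAULT_S1:
		case EC_BUF:
			/* Guest-visible syndromes: PMBSR_EL1 is left as-is. */
			break;
		case EC_FAULT_S2:
		case EC_OTHER:
			puts("inject external abort into PMBSR_EL1");
			break;
		}
		spe->irq_level = true;	/* raise the virtual buffer interrupt */
	}

	int main(void)
	{
		struct vcpu_spe spe = { .hwirq_level = true };

		spe_sync_hwstate(&spe, EC_FAULT_S2);
		printf("virtual irq level: %d\n", spe.irq_level);
		return 0;
	}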
From patchwork Wed Aug 25 16:18:11 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 35/39] KVM: arm64: Add a userspace API to stop a VCPU profiling
Date: Wed, 25 Aug 2021 17:18:11 +0100
Message-Id: <20210825161815.266051-36-alexandru.elisei@arm.com>

Add the KVM_ARM_VCPU_SPE_CTRL(KVM_ARM_VCPU_SPE_STOP) VCPU attribute to
allow userspace to request that KVM disables profiling for that VCPU. The
ioctl does nothing yet.

Signed-off-by: Alexandru Elisei
---
 Documentation/virt/kvm/devices/vcpu.rst | 36 +++++++++++++++++++++++++
 arch/arm64/include/uapi/asm/kvm.h       |  4 +++
 arch/arm64/kvm/spe.c                    | 23 +++++++++++++---
 3 files changed, 60 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index c275c320e500..b4e38261b00f 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -201,3 +201,39 @@ Returns:

 Request initialization of the Statistical Profiling Extension for this VCPU.
 Must be done after initializing the in-kernel irqchip and after setting the
 Profiling Buffer management interrupt number for the VCPU.
+
+4.3 ATTRIBUTE: KVM_ARM_VCPU_SPE_STOP
+------------------------------------
+
+:Parameters: in kvm_device_attr.addr the address of the flag that specifies
+             what KVM should do when the guest enables profiling
+
+The flag must be exactly one of:
+
+- KVM_ARM_VCPU_SPE_STOP_TRAP: trap all register accesses and ignore the guest
+  trying to enable profiling.
+- KVM_ARM_VCPU_SPE_STOP_EXIT: exit to userspace when the guest tries to enable
+  profiling.
+- KVM_ARM_VCPU_SPE_RESUME: resume profiling, if it was previously stopped using
+  this attribute.
+
+If KVM detects that a vcpu is trying to run with SPE enabled when
+KVM_ARM_VCPU_SPE_STOP_EXIT is set, KVM_RUN will return without entering the
+guest with kvm_run.exit_reason equal to KVM_EXIT_FAIL_ENTRY, and the
+fail_entry struct will be zeroed.
+
+Returns:
+
+	 ======= ============================================
+	 -EAGAIN SPE not initialized
+	 -EFAULT Error accessing the flag
+	 -EINVAL Invalid flag
+	 -ENXIO  SPE not supported or not properly configured
+	 ======= ============================================
+
+Request that KVM disables SPE for the given vcpu. This can be useful for
+migration, which relies on tracking dirty pages by write-protecting memory,
+but breaks SPE in the guest as KVM does not handle buffer stage 2 faults.
+
+The attribute must be set after SPE has been initialized successfully. It
+can be set multiple times, with the latest value overwriting the previous
+one.
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d4c0b53a5fb2..75a5113f610e 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -371,6 +371,10 @@ struct kvm_arm_copy_mte_tags {
 #define KVM_ARM_VCPU_SPE_CTRL		3
 #define   KVM_ARM_VCPU_SPE_IRQ		0
 #define   KVM_ARM_VCPU_SPE_INIT		1
+#define   KVM_ARM_VCPU_SPE_STOP		2
+#define     KVM_ARM_VCPU_SPE_STOP_TRAP	(1 << 0)
+#define     KVM_ARM_VCPU_SPE_STOP_EXIT	(1 << 1)
+#define     KVM_ARM_VCPU_SPE_RESUME	(1 << 2)
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_VCPU2_SHIFT		28
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 5b69501dc3da..2630e777fe1d 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -220,14 +220,14 @@ int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	if (!kvm_vcpu_supports_spe(vcpu))
 		return -ENXIO;
 
-	if (vcpu->arch.spe.initialized)
-		return -EBUSY;
-
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_SPE_IRQ: {
 		int __user *uaddr = (int __user *)(long)attr->addr;
 		int irq;
 
+		if (vcpu->arch.spe.initialized)
+			return -EBUSY;
+
 		if (vcpu->arch.spe.irq_num)
 			return -EBUSY;
 
@@ -248,11 +248,27 @@ int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		if (!vgic_initialized(vcpu->kvm))
 			return -ENXIO;
 
+		if (vcpu->arch.spe.initialized)
+			return -EBUSY;
+
 		if (kvm_vgic_set_owner(vcpu, vcpu->arch.spe.irq_num, &vcpu->arch.spe))
 			return -ENXIO;
 
 		vcpu->arch.spe.initialized = true;
 		return 0;
+	case KVM_ARM_VCPU_SPE_STOP: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int flags;
+
+		if (!vcpu->arch.spe.initialized)
+			return -EAGAIN;
+
+		if (get_user(flags, uaddr))
+			return -EFAULT;
+
+		if (!flags)
+			return -EINVAL;
+	}
 	}
 
 	return -ENXIO;
@@ -290,6 +306,7 @@ int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	switch(attr->attr) {
 	case KVM_ARM_VCPU_SPE_IRQ:
 	case KVM_ARM_VCPU_SPE_INIT:
+	case KVM_ARM_VCPU_SPE_STOP:
 		return 0;
 	}

From patchwork Wed Aug 25 16:18:12 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 36/39] KVM: arm64: Implement userspace API to stop a VCPU profiling
Date: Wed, 25 Aug 2021 17:18:12 +0100
Message-Id: <20210825161815.266051-37-alexandru.elisei@arm.com>
When userspace requests, via the KVM_ARM_VCPU_SPE_STOP attribute, that a VCPU
is no longer allowed to profile, keep all of the VCPU's SPE register state in
memory, trap accesses to all the SPE registers, not just the buffer registers,
and do not copy any of this shadow state onto the hardware.
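The arm.c hunk below makes KVM_RUN fail the entry with exit_reason
KVM_EXIT_FAIL_ENTRY and hardware_entry_failure_reason KVM_EXIT_FAIL_ENTRY_SPE
when a stopped guest tries to profile. A VMM run loop could recognize that
situation roughly as follows; vcpu_run_once() is an invented name and the
sketch assumes this series' uapi definitions are available from linux/kvm.h:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* One KVM_RUN iteration; run is the vcpu's mmap'ed struct kvm_run. */
    static int vcpu_run_once(int vcpu_fd, struct kvm_run *run)
    {
            int ret = ioctl(vcpu_fd, KVM_RUN, 0);

            if (ret < 0 && errno == EAGAIN &&
                run->exit_reason == KVM_EXIT_FAIL_ENTRY &&
                run->fail_entry.hardware_entry_failure_reason ==
                                        KVM_EXIT_FAIL_ENTRY_SPE) {
                    /*
                     * The guest tried to enable profiling while it was
                     * stopped with KVM_ARM_VCPU_SPE_STOP_EXIT; keep the vcpu
                     * parked until profiling is re-allowed with
                     * KVM_ARM_VCPU_SPE_RESUME.
                     */
                    return 1;
            }

            return ret;
    }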
Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_hyp.h   |  2 +
 arch/arm64/include/asm/kvm_spe.h   | 14 +++++++
 arch/arm64/include/uapi/asm/kvm.h  |  3 ++
 arch/arm64/kvm/arm.c               |  9 ++++
 arch/arm64/kvm/debug.c             | 13 ++++--
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  4 +-
 arch/arm64/kvm/hyp/nvhe/spe-sr.c   | 24 +++++++++++
 arch/arm64/kvm/hyp/vhe/spe-sr.c    | 56 +++++++++++++++++++++++++
 arch/arm64/kvm/spe.c               | 67 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c          |  2 +-
 10 files changed, 188 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 03bc51049996..ce365427b483 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -86,8 +86,10 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu);
 #ifdef __KVM_NVHE_HYPERVISOR__
 void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu,
 				    struct kvm_cpu_context *host_ctxt);
+void __debug_save_spe(u64 *pmscr_el1);
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu,
 				       struct kvm_cpu_context *host_ctxt);
+void __debug_restore_spe(u64 pmscr_el1);
 #ifdef CONFIG_KVM_ARM_SPE
 void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu,
 				struct kvm_cpu_context *host_ctxt);
diff --git a/arch/arm64/include/asm/kvm_spe.h b/arch/arm64/include/asm/kvm_spe.h
index d7d7b9e243de..f51561e3b43f 100644
--- a/arch/arm64/include/asm/kvm_spe.h
+++ b/arch/arm64/include/asm/kvm_spe.h
@@ -16,13 +16,23 @@ static __always_inline bool kvm_supports_spe(void)
 	return static_branch_likely(&kvm_spe_available);
 }
 
+/* Guest profiling disabled by the user. */
+#define KVM_VCPU_SPE_STOP_USER		(1 << 0)
+/* Stop profiling and exit to userspace when guest starts profiling. */
+#define KVM_VCPU_SPE_STOP_USER_EXIT	(1 << 1)
+
 struct kvm_vcpu_spe {
 	bool initialized;	/* SPE initialized for the VCPU */
 	int irq_num;		/* Buffer management interrupt number */
 	bool irq_level;		/* 'true' if the interrupt is asserted at the VGIC */
 	bool hwirq_level;	/* 'true' if the SPE hardware is asserting the interrupt */
+	u64 flags;
 };
 
+#define kvm_spe_profiling_stopped(vcpu)				\
+	(((vcpu)->arch.spe.flags & KVM_VCPU_SPE_STOP_USER) ||	\
+	 ((vcpu)->arch.spe.flags & KVM_VCPU_SPE_STOP_USER_EXIT))
+
 struct kvm_spe {
 	bool perfmon_capable;	/* Is the VM perfmon_capable()?
 */
 };
 
@@ -31,6 +41,7 @@ void kvm_spe_init_supported_cpus(void);
 void kvm_spe_vm_init(struct kvm *kvm);
 int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu);
 void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu);
+bool kvm_spe_exit_to_user(struct kvm_vcpu *vcpu);
 
 void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val);
 u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg);
@@ -48,6 +59,8 @@ int kvm_spe_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
 struct kvm_vcpu_spe {
 };
 
+#define kvm_spe_profiling_stopped(vcpu)	(false)
+
 struct kvm_spe {
 };
 
@@ -55,6 +68,7 @@ static inline void kvm_spe_init_supported_cpus(void) {}
 static inline void kvm_spe_vm_init(struct kvm *kvm) {}
 static inline int kvm_spe_vcpu_first_run_init(struct kvm_vcpu *vcpu) { return -ENOEXEC; }
 static inline void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_spe_exit_to_user(struct kvm_vcpu *vcpu) { return false; }
 
 static inline void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val) {}
 static inline u64 kvm_spe_read_sysreg(struct kvm_vcpu *vcpu, int reg) { return 0; }
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 75a5113f610e..63599ee39a7b 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -376,6 +376,9 @@ struct kvm_arm_copy_mte_tags {
 #define   KVM_ARM_VCPU_SPE_STOP_EXIT	(1 << 1)
 #define   KVM_ARM_VCPU_SPE_RESUME	(1 << 2)
 
+/* run->fail_entry.hardware_entry_failure_reason codes. */
+#define KVM_EXIT_FAIL_ENTRY_SPE		(1 << 0)
+
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_VCPU2_SHIFT		28
 #define KVM_ARM_IRQ_VCPU2_MASK		0xf
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ec449bc5f811..b7aae25bb9da 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -873,6 +873,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 			continue;
 		}
 
+		if (unlikely(kvm_spe_exit_to_user(vcpu))) {
+			run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+			run->fail_entry.hardware_entry_failure_reason
+				= KVM_EXIT_FAIL_ENTRY_SPE;
+			ret = -EAGAIN;
+			preempt_enable();
+			continue;
+		}
+
 		kvm_pmu_flush_hwstate(vcpu);
 
 		local_irq_disable();
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 6e5fc1887215..6a4277a23bbb 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -96,11 +96,18 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	if (kvm_supports_spe() && kvm_vcpu_has_spe(vcpu)) {
 		/*
 		 * Use EL1&0 for the profiling buffer translation regime and
-		 * trap accesses to the buffer control registers; leave
-		 * MDCR_EL2.TPMS unset and do not trap accesses to the profiling
-		 * control registers.
+		 * trap accesses to the buffer control registers; if profiling
+		 * is stopped, also set MDCR_EL2.TPMS to trap accesses to the
+		 * rest of the registers, otherwise leave it clear.
+		 *
+		 * Leaving MDCR_EL2.E2PB unset, like we do when the VCPU does
+		 * not have SPE, means that PMBIDR_EL1.P (which KVM does not
+		 * trap) will be set and the guest will detect SPE as being
+		 * unavailable.
		 */
		vcpu->arch.mdcr_el2 |= MDCR_EL2_E2PB_EL1_TRAP << MDCR_EL2_E2PB_SHIFT;
+		if (kvm_spe_profiling_stopped(vcpu))
+			vcpu->arch.mdcr_el2 |= MDCR_EL2_TPMS;
 	} else {
 		/*
 		 * Trap accesses to the profiling control registers; leave
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 1622615954b2..944972de0944 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -14,7 +14,7 @@
 #include
 #include
 
-static void __debug_save_spe(u64 *pmscr_el1)
+void __debug_save_spe(u64 *pmscr_el1)
 {
 	u64 reg;
 
@@ -40,7 +40,7 @@ static void __debug_save_spe(u64 *pmscr_el1)
 	dsb(nsh);
 }
 
-static void __debug_restore_spe(u64 pmscr_el1)
+void __debug_restore_spe(u64 pmscr_el1)
 {
 	if (!pmscr_el1)
 		return;
diff --git a/arch/arm64/kvm/hyp/nvhe/spe-sr.c b/arch/arm64/kvm/hyp/nvhe/spe-sr.c
index b74131486a75..8ed03aa4f965 100644
--- a/arch/arm64/kvm/hyp/nvhe/spe-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/spe-sr.c
@@ -23,6 +23,11 @@ void __spe_save_host_state_nvhe(struct kvm_vcpu *vcpu,
 {
 	u64 pmblimitr;
 
+	if (kvm_spe_profiling_stopped(vcpu)) {
+		__debug_save_spe(__ctxt_sys_reg(host_ctxt, PMSCR_EL1));
+		return;
+	}
+
 	pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1);
 	if (pmblimitr & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
 		psb_csync();
@@ -49,6 +54,13 @@ void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu,
 {
 	u64 pmbsr;
 
+	/*
+	 * Profiling is stopped and all register accesses are trapped;
+	 * nothing to save here.
+	 */
+	if (kvm_spe_profiling_stopped(vcpu))
+		return;
+
 	if (read_sysreg_s(SYS_PMBLIMITR_EL1) & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)) {
 		psb_csync();
 		dsb(nsh);
@@ -82,6 +94,11 @@ void __spe_save_guest_state_nvhe(struct kvm_vcpu *vcpu,
 void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu,
 				   struct kvm_cpu_context *host_ctxt)
 {
+	if (kvm_spe_profiling_stopped(vcpu)) {
+		__debug_restore_spe(ctxt_sys_reg(host_ctxt, PMSCR_EL1));
+		return;
+	}
+
 	__spe_restore_common_state(host_ctxt);
 
 	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
@@ -94,6 +111,13 @@ void __spe_restore_host_state_nvhe(struct kvm_vcpu *vcpu,
 void __spe_restore_guest_state_nvhe(struct kvm_vcpu *vcpu,
 				    struct kvm_cpu_context *guest_ctxt)
 {
+	/*
+	 * Profiling is stopped and all register accesses are trapped;
+	 * nothing to restore here.
+	 */
+	if (kvm_spe_profiling_stopped(vcpu))
+		return;
+
 	__spe_restore_common_state(guest_ctxt);
 
 	write_sysreg_s(ctxt_sys_reg(guest_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
diff --git a/arch/arm64/kvm/hyp/vhe/spe-sr.c b/arch/arm64/kvm/hyp/vhe/spe-sr.c
index ea4b3b69bb32..024a4c0618cc 100644
--- a/arch/arm64/kvm/hyp/vhe/spe-sr.c
+++ b/arch/arm64/kvm/hyp/vhe/spe-sr.c
@@ -10,6 +10,34 @@
 
 #include
 
+static void __spe_save_host_buffer(u64 *pmscr_el2)
+{
+	u64 pmblimitr;
+
+	/* Disable guest profiling. */
+	write_sysreg_el1(0, SYS_PMSCR);
+
+	pmblimitr = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (!(pmblimitr & BIT(SYS_PMBLIMITR_EL1_E_SHIFT))) {
+		*pmscr_el2 = 0;
+		return;
+	}
+
+	*pmscr_el2 = read_sysreg_el2(SYS_PMSCR);
+
+	/* Disable profiling at EL2 so we can drain the buffer. */
+	write_sysreg_el2(0, SYS_PMSCR);
+	isb();
+
+	/*
+	 * We're going to change the buffer owning exception level when we
+	 * activate traps; drain the buffer now.
+	 */
+	psb_csync();
+	dsb(nsh);
+}
+NOKPROBE_SYMBOL(__spe_save_host_buffer);
+
 /*
  * Disable host profiling, drain the buffer and save the host SPE context.
  * Extra care must be taken because profiling might be in progress.
@@ -19,6 +47,11 @@ void __spe_save_host_state_vhe(struct kvm_vcpu *vcpu,
 {
 	u64 pmblimitr, pmscr_el2;
 
+	if (kvm_spe_profiling_stopped(vcpu)) {
+		__spe_save_host_buffer(__ctxt_sys_reg(host_ctxt, PMSCR_EL2));
+		return;
+	}
+
 	/* Disable profiling while the SPE context is being switched. */
 	pmscr_el2 = read_sysreg_el2(SYS_PMSCR);
 	write_sysreg_el2(__vcpu_sys_reg(vcpu, PMSCR_EL2), SYS_PMSCR);
@@ -50,6 +83,9 @@ void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu,
 {
 	u64 pmblimitr, pmbsr;
 
+	if (kvm_spe_profiling_stopped(vcpu))
+		return;
+
 	/*
 	 * We're at EL2 and the buffer owning regime is EL1, which means that
 	 * profiling is disabled. After we disable traps and restore the host's
@@ -78,6 +114,18 @@ void __spe_save_guest_state_vhe(struct kvm_vcpu *vcpu,
 }
 NOKPROBE_SYMBOL(__spe_save_guest_state_vhe);
 
+static void __spe_restore_host_buffer(u64 pmscr_el2)
+{
+	if (!pmscr_el2)
+		return;
+
+	/* Synchronize MDCR_EL2 write. */
+	isb();
+
+	write_sysreg_el2(pmscr_el2, SYS_PMSCR);
+}
+NOKPROBE_SYMBOL(__spe_restore_host_buffer);
+
 /*
  * Restore the host SPE context. Special care must be taken because we're
  * potentially resuming a profiling session which was stopped when we saved the
@@ -86,6 +134,11 @@ NOKPROBE_SYMBOL(__spe_save_guest_state_vhe);
 void __spe_restore_host_state_vhe(struct kvm_vcpu *vcpu,
 				  struct kvm_cpu_context *host_ctxt)
 {
+	if (kvm_spe_profiling_stopped(vcpu)) {
+		__spe_restore_host_buffer(ctxt_sys_reg(host_ctxt, PMSCR_EL2));
+		return;
+	}
+
 	__spe_restore_common_state(host_ctxt);
 
 	write_sysreg_s(ctxt_sys_reg(host_ctxt, PMBPTR_EL1), SYS_PMBPTR_EL1);
@@ -115,6 +168,9 @@ NOKPROBE_SYMBOL(__spe_restore_host_state_vhe);
 void __spe_restore_guest_state_vhe(struct kvm_vcpu *vcpu,
 				   struct kvm_cpu_context *guest_ctxt)
 {
+	if (kvm_spe_profiling_stopped(vcpu))
+		return;
+
 	__spe_restore_common_state(guest_ctxt);
 
 	/*
diff --git a/arch/arm64/kvm/spe.c b/arch/arm64/kvm/spe.c
index 2630e777fe1d..69ca731ba9d3 100644
--- a/arch/arm64/kvm/spe.c
+++ b/arch/arm64/kvm/spe.c
@@ -140,6 +140,28 @@ void kvm_spe_sync_hwstate(struct kvm_vcpu *vcpu)
 	kvm_spe_update_irq(vcpu, true);
 }
 
+static bool kvm_spe_buffer_enabled(struct kvm_vcpu *vcpu)
+{
+	return !vcpu->arch.spe.irq_level &&
+	       (__vcpu_sys_reg(vcpu, PMBLIMITR_EL1) & BIT(SYS_PMBLIMITR_EL1_E_SHIFT));
+}
+
+bool kvm_spe_exit_to_user(struct kvm_vcpu *vcpu)
+{
+	u64 pmscr_enabled_mask = BIT(SYS_PMSCR_EL1_E0SPE_SHIFT) |
+				 BIT(SYS_PMSCR_EL1_E1SPE_SHIFT);
+
+	if (!(vcpu->arch.spe.flags & KVM_VCPU_SPE_STOP_USER_EXIT))
+		return false;
+
+	/*
+	 * We don't trap the guest dropping to EL0, so exit even if profiling is
+	 * disabled at EL1, but enabled at EL0.
+	 */
+	return kvm_spe_buffer_enabled(vcpu) &&
+	       (__vcpu_sys_reg(vcpu, PMSCR_EL1) & pmscr_enabled_mask);
+}
+
 void kvm_spe_write_sysreg(struct kvm_vcpu *vcpu, int reg, u64 val)
 {
 	__vcpu_sys_reg(vcpu, reg) = val;
@@ -215,6 +237,31 @@ static bool kvm_spe_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_spe_stop_user(struct kvm_vcpu *vcpu, int flags)
+{
+	struct kvm_vcpu_spe *spe = &vcpu->arch.spe;
+
+	if (flags & KVM_ARM_VCPU_SPE_STOP_TRAP) {
+		if (flags & ~KVM_ARM_VCPU_SPE_STOP_TRAP)
+			return -EINVAL;
+		spe->flags = KVM_VCPU_SPE_STOP_USER;
+	}
+
+	if (flags & KVM_ARM_VCPU_SPE_STOP_EXIT) {
+		if (flags & ~KVM_ARM_VCPU_SPE_STOP_EXIT)
+			return -EINVAL;
+		spe->flags = KVM_VCPU_SPE_STOP_USER_EXIT;
+	}
+
+	if (flags & KVM_ARM_VCPU_SPE_RESUME) {
+		if (flags & ~KVM_ARM_VCPU_SPE_RESUME)
+			return -EINVAL;
+		spe->flags = 0;
+	}
+
+	return 0;
+}
+
 int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 {
 	if (!kvm_vcpu_supports_spe(vcpu))
@@ -268,6 +315,8 @@ int kvm_spe_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 		if (!flags)
 			return -EINVAL;
+
+		return kvm_spe_stop_user(vcpu, flags);
 	}
 	}
 
@@ -293,6 +342,24 @@ int kvm_spe_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 		return 0;
 	}
+	case KVM_ARM_VCPU_SPE_STOP: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		struct kvm_vcpu_spe *spe = &vcpu->arch.spe;
+		int flag = 0;
+
+		if (!vcpu->arch.spe.initialized)
+			return -EAGAIN;
+
+		if (spe->flags & KVM_VCPU_SPE_STOP_USER)
+			flag = KVM_ARM_VCPU_SPE_STOP_TRAP;
+		else if (spe->flags & KVM_VCPU_SPE_STOP_USER_EXIT)
+			flag = KVM_ARM_VCPU_SPE_STOP_EXIT;
+
+		if (put_user(flag, uaddr))
+			return -EFAULT;
+
+		return 0;
+	}
 	}
 
 	return -ENXIO;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 064742cee425..cc711b081f31 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -608,7 +608,7 @@ static bool access_spe_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 {
 	int reg = r->reg;
 	u64 val = p->regval;
 
-	if (reg < PMBLIMITR_EL1) {
+	if (reg < PMBLIMITR_EL1 && !kvm_spe_profiling_stopped(vcpu)) {
 		print_sys_reg_msg(p,
 				  "Unsupported guest SPE register access at: %lx [%08lx]\n",
 				  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
 	}

From patchwork Wed Aug 25 16:18:13 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 37/39] KVM: arm64: Add PMSIDR_EL1 to the SPE register context
Date: Wed, 25 Aug 2021 17:18:13 +0100
Message-Id: <20210825161815.266051-38-alexandru.elisei@arm.com>
PMSIDR_EL1 is not part of the VCPU register context because the profiling
control registers were not trapped and the register is read-only. With the
introduction of the KVM_ARM_VCPU_SPE_STOP API, KVM will start trapping accesses
to the profiling control registers; add PMSIDR_EL1 to the VCPU register context
to prevent undefined exceptions in the guest.
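For illustration only (not part of the patch), this is what the newly trapped
access looks like from inside the guest, using the generic S3_0_C9_C9_7
encoding of PMSIDR_EL1. With MDCR_EL2.TPMS set, the mrs traps to KVM and
access_spe_reg() answers it from the cached PMSIDR_EL1 value instead of the
hardware register:

    /* Guest-side read of PMSIDR_EL1 (op0=3, op1=0, CRn=9, CRm=9, op2=7). */
    static inline unsigned long read_pmsidr_el1(void)
    {
            unsigned long val;

            asm volatile("mrs %0, S3_0_C9_C9_7" : "=r" (val));
            return val;
    }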
Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c         | 22 +++++++++++++++++++---
 2 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cc46f1406196..f866c4556ff9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -247,6 +247,7 @@ enum vcpu_sysreg {
 	PMSFCR_EL1,	/* Sampling Filter Control Register */
 	PMSEVFR_EL1,	/* Sampling Event Filter Register */
 	PMSLATFR_EL1,	/* Sampling Latency Filter Register */
+	PMSIDR_EL1,	/* Sampling Profiling ID Register */
 	PMBLIMITR_EL1,	/* Profiling Buffer Limit Address Register */
 	PMBPTR_EL1,	/* Profiling Buffer Write Pointer Register */
 	PMBSR_EL1,	/* Profiling Buffer Status/syndrome Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index cc711b081f31..1a85a0cedbec 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -603,6 +603,18 @@ static unsigned int spe_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
+static void reset_pmsidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	/*
+	 * When SPE is stopped by userspace, the guest reads the in-memory value
+	 * of the register. When SPE is resumed, accesses to the control
+	 * registers are not trapped and the guest reads the hardware
+	 * value. Reset PMSIDR_EL1 to the hardware value to avoid mismatches
+	 * between the two.
+	 */
+	vcpu_write_sys_reg(vcpu, read_sysreg_s(SYS_PMSIDR_EL1), PMSIDR_EL1);
+}
+
 static bool access_spe_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			   const struct sys_reg_desc *r)
 {
 	int reg = r->reg;
@@ -613,10 +625,14 @@ static bool access_spe_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			  *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
 	}
 
-	if (p->is_write)
+	if (p->is_write) {
+		if (reg == PMSIDR_EL1)
+			return write_to_read_only(vcpu, p, r);
+
 		kvm_spe_write_sysreg(vcpu, reg, val);
-	else
+	} else {
 		p->regval = kvm_spe_read_sysreg(vcpu, reg);
+	}
 
 	return true;
 }
@@ -1568,7 +1584,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SPE_SYS_REG(SYS_PMSFCR_EL1), .reg = PMSFCR_EL1 },
 	{ SPE_SYS_REG(SYS_PMSEVFR_EL1), .reg = PMSEVFR_EL1 },
 	{ SPE_SYS_REG(SYS_PMSLATFR_EL1), .reg = PMSLATFR_EL1 },
-	{ SPE_SYS_REG(SYS_PMSIDR_EL1), .reset = NULL },
+	{ SPE_SYS_REG(SYS_PMSIDR_EL1), .reset = reset_pmsidr, .reg = PMSIDR_EL1 },
 	{ SPE_SYS_REG(SYS_PMBLIMITR_EL1), .reg = PMBLIMITR_EL1 },
 	{ SPE_SYS_REG(SYS_PMBPTR_EL1), .reg = PMBPTR_EL1 },
 	{ SPE_SYS_REG(SYS_PMBSR_EL1), .reg = PMBSR_EL1 },

From patchwork Wed Aug 25 16:18:14 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 38/39] KVM: arm64: Make CONFIG_KVM_ARM_SPE depend on !CONFIG_NUMA_BALANCING
Date: Wed, 25 Aug 2021 17:18:14 +0100
Message-Id: <20210825161815.266051-39-alexandru.elisei@arm.com>
Automatic NUMA balancing is a performance strategy that Linux uses to reduce
the cost associated with memory accesses by having a task use the memory
closest to the NUMA node where the task is executing. This is accomplished by
triggering periodic page faults to examine the memory locations that a task
uses and to decide whether page migration is necessary. The periodic page
faults that drive automatic NUMA balancing are triggered by clearing
permissions on certain pages from the task's address space. Clearing the
permissions invokes mmu_notifier_invalidate_range_start(), which causes the
guest memory to be unmapped from stage 2.
As a result, SPE can start reporting stage 2 faults, which KVM has no way of
handling. Make CONFIG_KVM_ARM_SPE depend on !CONFIG_NUMA_BALANCING to keep SPE
usable for a guest.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/kvm/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index c6ad5a05efb3..1ea34eb29fb4 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -48,7 +48,7 @@ source "virt/kvm/Kconfig"
 
 config KVM_ARM_SPE
 	bool "Virtual Statistical Profiling Extension (SPE) support"
-	depends on ARM_SPE_PMU=y
+	depends on ARM_SPE_PMU=y && !NUMA_BALANCING
 	default y
 	help
 	  Adds support for Statistical Profiling Extension (SPE) in virtual
 	  machines.

From patchwork Wed Aug 25 16:18:15 2021
From: Alexandru Elisei
Subject: [RFC PATCH v4 39/39] KVM: arm64: Allow userspace to enable SPE for guests
Date: Wed, 25 Aug 2021 17:18:15 +0100
Message-Id: <20210825161815.266051-40-alexandru.elisei@arm.com>
Everything is in place to emulate SPE for a guest; allow userspace to set the
VCPU feature.

Signed-off-by: Alexandru Elisei
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/kvm/arm.c              | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f866c4556ff9..040da3b0cf2b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -39,7 +39,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 7
+#define KVM_VCPU_MAX_FEATURES 8
 
 #define KVM_REQ_SLEEP \
 	KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b7aae25bb9da..8016b98a8ac3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -309,7 +309,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 	case KVM_CAP_ARM_SPE:
 		kvm_spe_init_supported_cpus();
-		r = 0;
+		r = kvm_supports_spe();
 		break;
 	default:
 		r = 0;
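Seen from the VMM side, the series boils down to probing the capability and
requesting the feature at vcpu initialization. A minimal sketch, assuming
KVM_ARM_VCPU_SPE is the name of the feature bit defined earlier in the series:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Initialize a vcpu, enabling SPE when the host supports it. */
    static int vcpu_init_with_spe(int vm_fd, int vcpu_fd)
    {
            struct kvm_vcpu_init init;

            if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
                    return -1;

            if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SPE) > 0)
                    init.features[0] |= 1U << KVM_ARM_VCPU_SPE;

            return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }

Note that, with this final patch, KVM_CHECK_EXTENSION returns nonzero only when
kvm_supports_spe() is true, so the feature bit is requested only on hosts where
SPE emulation is actually available.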