From patchwork Mon Dec 13 15:23:06 2021
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 12695932
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 will@kernel.org, mark.rutland@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: tglx@linutronix.de, mingo@redhat.com, peter.maydell@linaro.org
Subject: [PATCH v3 1/4] perf: Fix wrong name in comment for struct
 perf_cpu_context
Date: Mon, 13 Dec 2021 15:23:06 +0000
Message-Id: <20211213152309.158462-2-alexandru.elisei@arm.com>
In-Reply-To: <20211213152309.158462-1-alexandru.elisei@arm.com>
References: <20211213152309.158462-1-alexandru.elisei@arm.com>

Commit 0793a61d4df8 ("performance counters: core code") added the perf
subsystem (then called Performance Counters) to Linux, creating the
struct perf_cpu_context. The comment for the struct referred to it as a
"struct perf_counter_cpu_context".
Commit cdd6c482c9ff ("perf: Do the big rename: Performance Counters ->
Performance Events") changed the comment to refer to a "struct
perf_event_cpu_context", which was still the wrong name for the struct.

Change the comment to say "struct perf_cpu_context".

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 include/linux/perf_event.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 0dcfd265beed..14132570ea5d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -862,7 +862,7 @@ struct perf_event_context {
 #define PERF_NR_CONTEXTS	4
 
 /**
- * struct perf_event_cpu_context - per cpu event context structure
+ * struct perf_cpu_context - per cpu event context structure
  */
 struct perf_cpu_context {
 	struct perf_event_context	ctx;

From patchwork Mon Dec 13 15:23:07 2021
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 12695933
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 will@kernel.org, mark.rutland@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: tglx@linutronix.de, mingo@redhat.com, peter.maydell@linaro.org
Subject: [PATCH v3 2/4] KVM: arm64: Keep a list of probed PMUs
Date: Mon, 13 Dec 2021 15:23:07 +0000
Message-Id: <20211213152309.158462-3-alexandru.elisei@arm.com>
In-Reply-To: <20211213152309.158462-1-alexandru.elisei@arm.com>
References: <20211213152309.158462-1-alexandru.elisei@arm.com>

The ARM PMU driver calls kvm_host_pmu_init() after probing to tell KVM
that a hardware PMU is available for guest emulation. Heterogeneous
systems can have more than one PMU present, and the callback gets called
multiple times, once for each of them. Keep track of all the PMUs
available to KVM, as they're going to be needed later.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Reiji Watanabe
---
 arch/arm64/kvm/pmu-emul.c | 25 +++++++++++++++++++++++--
 include/kvm/arm_pmu.h     |  5 +++++
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index a5e4bbf5e68f..eb4be96f144d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -7,6 +7,7 @@
 #include <linux/cpu.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/list.h>
 #include <linux/perf_event.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/uaccess.h>
@@ -14,6 +15,9 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static LIST_HEAD(arm_pmus);
+static DEFINE_MUTEX(arm_pmus_lock);
+
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
@@ -742,9 +746,26 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 void kvm_host_pmu_init(struct arm_pmu *pmu)
 {
-	if (pmu->pmuver != 0 && pmu->pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-	    !kvm_arm_support_pmu_v3() && !is_protected_kvm_enabled())
+	struct arm_pmu_entry *entry;
+
+	if (pmu->pmuver == 0 || pmu->pmuver == ID_AA64DFR0_PMUVER_IMP_DEF ||
+	    is_protected_kvm_enabled())
+		return;
+
+	mutex_lock(&arm_pmus_lock);
+
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		goto out_unlock;
+
+	if (list_empty(&arm_pmus))
 		static_branch_enable(&kvm_arm_pmu_available);
+
+	entry->arm_pmu = pmu;
+	list_add_tail(&entry->entry, &arm_pmus);
+
+out_unlock:
+	mutex_unlock(&arm_pmus_lock);
 }
 
 static int kvm_pmu_probe_pmuver(void)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 90f21898aad8..e249c5f172aa 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -36,6 +36,11 @@ struct kvm_pmu {
 	struct irq_work overflow_work;
 };
 
+struct arm_pmu_entry {
+	struct list_head entry;
+	struct arm_pmu *arm_pmu;
+};
+
 #define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
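
The arm_pmus list and arm_pmus_lock introduced above get their first
reader in patch 3/4. As a minimal sketch of the intended lookup pattern,
assuming only the declarations added by this patch (kvm_find_arm_pmu() is
a hypothetical name; the actual in-tree consumer is
kvm_arm_pmu_v3_set_pmu() in the next patch):

	/* Return the registered arm_pmu with the given perf type, or NULL. */
	static struct arm_pmu *kvm_find_arm_pmu(int pmu_id)
	{
		struct arm_pmu_entry *entry;
		struct arm_pmu *found = NULL;

		mutex_lock(&arm_pmus_lock);
		list_for_each_entry(entry, &arm_pmus, entry) {
			if (entry->arm_pmu->pmu.type == pmu_id) {
				found = entry->arm_pmu;
				break;
			}
		}
		mutex_unlock(&arm_pmus_lock);

		return found;
	}

A mutex (rather than a spinlock) is sufficient here because both
registration at probe time and lookup from the VCPU ioctl path run in
process context.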
From patchwork Mon Dec 13 15:23:08 2021
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 12695934
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 will@kernel.org, mark.rutland@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: tglx@linutronix.de, mingo@redhat.com, peter.maydell@linaro.org
Subject: [PATCH v3 3/4] KVM: arm64: Add KVM_ARM_VCPU_PMU_V3_SET_PMU attribute
Date: Mon, 13 Dec 2021 15:23:08 +0000
Message-Id: <20211213152309.158462-4-alexandru.elisei@arm.com>
In-Reply-To: <20211213152309.158462-1-alexandru.elisei@arm.com>
References: <20211213152309.158462-1-alexandru.elisei@arm.com>

When KVM creates an event and there is more than one PMU present on the
system, perf_init_event() will go through the list of available PMUs and
will choose the first one that can create the event. The order of the
PMUs in this list depends on the probe order, which can change under
various circumstances, for example if the order of the PMU nodes changes
in the DTB or if asynchronous driver probing is enabled on the kernel
command line (with the driver_async_probe=armv8-pmu option).

Another consequence of this approach is that, on heterogeneous systems,
all virtual machines that KVM creates will use the same PMU.
This might cause unexpected behaviour for userspace: when a VCPU is
executing on the physical CPU that uses this PMU, PMU events in the guest
work correctly; but when the same VCPU executes on another CPU, PMU events
in the guest will suddenly stop counting.

Fortunately, the perf core allows the user to specify on which PMU to
create an event by using the perf_event_attr->type field, which is used by
perf_init_event() as an index in the radix tree of available PMUs.

Add the KVM_ARM_VCPU_PMU_V3_CTRL(KVM_ARM_VCPU_PMU_V3_SET_PMU) VCPU
attribute to allow userspace to specify the arm_pmu that KVM will use when
creating events for that VCPU. KVM will make no attempt to run the VCPU on
the physical CPUs that share this PMU, leaving it up to userspace to
manage the VCPU threads' affinity accordingly.

Setting the PMU for a VCPU is an all-or-nothing affair to avoid exposing
an asymmetric system to the guest: either all VCPUs have the same PMU set,
or none of the VCPUs have a PMU set. Attempting to do something in between
will result in an error being returned when doing KVM_ARM_VCPU_PMU_V3_INIT.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
Checking that all VCPUs have the same PMU is done when the PMU is
initialized because setting the VCPU PMU is optional, and KVM cannot know
what the user intends until the KVM_ARM_VCPU_PMU_V3_INIT ioctl, which
prevents further changes to the VCPU PMU.

vcpu->arch.pmu.created has been changed to an atomic variable because
changes to the VCPU PMU state now need to be observable by all physical
CPUs.

 Documentation/virt/kvm/devices/vcpu.rst | 30 ++++++++-
 arch/arm64/include/uapi/asm/kvm.h       |  1 +
 arch/arm64/kvm/pmu-emul.c               | 88 ++++++++++++++++++++-----
 include/kvm/arm_pmu.h                   |  4 +-
 tools/arch/arm64/include/uapi/asm/kvm.h |  1 +
 5 files changed, 104 insertions(+), 20 deletions(-)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index 60a29972d3f1..b918669bf925 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -49,8 +49,8 @@ Returns:
 	 =======  ======================================================
 	 -EEXIST  Interrupt number already used
 	 -ENODEV  PMUv3 not supported or GIC not initialized
-	 -ENXIO   PMUv3 not supported, missing VCPU feature or interrupt
-		  number not set
+	 -ENXIO   PMUv3 not supported, missing VCPU feature, interrupt
+		  number not set or mismatched PMUs set
 	 -EBUSY   PMUv3 already initialized
 	 =======  ======================================================
 
@@ -104,6 +104,32 @@ hardware event. Filtering event 0x1E (CHAIN) has no effect either, as it
 isn't strictly speaking an event. Filtering the cycle counter is possible
 using event 0x11 (CPU_CYCLES).
 
+1.4 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_SET_PMU
+------------------------------------------
+
+:Parameters: in kvm_device_attr.addr the address of an int representing the PMU
+             identifier.
+
+:Returns:
+
+	 =======  ===============================================
+	 -EBUSY   PMUv3 already initialized
+	 -EFAULT  Error accessing the PMU identifier
+	 -ENXIO   PMU not found
+	 -ENODEV  PMUv3 not supported or GIC not initialized
+	 -ENOMEM  Could not allocate memory
+	 =======  ===============================================
+
+Request that the VCPU uses the specified hardware PMU when creating guest events
+for the purpose of PMU emulation. The PMU identifier can be read from the "type"
+file for the desired PMU instance under /sys/devices (or, equivalently,
+/sys/bus/event_source). This attribute is particularly useful on heterogeneous
+systems where there are at least two CPU PMUs on the system. All VCPUs must have
+the same PMU, otherwise KVM_ARM_VCPU_PMU_V3_INIT will fail.
+
+Note that KVM will not make any attempts to run the VCPU on the physical CPUs
+associated with the PMU specified by this attribute. This is entirely left to
+userspace.
+
 2. GROUP: KVM_ARM_VCPU_TIMER_CTRL
 =================================
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index b3edde68bc3e..1d0a0a2a9711 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -362,6 +362,7 @@ struct kvm_arm_copy_mte_tags {
 #define   KVM_ARM_VCPU_PMU_V3_IRQ	0
 #define   KVM_ARM_VCPU_PMU_V3_INIT	1
 #define   KVM_ARM_VCPU_PMU_V3_FILTER	2
+#define   KVM_ARM_VCPU_PMU_V3_SET_PMU	3
 #define KVM_ARM_VCPU_TIMER_CTRL		1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER		0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER		1
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index eb4be96f144d..8de38d7fa493 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -24,9 +24,16 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
 
 #define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
 
-static u32 kvm_pmu_event_mask(struct kvm *kvm)
+static u32 kvm_pmu_event_mask(struct kvm_vcpu *vcpu)
 {
-	switch (kvm->arch.pmuver) {
+	unsigned int pmuver;
+
+	if (vcpu->arch.pmu.arm_pmu)
+		pmuver = vcpu->arch.pmu.arm_pmu->pmuver;
+	else
+		pmuver = vcpu->kvm->arch.pmuver;
+
+	switch (pmuver) {
 	case ID_AA64DFR0_PMUVER_8_0:
 		return GENMASK(9, 0);
 	case ID_AA64DFR0_PMUVER_8_1:
@@ -34,7 +41,7 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
 	case ID_AA64DFR0_PMUVER_8_5:
 		return GENMASK(15, 0);
 	default:		/* Shouldn't be here, just for sanity */
-		WARN_ONCE(1, "Unknown PMU version %d\n", kvm->arch.pmuver);
+		WARN_ONCE(1, "Unknown PMU version %d\n", pmuver);
 		return 0;
 	}
 }
@@ -119,7 +126,7 @@ static bool kvm_pmu_idx_has_chain_evtype(struct kvm_vcpu *vcpu, u64 select_idx)
 		return false;
 
 	reg = PMEVTYPER0_EL0 + select_idx;
-	eventsel = __vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu->kvm);
+	eventsel = __vcpu_sys_reg(vcpu, reg) & kvm_pmu_event_mask(vcpu);
 
 	return eventsel == ARMV8_PMUV3_PERFCTR_CHAIN;
 }
@@ -534,7 +541,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 
 		/* PMSWINC only applies to ... SW_INC! */
 		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
-		type &= kvm_pmu_event_mask(vcpu->kvm);
+		type &= kvm_pmu_event_mask(vcpu);
 		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
 			continue;
@@ -602,6 +609,7 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct arm_pmu *arm_pmu = pmu->arm_pmu;
 	struct kvm_pmc *pmc;
 	struct perf_event *event;
 	struct perf_event_attr attr;
@@ -622,7 +630,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	if (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		eventsel = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
 	else
-		eventsel = data & kvm_pmu_event_mask(vcpu->kvm);
+		eventsel = data & kvm_pmu_event_mask(vcpu);
 
 	/* Software increment event doesn't need to be backed by a perf event */
 	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR)
@@ -637,8 +645,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 		return;
 
 	memset(&attr, 0, sizeof(struct perf_event_attr));
-	attr.type = PERF_TYPE_RAW;
-	attr.size = sizeof(attr);
+	attr.type = arm_pmu ? arm_pmu->pmu.type : PERF_TYPE_RAW;
 	attr.pinned = 1;
 	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, pmc->idx);
 	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
@@ -733,7 +740,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 	mask  =  ARMV8_PMU_EVTYPE_MASK;
 	mask &= ~ARMV8_PMU_EVTYPE_EVENT;
-	mask |= kvm_pmu_event_mask(vcpu->kvm);
+	mask |= kvm_pmu_event_mask(vcpu);
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
@@ -836,7 +843,7 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 	if (!bmap)
 		return val;
 
-	nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
+	nr_events = kvm_pmu_event_mask(vcpu) + 1;
 
 	for (i = 0; i < 32; i += 8) {
 		u64 byte;
@@ -857,7 +864,7 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return 0;
 
-	if (!vcpu->arch.pmu.created)
+	if (!atomic_read(&vcpu->arch.pmu.created))
 		return -EINVAL;
 
 	/*
@@ -887,15 +894,20 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
 
 static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 {
-	if (irqchip_in_kernel(vcpu->kvm)) {
-		int ret;
+	struct arm_pmu *arm_pmu = vcpu->arch.pmu.arm_pmu;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_vcpu *v;
+	int ret = 0;
+	int i;
 
+	if (irqchip_in_kernel(kvm)) {
 		/*
 		 * If using the PMU with an in-kernel virtual GIC
 		 * implementation, we require the GIC to be already
 		 * initialized when initializing the PMU.
 		 */
-		if (!vgic_initialized(vcpu->kvm))
+		if (!vgic_initialized(kvm))
 			return -ENODEV;
 
 		if (!kvm_arm_pmu_irq_initialized(vcpu))
@@ -910,7 +922,16 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 	init_irq_work(&vcpu->arch.pmu.overflow_work,
 		      kvm_pmu_perf_overflow_notify_vcpu);
 
-	vcpu->arch.pmu.created = true;
+	atomic_set(&vcpu->arch.pmu.created, 1);
+
+	kvm_for_each_vcpu(i, v, kvm) {
+		if (!atomic_read(&v->arch.pmu.created))
+			continue;
+
+		if (v->arch.pmu.arm_pmu != arm_pmu)
+			return -ENXIO;
+	}
+
 	return 0;
 }
@@ -940,12 +961,35 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
 	return true;
 }
 
+static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
+{
+	struct kvm_pmu *kvm_pmu = &vcpu->arch.pmu;
+	struct arm_pmu_entry *entry;
+	struct arm_pmu *arm_pmu;
+	int ret = -ENXIO;
+
+	mutex_lock(&arm_pmus_lock);
+
+	list_for_each_entry(entry, &arm_pmus, entry) {
+		arm_pmu = entry->arm_pmu;
+		if (arm_pmu->pmu.type == pmu_id) {
+			kvm_pmu->arm_pmu = arm_pmu;
+			ret = 0;
+			goto out_unlock;
+		}
+	}
+
+out_unlock:
+	mutex_unlock(&arm_pmus_lock);
+	return ret;
+}
+
 int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 {
 	if (!kvm_vcpu_has_pmu(vcpu))
 		return -ENODEV;
 
-	if (vcpu->arch.pmu.created)
+	if (atomic_read(&vcpu->arch.pmu.created))
 		return -EBUSY;
 
 	if (!vcpu->kvm->arch.pmuver)
@@ -984,7 +1028,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 		struct kvm_pmu_event_filter filter;
 		int nr_events;
 
-		nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
+		nr_events = kvm_pmu_event_mask(vcpu) + 1;
 
 		uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
@@ -1026,6 +1070,15 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 		return 0;
 	}
+	case KVM_ARM_VCPU_PMU_V3_SET_PMU: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int pmu_id;
+
+		if (get_user(pmu_id, uaddr))
+			return -EFAULT;
+
+		return kvm_arm_pmu_v3_set_pmu(vcpu, pmu_id);
+	}
 	case KVM_ARM_VCPU_PMU_V3_INIT:
 		return kvm_arm_pmu_v3_init(vcpu);
 	}
@@ -1063,6 +1116,7 @@ int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 	case KVM_ARM_VCPU_PMU_V3_IRQ:
 	case KVM_ARM_VCPU_PMU_V3_INIT:
 	case KVM_ARM_VCPU_PMU_V3_FILTER:
+	case KVM_ARM_VCPU_PMU_V3_SET_PMU:
 		if (kvm_vcpu_has_pmu(vcpu))
 			return 0;
 	}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e249c5f172aa..892728f85b25 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -7,6 +7,7 @@
 #ifndef __ASM_ARM_KVM_PMU_H
 #define __ASM_ARM_KVM_PMU_H
 
+#include <linux/perf/arm_pmu.h>
 #include <linux/perf_event.h>
 #include <asm/perf_event.h>
 
@@ -31,9 +32,10 @@ struct kvm_pmu {
 	int irq_num;
 	struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
 	DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
-	bool created;
+	atomic_t created;
 	bool irq_level;
 	struct irq_work overflow_work;
+	struct arm_pmu *arm_pmu;
 };
 
 struct arm_pmu_entry {
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index b3edde68bc3e..1d0a0a2a9711 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -362,6 +362,7 @@ struct kvm_arm_copy_mte_tags {
 #define   KVM_ARM_VCPU_PMU_V3_IRQ	0
 #define   KVM_ARM_VCPU_PMU_V3_INIT	1
 #define   KVM_ARM_VCPU_PMU_V3_FILTER	2
+#define   KVM_ARM_VCPU_PMU_V3_SET_PMU	3
 #define KVM_ARM_VCPU_TIMER_CTRL		1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER		0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER		1
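
To illustrate the intended userspace flow: the VMM reads the PMU's "type"
value from sysfs and sets it on every VCPU before KVM_ARM_VCPU_PMU_V3_INIT.
A minimal sketch, not taken from the series; the PMU name, the helper
names and the error handling are assumptions:

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Read /sys/bus/event_source/devices/<name>/type, e.g. for
	 * a PMU named "armv8_pmuv3_0" (name is only an example). */
	static int read_pmu_type(const char *name)
	{
		char path[256];
		int type = -1;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/bus/event_source/devices/%s/type", name);
		f = fopen(path, "r");
		if (!f)
			return -1;
		if (fscanf(f, "%d", &type) != 1)
			type = -1;
		fclose(f);
		return type;
	}

	/* Must be called before KVM_ARM_VCPU_PMU_V3_INIT, on every VCPU. */
	static int vcpu_set_pmu(int vcpu_fd, int pmu_id)
	{
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr  = KVM_ARM_VCPU_PMU_V3_SET_PMU,
			.addr  = (__u64)(unsigned long)&pmu_id,
		};

		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}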
From patchwork Mon Dec 13 15:23:09 2021
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 12695935
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
 will@kernel.org, mark.rutland@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: tglx@linutronix.de, mingo@redhat.com, peter.maydell@linaro.org
Subject: [PATCH v3 4/4] KVM: arm64: Refuse to run VCPU if the PMU doesn't
 match the physical CPU
Date: Mon, 13 Dec 2021 15:23:09 +0000
Message-Id: <20211213152309.158462-5-alexandru.elisei@arm.com>
In-Reply-To: <20211213152309.158462-1-alexandru.elisei@arm.com>
References: <20211213152309.158462-1-alexandru.elisei@arm.com>

Userspace can assign a PMU to a VCPU with the KVM_ARM_VCPU_PMU_V3_SET_PMU
device ioctl. If the VCPU is scheduled on a physical CPU which has a
different PMU, the perf events needed to emulate a guest PMU won't be
scheduled in and the guest performance counters will stop counting. Treat
it as a userspace error and refuse to run the VCPU in this situation.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 Documentation/virt/kvm/devices/vcpu.rst |  6 ++++-
 arch/arm64/include/asm/kvm_host.h       | 12 ++++++++++
 arch/arm64/include/uapi/asm/kvm.h       |  3 +++
 arch/arm64/kvm/arm.c                    | 29 +++++++++++++++++++++++--
 arch/arm64/kvm/pmu-emul.c               |  1 +
 5 files changed, 48 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
index b918669bf925..dd8348879a8e 100644
--- a/Documentation/virt/kvm/devices/vcpu.rst
+++ b/Documentation/virt/kvm/devices/vcpu.rst
@@ -129,7 +129,11 @@ the same PMU, otherwise KVM_ARM_VCPU_PMU_V3_INIT will fail.
 
 Note that KVM will not make any attempts to run the VCPU on the physical CPUs
 associated with the PMU specified by this attribute. This is entirely left to
-userspace.
+userspace. However, attempting to run the VCPU on a physical CPU not supported
+by the PMU will fail and KVM_RUN will return with
+exit_reason = KVM_EXIT_FAIL_ENTRY and populate the fail_entry struct by setting
+the hardware_entry_failure_reason field to KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED
+and the cpu field to the processor id.
 
 2. GROUP: KVM_ARM_VCPU_TIMER_CTRL
 =================================
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2a5f7f38006f..0c453f2e48b6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -385,6 +385,8 @@ struct kvm_vcpu_arch {
 		u64 last_steal;
 		gpa_t base;
 	} steal;
+
+	cpumask_var_t supported_cpus;
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -420,6 +422,7 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
 #define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active  */
 #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active  */
+#define KVM_ARM64_ON_UNSUPPORTED_CPU	(1 << 14) /* Physical CPU not in supported_cpus */
 
 #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
 				 KVM_GUESTDBG_USE_SW_BP | \
@@ -460,6 +463,15 @@ struct kvm_vcpu_arch {
 #define vcpu_has_ptrauth(vcpu)		false
 #endif
 
+#define vcpu_on_unsupported_cpu(vcpu)					\
+	((vcpu)->arch.flags & KVM_ARM64_ON_UNSUPPORTED_CPU)
+
+#define vcpu_set_on_unsupported_cpu(vcpu)				\
+	((vcpu)->arch.flags |= KVM_ARM64_ON_UNSUPPORTED_CPU)
+
+#define vcpu_clear_on_unsupported_cpu(vcpu)				\
+	((vcpu)->arch.flags &= ~KVM_ARM64_ON_UNSUPPORTED_CPU)
+
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.regs)
 
 /*
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 1d0a0a2a9711..d49f714f48e6 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -414,6 +414,9 @@ struct kvm_arm_copy_mte_tags {
 #define KVM_PSCI_RET_INVAL		PSCI_RET_INVALID_PARAMS
 #define KVM_PSCI_RET_DENIED		PSCI_RET_DENIED
 
+/* run->fail_entry.hardware_entry_failure_reason codes. */
+#define KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED	(1ULL << 0)
+
 #endif
 
 #endif /* __ARM_KVM_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e4727dc771bf..373e6a3d7221 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -327,6 +327,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
 
+	if (!zalloc_cpumask_var(&vcpu->arch.supported_cpus, GFP_KERNEL))
+		return -ENOMEM;
+	cpumask_copy(vcpu->arch.supported_cpus, cpu_possible_mask);
+
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
@@ -340,9 +344,16 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	err = kvm_vgic_vcpu_init(vcpu);
 	if (err)
-		return err;
+		goto out_err;
+
+	err = create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
+	if (err)
+		goto out_err;
+	return 0;
 
-	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
+out_err:
+	free_cpumask_var(vcpu->arch.supported_cpus);
+	return err;
 }
 
 void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
@@ -354,6 +365,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.has_run_once && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);
 
+	free_cpumask_var(vcpu->arch.supported_cpus);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
@@ -432,6 +444,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (vcpu_has_ptrauth(vcpu))
 		vcpu_ptrauth_disable(vcpu);
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
+
+	if (!cpumask_test_cpu(smp_processor_id(), vcpu->arch.supported_cpus))
+		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@@ -444,6 +459,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_vgic_put(vcpu);
 	kvm_vcpu_pmu_restore_host(vcpu);
 
+	vcpu_clear_on_unsupported_cpu(vcpu);
 	vcpu->cpu = -1;
 }
@@ -759,6 +775,15 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
 		}
 	}
 
+	if (unlikely(vcpu_on_unsupported_cpu(vcpu))) {
+		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+		run->fail_entry.hardware_entry_failure_reason
+			= KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED;
+		run->fail_entry.cpu = smp_processor_id();
+		*ret = 0;
+		return true;
+	}
+
 	return kvm_request_pending(vcpu) ||
 			need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) ||
 			xfer_to_guest_mode_work_pending();
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 8de38d7fa493..d0581e3258f0 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -974,6 +974,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 		arm_pmu = entry->arm_pmu;
 		if (arm_pmu->pmu.type == pmu_id) {
 			kvm_pmu->arm_pmu = arm_pmu;
+			cpumask_copy(vcpu->arch.supported_cpus, &arm_pmu->supported_cpus);
 			ret = 0;
 			goto out_unlock;
 		}
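
For completeness, a sketch of the userspace side of the new exit,
assuming a vcpu_fd from KVM_CREATE_VCPU and a run pointer mmap-ed from it;
the recovery shown is only one possible policy, not something the series
prescribes:

	if (ioctl(vcpu_fd, KVM_RUN, 0) == 0 &&
	    run->exit_reason == KVM_EXIT_FAIL_ENTRY &&
	    (run->fail_entry.hardware_entry_failure_reason &
	     KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED)) {
		/* fail_entry.cpu holds the offending physical CPU. */
		fprintf(stderr, "VCPU ran on unsupported CPU %u\n",
			run->fail_entry.cpu);
		/* e.g. re-pin the thread to the PMU's CPUs and retry KVM_RUN */
	}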