From: James Clark <james.clark@arm.com>
To: coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: James Clark, Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Leo Yan, Alexander Shishkin, Anshuman Khandual, Rob Herring, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/3] arm64: KVM: Add support for exclude_guest and exclude_host for ETM
Date: Fri, 4 Aug 2023 11:13:11 +0100
Message-Id: <20230804101317.460697-2-james.clark@arm.com>
In-Reply-To: <20230804101317.460697-1-james.clark@arm.com>
References: <20230804101317.460697-1-james.clark@arm.com>

Add an interface that the Coresight driver can use to set the exclude
settings for the current CPU. These will be used to configure
TRFCR_EL1.
The settings must be copied to the vCPU before each run, in the same
way that PMU events are, because the per-CPU struct isn't accessible
in protected mode. This is only needed for nVHE; otherwise it works
automatically with TRFCR_EL{1,2}.

Unfortunately this can't be gated on CONFIG_CORESIGHT because
Coresight can be built as a module. It can, however, be gated on
CONFIG_PERF_EVENTS because that is required by Coresight.

Signed-off-by: James Clark <james.clark@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 10 ++++++-
 arch/arm64/kvm/Makefile           |  1 +
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/etm.c              | 48 +++++++++++++++++++++++++++++++
 include/kvm/etm.h                 | 43 +++++++++++++++++++++++++++
 5 files changed, 102 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/etm.c
 create mode 100644 include/kvm/etm.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d7b1403a3fb2..f33262217c84 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include <kvm/etm.h>
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -500,7 +501,7 @@ struct kvm_vcpu_arch {
 	u8 cflags;
 
 	/* Input flags to the hypervisor code, potentially cleared after use */
-	u8 iflags;
+	u16 iflags;
 
 	/* State flags for kernel bookkeeping, unused by the hypervisor code */
 	u8 sflags;
@@ -541,6 +542,9 @@ struct kvm_vcpu_arch {
 		u64 pmscr_el1;
 		/* Self-hosted trace */
 		u64 trfcr_el1;
+		/* exclude_guest settings for nVHE */
+		struct kvm_etm_event etm_event;
+
 	} host_debug_state;
 
 	/* VGIC state */
@@ -713,6 +717,8 @@ struct kvm_vcpu_arch {
 #define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
 /* vcpu running in HYP context */
 #define VCPU_HYP_CONTEXT	__vcpu_single_flag(iflags, BIT(7))
+/* Save TRFCR and apply exclude_guest rules */
+#define DEBUG_STATE_SAVE_TRFCR	__vcpu_single_flag(iflags, BIT(8))
 
 /* SVE enabled for host EL0 */
 #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
@@ -1096,6 +1102,8 @@ void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu);
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u32 clr);
 bool kvm_set_pmuserenr(u64 val);
+void kvm_etm_set_events(struct perf_event_attr *attr);
+void kvm_etm_clr_events(void);
 #else
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c0c050e53157..0faff57423c4 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 vgic/vgic-its.o vgic/vgic-debug.o
 
 kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
+kvm-$(CONFIG_PERF_EVENTS) += etm.o
 
 always-y := hyp_constants.h hyp-constants.s
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b1a9d47fb2f3..7bd5975328a3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -952,6 +952,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		kvm_vgic_flush_hwstate(vcpu);
 
 		kvm_pmu_update_vcpu_events(vcpu);
+		kvm_etm_update_vcpu_events(vcpu);
 
 		/*
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
diff --git a/arch/arm64/kvm/etm.c b/arch/arm64/kvm/etm.c
new file mode 100644
index 000000000000..359c37745de2
--- /dev/null
+++ b/arch/arm64/kvm/etm.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include 
+
+#include 
+
+static DEFINE_PER_CPU(struct kvm_etm_event, kvm_etm_events);
+
+struct kvm_etm_event *kvm_get_etm_event(void)
+{
+	return this_cpu_ptr(&kvm_etm_events);
+}
+
+void kvm_etm_set_events(struct perf_event_attr *attr)
+{
+	struct kvm_etm_event *etm_event;
+
+	/*
+	 * Exclude guest option only requires extra work with nVHE.
+	 * Otherwise it works automatically with TRFCR_EL{1,2}.
+	 */
+	if (has_vhe())
+		return;
+
+	etm_event = kvm_get_etm_event();
+
+	etm_event->exclude_guest = attr->exclude_guest;
+	etm_event->exclude_host = attr->exclude_host;
+	etm_event->exclude_kernel = attr->exclude_kernel;
+	etm_event->exclude_user = attr->exclude_user;
+}
+EXPORT_SYMBOL_GPL(kvm_etm_set_events);
+
+void kvm_etm_clr_events(void)
+{
+	struct kvm_etm_event *etm_event;
+
+	if (has_vhe())
+		return;
+
+	etm_event = kvm_get_etm_event();
+
+	etm_event->exclude_guest = false;
+	etm_event->exclude_host = false;
+	etm_event->exclude_kernel = false;
+	etm_event->exclude_user = false;
+}
+EXPORT_SYMBOL_GPL(kvm_etm_clr_events);
diff --git a/include/kvm/etm.h b/include/kvm/etm.h
new file mode 100644
index 000000000000..95c4809fa2b0
--- /dev/null
+++ b/include/kvm/etm.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __KVM_DEBUG_H
+#define __KVM_DEBUG_H
+
+struct perf_event_attr;
+struct kvm_vcpu;
+
+#if IS_ENABLED(CONFIG_KVM) && IS_ENABLED(CONFIG_PERF_EVENTS)
+
+struct kvm_etm_event {
+	bool exclude_host;
+	bool exclude_guest;
+	bool exclude_kernel;
+	bool exclude_user;
+};
+
+struct kvm_etm_event *kvm_get_etm_event(void);
+void kvm_etm_clr_events(void);
+void kvm_etm_set_events(struct perf_event_attr *attr);
+
+/*
+ * Updates the vcpu's view of the etm events for this cpu. Must be
+ * called before every vcpu run after disabling interrupts, to ensure
+ * that an interrupt cannot fire and update the structure.
+ */
+#define kvm_etm_update_vcpu_events(vcpu)					\
+	do {									\
+		if (!has_vhe() && vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRFCR))	\
+			vcpu->arch.host_debug_state.etm_event = *kvm_get_etm_event(); \
+	} while (0)
+
+#else
+
+struct kvm_etm_event {};
+
+static inline void kvm_etm_update_vcpu_events(struct kvm_vcpu *vcpu) {}
+static inline void kvm_etm_set_events(struct perf_event_attr *attr) {}
+static inline void kvm_etm_clr_events(void) {}
+
+#endif
+
+#endif
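
As a usage sketch only: a perf-mode trace driver could publish its
exclude_* settings through the new interface roughly as below. Only
kvm_etm_set_events() and kvm_etm_clr_events() come from this patch;
the etm_event_start()/etm_event_stop() hooks and the surrounding code
are illustrative assumptions, not part of this series.

/*
 * Illustrative sketch -- hypothetical driver-side caller of the new
 * interface; only kvm_etm_set_events()/kvm_etm_clr_events() are added
 * by this patch.
 */
#include <linux/perf_event.h>

#include <kvm/etm.h>

static void etm_event_start(struct perf_event *event)
{
	/*
	 * Publish the event's exclude_* settings for this CPU so that
	 * KVM can copy them into the vCPU before the next run and
	 * apply them to TRFCR_EL1 on nVHE systems.
	 */
	kvm_etm_set_events(&event->attr);

	/* ... enable the trace unit here ... */
}

static void etm_event_stop(struct perf_event *event)
{
	/* ... disable the trace unit here ... */

	/* Drop the per-CPU exclude settings now that tracing has stopped */
	kvm_etm_clr_events();
}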