From patchwork Wed Jan 19 18:28:16 2022
X-Patchwork-Submitter: David Dunn
X-Patchwork-Id: 12717711
Date: Wed, 19 Jan 2022 18:28:16 +0000
Message-Id: <20220119182818.3641304-1-daviddunn@google.com>
Subject: [PATCH 1/3] Provide VM capability to disable PMU virtualization for
 individual VMs
From: David Dunn <daviddunn@google.com>
To: kvm@vger.kernel.org, pbonzini@redhat.com, like.xu.linux@gmail.com,
 jmattson@google.com, cloudliang@tencent.com
Cc: daviddunn@google.com

When PMU virtualization is enabled via the module parameter, usermode can
disable PMU virtualization on individual VMs using this new capability.

This provides a uniform way to disable PMU virtualization on x86. Since
AMD doesn't have a CPUID bit for PMU support, disabling PMU virtualization
requires some other state to indicate whether the PMU-related MSRs are
ignored.

Since KVM_GET_SUPPORTED_CPUID reports the maximal CPUID information based
on the module parameters, usermode will need to adjust CPUID when
disabling PMU virtualization on individual VMs. On Intel CPUs, the change
to PMU enablement will not take effect on existing vCPUs until SET_CPUID2
is invoked.

Signed-off-by: David Dunn <daviddunn@google.com>
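As context for reviewers, the intended userspace flow is a single
VM-scoped KVM_ENABLE_CAP call. A minimal sketch (not part of the patch;
error handling elided, and the helper name is made up):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        /* Sketch: disable the vPMU for one VM; vm_fd is an open KVM VM fd. */
        static int vm_disable_pmu(int vm_fd)
        {
                struct kvm_enable_cap cap = {
                        .cap = KVM_CAP_ENABLE_PMU,
                        .args = { 0 },  /* 0 = disable, 1 = enable; other values -EINVAL */
                };

                return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
        }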
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/svm/pmu.c          |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    |  2 +-
 arch/x86/kvm/x86.c              | 11 +++++++++++
 include/uapi/linux/kvm.h        |  1 +
 tools/include/uapi/linux/kvm.h  |  1 +
 6 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 682ad02a4e58..5cdcd4a7671b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1232,6 +1232,7 @@ struct kvm_arch {
 	hpa_t	hv_root_tdp;
 	spinlock_t hv_root_tdp_lock;
 #endif
+	bool enable_pmu;
 };

 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 5aa45f13b16d..605bcfb55625 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -101,7 +101,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);

-	if (!enable_pmu)
+	if (!enable_pmu || !vcpu->kvm->arch.enable_pmu)
 		return NULL;

 	switch (msr) {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 466d18fc0c5d..4c3885765027 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -487,7 +487,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->reserved_bits = 0xffffffff00200000ull;

 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
-	if (!entry || !enable_pmu)
+	if (!entry || !vcpu->kvm->arch.enable_pmu || !enable_pmu)
 		return;
 	eax.full = entry->eax;
 	edx.full = entry->edx;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 55518b7d3b96..9b640c5bb4f6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4326,6 +4326,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		if (r < sizeof(struct kvm_xsave))
 			r = sizeof(struct kvm_xsave);
 		break;
+	case KVM_CAP_ENABLE_PMU:
+		r = enable_pmu;
+		break;
 	}
 	default:
 		break;
@@ -5937,6 +5940,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		kvm->arch.exit_on_emulation_error = cap->args[0];
 		r = 0;
 		break;
+	case KVM_CAP_ENABLE_PMU:
+		r = -EINVAL;
+		if (!enable_pmu || cap->args[0] & ~1)
+			break;
+		kvm->arch.enable_pmu = cap->args[0];
+		r = 0;
+		break;
 	default:
 		r = -EINVAL;
 		break;
@@ -11562,6 +11572,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);

 	kvm->arch.guest_can_read_msr_platform_info = true;
+	kvm->arch.enable_pmu = true;

 #if IS_ENABLED(CONFIG_HYPERV)
 	spin_lock_init(&kvm->arch.hv_root_tdp_lock);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 9563d294f181..37cbcdffe773 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1133,6 +1133,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM 206
 #define KVM_CAP_VM_GPA_BITS 207
 #define KVM_CAP_XSAVE2 208
+#define KVM_CAP_ENABLE_PMU 209

 #ifdef KVM_CAP_IRQ_ROUTING
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index f066637ee206..e71712c71ab1 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1132,6 +1132,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_MTE 205
 #define KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM 206
 #define KVM_CAP_XSAVE2 207
+#define KVM_CAP_ENABLE_PMU 209

 #ifdef KVM_CAP_IRQ_ROUTING
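As the commit message notes, KVM_GET_SUPPORTED_CPUID will still advertise
the PMU, so on Intel the VMM must also scrub CPUID leaf 0xA and push the
result with KVM_SET_CPUID2 before existing vCPUs observe the change. A
sketch of that adjustment (not part of the patch; leaf 0xA is what
intel_pmu_refresh() consumes, and the helper name is made up):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        /* Sketch: zero CPUID leaf 0xA so an Intel guest enumerates no PMU,
         * then re-push the table so the vCPU's PMU state is refreshed. */
        static int vcpu_hide_pmu_cpuid(int vcpu_fd, struct kvm_cpuid2 *cpuid)
        {
                __u32 i;

                for (i = 0; i < cpuid->nent; i++) {
                        if (cpuid->entries[i].function == 0xa) {
                                cpuid->entries[i].eax = 0;
                                cpuid->entries[i].ebx = 0;
                                cpuid->entries[i].ecx = 0;
                                cpuid->entries[i].edx = 0;
                        }
                }

                return ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid);
        }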
From patchwork Wed Jan 19 18:28:17 2022
X-Patchwork-Submitter: David Dunn
X-Patchwork-Id: 12717712
Date: Wed, 19 Jan 2022 18:28:17 +0000
In-Reply-To: <20220119182818.3641304-1-daviddunn@google.com>
Message-Id: <20220119182818.3641304-2-daviddunn@google.com>
References: <20220119182818.3641304-1-daviddunn@google.com>
Subject: [PATCH 2/3] Verify that the PMU event filter works as expected.
From: David Dunn <daviddunn@google.com>
To: kvm@vger.kernel.org, pbonzini@redhat.com, like.xu.linux@gmail.com,
 jmattson@google.com, cloudliang@tencent.com
Cc: daviddunn@google.com

Note that the virtual PMU doesn't work as expected on AMD Zen CPUs (an
intercepted rdmsr is counted as a retired branch instruction), but the
PMU event filter does work.

This is a local application of a change authored by Jim Mattson and sent
to the kvm mailing list on Jan 14, 2022.

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: David Dunn <daviddunn@google.com>
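For reference, the new is_zen[123]() predicates below match CPUs by the
family/model encoded in CPUID.01H:EAX. A small worked example (the raw
EAX values and product generations are my assumption; the decoding is the
usual x86 family/model math, which is also what the selftest's
x86_family()/x86_model() helpers compute):

        #include <stdint.h>
        #include <stdio.h>

        /* Standard family decoding: AMD Zen parts report base family 0xf
         * plus an extended family in bits 27:20. */
        static uint32_t x86_family(uint32_t eax)
        {
                uint32_t family = (eax >> 8) & 0xf;

                return family == 0xf ? family + ((eax >> 20) & 0xff) : family;
        }

        /* Extended model (bits 19:16) prepended to base model (bits 7:4). */
        static uint32_t x86_model(uint32_t eax)
        {
                return ((eax >> 12) & 0xf0) | ((eax >> 4) & 0xf);
        }

        int main(void)
        {
                /* 0x00800f12 -> family 0x17 model 0x01 (Zen1)
                 * 0x00830f10 -> family 0x17 model 0x31 (Zen2, per the PPR cited in the test)
                 * 0x00a00f11 -> family 0x19 model 0x01 (Zen3) */
                printf("%#x/%#x\n", x86_family(0x00800f12), x86_model(0x00800f12));
                printf("%#x/%#x\n", x86_family(0x00830f10), x86_model(0x00830f10));
                printf("%#x/%#x\n", x86_family(0x00a00f11), x86_model(0x00a00f11));
                return 0;
        }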
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 194 +++++++++++++++---
 1 file changed, 163 insertions(+), 31 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 8ac99d4cbc73..aa104946e6e0 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -16,10 +16,38 @@
 #include "processor.h"

 /*
- * In lieue of copying perf_event.h into tools...
+ * In lieu of copying perf_event.h into tools...
  */
-#define ARCH_PERFMON_EVENTSEL_ENABLE BIT(22)
-#define ARCH_PERFMON_EVENTSEL_OS BIT(17)
+#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
+#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
+
+union cpuid10_eax {
+	struct {
+		unsigned int version_id:8;
+		unsigned int num_counters:8;
+		unsigned int bit_width:8;
+		unsigned int mask_length:8;
+	} split;
+	unsigned int full;
+};
+
+union cpuid10_ebx {
+	struct {
+		unsigned int no_unhalted_core_cycles:1;
+		unsigned int no_instructions_retired:1;
+		unsigned int no_unhalted_reference_cycles:1;
+		unsigned int no_llc_reference:1;
+		unsigned int no_llc_misses:1;
+		unsigned int no_branch_instruction_retired:1;
+		unsigned int no_branch_misses_retired:1;
+	} split;
+	unsigned int full;
+};
+
+/* End of stuff taken from perf_event.h. */
+
+/* Oddly, this isn't in perf_event.h. */
+#define ARCH_PERFMON_BRANCHES_RETIRED 5

 #define VCPU_ID 0
 #define NUM_BRANCHES 42
@@ -45,14 +73,15 @@
  * Preliminary Processor Programming Reference (PPR) for AMD Family
  * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
  * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
- * B1 Processors Volume 1 of 2
+ * B1 Processors Volume 1 of 2.
  */
 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)

 /*
  * This event list comprises Intel's eight architectural events plus
- * AMD's "branch instructions retired" for Zen[123].
+ * AMD's "retired branch instructions" for Zen[123] (and possibly
+ * other AMD CPUs).
  */
 static const uint64_t event_list[] = {
 	EVENT(0x3c, 0),
@@ -66,11 +95,45 @@ static const uint64_t event_list[] = {
 	AMD_ZEN_BR_RETIRED,
 };

+/*
+ * If we encounter a #GP during the guest PMU sanity check, then the guest
+ * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
+ */
+static void guest_gp_handler(struct ex_regs *regs)
+{
+	GUEST_SYNC(0);
+}
+
+/*
+ * Check that we can write a new value to the given MSR and read it back.
+ * The caller should provide a non-empty set of bits that are safe to flip.
+ *
+ * Return on success. GUEST_SYNC(0) on error.
+ */
+static void check_msr(uint32_t msr, uint64_t bits_to_flip)
+{
+	uint64_t v = rdmsr(msr) ^ bits_to_flip;
+
+	wrmsr(msr, v);
+	if (rdmsr(msr) != v)
+		GUEST_SYNC(0);
+
+	v ^= bits_to_flip;
+	wrmsr(msr, v);
+	if (rdmsr(msr) != v)
+		GUEST_SYNC(0);
+}
+
 static void intel_guest_code(void)
 {
-	uint64_t br0, br1;
+	check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
+	check_msr(MSR_P6_EVNTSEL0, 0xffff);
+	check_msr(MSR_IA32_PMC0, 0xffff);
+	GUEST_SYNC(1);

 	for (;;) {
+		uint64_t br0, br1;
+
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
@@ -83,15 +146,19 @@ static void intel_guest_code(void)
 }

 /*
- * To avoid needing a check for CPUID.80000001:ECX.PerfCtrExtCore[bit
- * 23], this code uses the always-available, legacy K7 PMU MSRs, which
- * alias to the first four of the six extended core PMU MSRs.
+ * To avoid needing a check for CPUID.80000001:ECX.PerfCtrExtCore[bit 23],
+ * this code uses the always-available, legacy K7 PMU MSRs, which alias to
+ * the first four of the six extended core PMU MSRs.
  */
 static void amd_guest_code(void)
 {
-	uint64_t br0, br1;
+	check_msr(MSR_K7_EVNTSEL0, 0xffff);
+	check_msr(MSR_K7_PERFCTR0, 0xffff);
+	GUEST_SYNC(1);

 	for (;;) {
+		uint64_t br0, br1;
+
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
@@ -102,7 +169,11 @@ static void amd_guest_code(void)
 	}
 }

-static uint64_t test_branches_retired(struct kvm_vm *vm)
+/*
+ * Run the VM to the next GUEST_SYNC(value), and return the value passed
+ * to the sync. Any other exit from the guest is fatal.
+ */
+static uint64_t run_vm_to_sync(struct kvm_vm *vm)
 {
 	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
 	struct ucall uc;
@@ -118,6 +189,25 @@ static uint64_t test_branches_retired(struct kvm_vm *vm)
 	return uc.args[1];
 }

+/*
+ * In a nested environment or if the vPMU is disabled, the guest PMU
+ * might not work as architected (accessing the PMU MSRs may raise
+ * #GP, or writes could simply be discarded). In those situations,
+ * there is no point in running these tests. The guest code will perform
+ * a sanity check and then GUEST_SYNC(success). In the case of failure,
+ * the behavior of the guest on resumption is undefined.
+ */
+static bool sanity_check_pmu(struct kvm_vm *vm)
+{
+	bool success;
+
+	vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);
+	success = run_vm_to_sync(vm);
+	vm_install_exception_handler(vm, GP_VECTOR, NULL);
+
+	return success;
+}
+
 static struct kvm_pmu_event_filter *make_pmu_event_filter(uint32_t nevents)
 {
 	struct kvm_pmu_event_filter *f;
@@ -143,6 +233,10 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 	return f;
 }

+/*
+ * Remove the first occurrence of 'event' (if any) from the filter's
+ * event list.
+ */
 static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 						 uint64_t event)
 {
@@ -160,9 +254,9 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }

-static void test_no_filter(struct kvm_vm *vm)
+static void test_without_filter(struct kvm_vm *vm)
 {
-	uint64_t count = test_branches_retired(vm);
+	uint64_t count = run_vm_to_sync(vm);

 	if (count != NUM_BRANCHES)
 		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
@@ -174,7 +268,7 @@ static uint64_t test_with_filter(struct kvm_vm *vm,
 				 struct kvm_pmu_event_filter *f)
 {
 	vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
-	return test_branches_retired(vm);
+	return run_vm_to_sync(vm);
 }

 static void test_member_deny_list(struct kvm_vm *vm)
@@ -231,40 +325,70 @@ static void test_not_member_allow_list(struct kvm_vm *vm)
 	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
 }

+/*
+ * Check for a non-zero PMU version, at least one general-purpose
+ * counter per logical processor, an EBX bit vector of length greater
+ * than 5, and EBX[5] clear.
+ */
+static bool check_intel_pmu_leaf(struct kvm_cpuid_entry2 *entry)
+{
+	union cpuid10_eax eax = { .full = entry->eax };
+	union cpuid10_ebx ebx = { .full = entry->ebx };
+
+	return eax.split.version_id && eax.split.num_counters > 0 &&
+	       eax.split.mask_length > ARCH_PERFMON_BRANCHES_RETIRED &&
+	       !ebx.split.no_branch_instruction_retired;
+}
+
 /*
  * Note that CPUID leaf 0xa is Intel-specific. This leaf should be
  * clear on AMD hardware.
  */
-static bool vcpu_supports_intel_br_retired(void)
+static bool use_intel_pmu(void)
 {
 	struct kvm_cpuid_entry2 *entry;
 	struct kvm_cpuid2 *cpuid;

 	cpuid = kvm_get_supported_cpuid();
 	entry = kvm_get_supported_cpuid_index(0xa, 0);
-	return entry &&
-		(entry->eax & 0xff) &&
-		(entry->eax >> 24) > 5 &&
-		!(entry->ebx & BIT(5));
+	return is_intel_cpu() && entry && check_intel_pmu_leaf(entry);
+}
+
+static bool is_zen1(uint32_t eax)
+{
+	return x86_family(eax) == 0x17 && x86_model(eax) <= 0x0f;
+}
+
+static bool is_zen2(uint32_t eax)
+{
+	return x86_family(eax) == 0x17 &&
+	       x86_model(eax) >= 0x30 && x86_model(eax) <= 0x3f;
+}
+
+static bool is_zen3(uint32_t eax)
+{
+	return x86_family(eax) == 0x19 && x86_model(eax) <= 0x0f;
+}

 /*
  * Determining AMD support for a PMU event requires consulting the AMD
- * PPR for the CPU or reference material derived therefrom.
+ * PPR for the CPU or reference material derived therefrom. The AMD
+ * test code herein has been verified to work on Zen1, Zen2, and Zen3.
+ *
+ * Feel free to add more AMD CPUs that are documented to support event
+ * select 0xc2 umask 0 as "retired branch instructions."
  */
-static bool vcpu_supports_amd_zen_br_retired(void)
+static bool use_amd_pmu(void)
 {
 	struct kvm_cpuid_entry2 *entry;
 	struct kvm_cpuid2 *cpuid;

 	cpuid = kvm_get_supported_cpuid();
 	entry = kvm_get_supported_cpuid_index(1, 0);
-	return entry &&
-		((x86_family(entry->eax) == 0x17 &&
-		  (x86_model(entry->eax) == 1 ||
-		   x86_model(entry->eax) == 0x31)) ||
-		 (x86_family(entry->eax) == 0x19 &&
-		  x86_model(entry->eax) == 1));
+	return is_amd_cpu() && entry &&
+	       (is_zen1(entry->eax) ||
+		is_zen2(entry->eax) ||
+		is_zen3(entry->eax));
 }

 int main(int argc, char *argv[])
@@ -282,19 +406,27 @@ int main(int argc, char *argv[])
 		exit(KSFT_SKIP);
 	}

-	if (vcpu_supports_intel_br_retired())
+	if (use_intel_pmu())
 		guest_code = intel_guest_code;
-	else if (vcpu_supports_amd_zen_br_retired())
+	else if (use_amd_pmu())
 		guest_code = amd_guest_code;

 	if (!guest_code) {
-		print_skip("Branch instructions retired not supported");
+		print_skip("Don't know how to test this guest PMU");
 		exit(KSFT_SKIP);
 	}

 	vm = vm_create_default(VCPU_ID, 0, guest_code);

-	test_no_filter(vm);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vm, VCPU_ID);
+
+	if (!sanity_check_pmu(vm)) {
+		print_skip("Guest PMU is not functional");
+		exit(KSFT_SKIP);
+	}
+
+	test_without_filter(vm);
 	test_member_deny_list(vm);
 	test_member_allow_list(vm);
 	test_not_member_deny_list(vm);

From patchwork Wed Jan 19 18:28:18 2022
X-Patchwork-Submitter: David Dunn
X-Patchwork-Id: 12717713
Date: Wed, 19 Jan 2022 18:28:18 +0000
In-Reply-To: <20220119182818.3641304-1-daviddunn@google.com>
Message-Id: <20220119182818.3641304-3-daviddunn@google.com>
References: <20220119182818.3641304-1-daviddunn@google.com>
Subject: [PATCH 3/3] Verify KVM_CAP_ENABLE_PMU in kvm pmu_event_filter_test
 selftest.
From: David Dunn <daviddunn@google.com>
To: kvm@vger.kernel.org, pbonzini@redhat.com, like.xu.linux@gmail.com,
 jmattson@google.com, cloudliang@tencent.com
Cc: daviddunn@google.com

After disabling the PMU using KVM_CAP_ENABLE_PMU, the PMU should no
longer be visible to the guest. On Intel, access to the PMU MSRs then
causes a #GP; on AMD, the counters are no longer functional.

Signed-off-by: David Dunn <daviddunn@google.com>
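The guest-visible effect being asserted is, in outline, the following (a
sketch distilled from this commit message and the patch 2 guest code;
rdmsr()/wrmsr()/GUEST_SYNC() are selftest helpers, and the exact AMD
behavior is my reading of the commit message, not verified here):

        static void probe_disabled_pmu(void)
        {
                /* Intel: PMU MSR accesses now raise #GP, so the sanity
                 * check's #GP handler fires and reports GUEST_SYNC(0). */
                rdmsr(MSR_CORE_PERF_GLOBAL_CTRL);

                /* AMD: accesses don't fault, but the counters are dead, so
                 * a written value is not read back and check_msr() fails. */
                wrmsr(MSR_K7_PERFCTR0, 0xbeef);
                if (rdmsr(MSR_K7_PERFCTR0) != 0xbeef)
                        GUEST_SYNC(0);
        }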
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 29 +++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index aa104946e6e0..0bd502d3055c 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -325,6 +325,34 @@ static void test_not_member_allow_list(struct kvm_vm *vm)
 	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
 }

+/*
+ * Verify that disabling PMU using KVM_CAP_ENABLE_PMU does not allow PMU.
+ *
+ * After every change to CAP_ENABLE_PMU, SET_CPUID2 is required to refresh
+ * KVM PMU state on existing VCPU.
+ */
+static void test_cap_enable_pmu(struct kvm_vm *vm)
+{
+	int r;
+	struct kvm_cpuid2 *cpuid2;
+	struct kvm_enable_cap cap = { .cap = KVM_CAP_ENABLE_PMU };
+	bool sane;
+
+	r = kvm_check_cap(KVM_CAP_ENABLE_PMU);
+	if (!r)
+		return;
+
+	cpuid2 = vcpu_get_cpuid(vm, VCPU_ID);
+
+	cap.args[0] = 0;
+	r = vm_enable_cap(vm, &cap);
+	vcpu_set_cpuid(vm, VCPU_ID, cpuid2);
+
+	sane = sanity_check_pmu(vm);
+
+	TEST_ASSERT(!sane, "Guest should not see PMU when disabled.");
+}
+
 /*
  * Check for a non-zero PMU version, at least one general-purpose
  * counter per logical processor, an EBX bit vector of length greater
@@ -431,6 +459,7 @@ int main(int argc, char *argv[])
 	test_member_allow_list(vm);
 	test_not_member_deny_list(vm);
 	test_not_member_allow_list(vm);
+	test_cap_enable_pmu(vm);

 	kvm_vm_free(vm);