From patchwork Tue Mar 7 14:13:56 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13163692
Date: Tue, 7 Mar 2023 14:13:56 +0000
In-Reply-To: <20230307141400.1486314-1-aaronlewis@google.com>
Message-ID: <20230307141400.1486314-2-aaronlewis@google.com>
Subject: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

When counting "Instructions Retired" (0xc0) in a guest, KVM will
occasionally increment the PMU counter regardless of whether that event
is being filtered. This is because some PMU events are incremented via
kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
the event filter to kvm_pmu_trigger_event(), so events that are
disallowed do not increment their counters.

Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
Signed-off-by: Aaron Lewis
Reviewed-by: Like Xu
---
 arch/x86/kvm/pmu.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 612e6c70ce2e..9914a9027c60 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
+static bool event_is_allowed(struct kvm_pmc *pmc)
+{
+	return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
+	       check_pmu_event_filter(pmc);
+}
+
 static void reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 	pmc_pause_counter(pmc);
 
-	if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
-		goto reprogram_complete;
-
-	if (!check_pmu_event_filter(pmc))
+	if (!event_is_allowed(pmc))
 		goto reprogram_complete;
 
 	if (pmc->counter < pmc->prev_counter)
@@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
 		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
 
-		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
+		if (!pmc || !event_is_allowed(pmc))
			continue;
 
 		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
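
[Editor's note] The "disallowed" case above is controlled by the PMU event filter that
userspace installs with the KVM_SET_PMU_EVENT_FILTER ioctl; the pmu_event_filter_test
exercised later in this series builds such filters itself. Below is a minimal,
illustrative sketch of how a VMM might deny "Instructions Retired" (0xc0) for a VM. It
assumes the kvm_pmu_event_filter layout from the KVM uapi headers and the raw
select/umask encoding used by the selftest's EVENT() macro; the helper name and error
handling are invented for the example and are not part of this patch.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Raw event: select 0xc0, unit mask 0 ("Instructions Retired"). */
#define INST_RETIRED_EVENT	0xc0ULL

/* Deny-list a single raw event on a VM; vm_fd is an open KVM VM fd. */
static int deny_inst_retired(int vm_fd)
{
	struct kvm_pmu_event_filter *filter;
	size_t sz = sizeof(*filter) + sizeof(__u64);
	int ret;

	filter = calloc(1, sz);
	if (!filter)
		return -1;

	filter->action = KVM_PMU_EVENT_DENY;	/* deny everything in events[] */
	filter->nevents = 1;
	filter->events[0] = INST_RETIRED_EVENT;

	ret = ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, filter);
	free(filter);
	return ret;
}

With this patch applied, a counter programmed for 0xc0 under such a deny filter should
stay at zero even when KVM bumps counters from its instruction-emulation paths.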
From patchwork Tue Mar 7 14:13:57 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13163691
Date: Tue, 7 Mar 2023 14:13:57 +0000
In-Reply-To: <20230307141400.1486314-1-aaronlewis@google.com>
Message-ID: <20230307141400.1486314-3-aaronlewis@google.com>
Subject: [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Split out the common parts of the Intel and AMD guest code into a
helper function. This is in preparation for adding additional counters
to the test.

No functional changes intended.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 31 ++++++++++++-------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index bad7ef8c5b92..f33079fc552b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -100,6 +100,17 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 	GUEST_SYNC(0);
 }
 
+static uint64_t test_guest(uint32_t msr_base)
+{
+	uint64_t br0, br1;
+
+	br0 = rdmsr(msr_base + 0);
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	br1 = rdmsr(msr_base + 0);
+
+	return br1 - br0;
+}
+
 static void intel_guest_code(void)
 {
 	check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
@@ -108,16 +119,15 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
-		br0 = rdmsr(MSR_IA32_PMC0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_IA32_PMC0);
-		GUEST_SYNC(br1 - br0);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+
+		count = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(count);
 	}
 }
 
@@ -133,15 +143,14 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
-		br0 = rdmsr(MSR_K7_PERFCTR0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_K7_PERFCTR0);
-		GUEST_SYNC(br1 - br0);
+
+		count = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(count);
 	}
 }
From patchwork Tue Mar 7 14:13:58 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13163695
Date: Tue, 7 Mar 2023 14:13:58 +0000
In-Reply-To: <20230307141400.1486314-1-aaronlewis@google.com>
Message-ID: <20230307141400.1486314-4-aaronlewis@google.com>
Subject: [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Add the helpers ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING to
consolidate the asserts in one place. This will make it easier to add
additional asserts related to counting later on.

No functional changes intended.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 70 ++++++++++---------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index f33079fc552b..8277b8f49dca 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -250,14 +250,27 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
+#define ASSERT_PMC_COUNTING(count)						\
+do {										\
+	if (count != NUM_BRANCHES)						\
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
+			__func__, count, NUM_BRANCHES);				\
+	TEST_ASSERT(count, "Allowed PMU event is not counting.");		\
+} while (0)
+
+#define ASSERT_PMC_NOT_COUNTING(count)						\
+do {										\
+	if (count)								\
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",	\
+			__func__, count);					\
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");		\
+} while (0)
+
 static void test_without_filter(struct kvm_vcpu *vcpu)
 {
-	uint64_t count = run_vcpu_to_sync(vcpu);
+	uint64_t c = run_vcpu_to_sync(vcpu);
 
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+	ASSERT_PMC_COUNTING(c);
 }
 
 static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
@@ -271,70 +284,59 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
 	uint64_t event = EVENT(0x1C2, 0);
 	struct kvm_pmu_event_filter *f;
-	uint64_t count;
+	uint64_t c;
 
 	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
-	count = test_with_filter(vcpu, f);
-
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 /*
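
[Editor's note] The do { ... } while (0) wrapper in the two new macros is what lets a
multi-statement assert helper be used like an ordinary function call, including inside
an unbraced if/else. A generic illustration (not code from this series) follows.

#include <assert.h>
#include <stdio.h>

/*
 * A multi-statement check macro in the same shape as ASSERT_PMC_COUNTING:
 * the do/while(0) wrapper makes the expansion a single statement, so the
 * trailing semicolon and any surrounding if/else behave as expected.
 */
#define CHECK_COUNT(count, expected)					\
do {									\
	if ((count) != (expected))					\
		printf("count = %d (expected %d)\n", (count), (expected)); \
	assert((count) > 0);						\
} while (0)

int main(void)
{
	int count = 3;

	if (count)
		CHECK_COUNT(count, 5);	/* expands safely as one statement */
	else
		printf("nothing counted\n");

	return 0;
}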
From patchwork Tue Mar 7 14:13:59 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13163694
Date: Tue, 7 Mar 2023 14:13:59 +0000
In-Reply-To: <20230307141400.1486314-1-aaronlewis@google.com>
Message-ID: <20230307141400.1486314-5-aaronlewis@google.com>
Subject: [PATCH v3 4/5] KVM: selftests: Fixup test asserts
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Fix up both ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING in the
pmu_event_filter_test by adding additional context to the assert
messages. With the added context, the print in ASSERT_PMC_NOT_COUNTING
is redundant. Remove it.

Signed-off-by: Aaron Lewis
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 8277b8f49dca..78bb48fcd33e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -252,18 +252,17 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 
 #define ASSERT_PMC_COUNTING(count)						\
 do {										\
-	if (count != NUM_BRANCHES)						\
+	if (count && count != NUM_BRANCHES)					\
 		pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
 			__func__, count, NUM_BRANCHES);				\
-	TEST_ASSERT(count, "Allowed PMU event is not counting.");		\
+	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)", \
+		    __func__, count);						\
 } while (0)
 
 #define ASSERT_PMC_NOT_COUNTING(count)						\
 do {										\
-	if (count)								\
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",	\
-			__func__, count);					\
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");		\
+	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)", \
+		    __func__, count);						\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)
From patchwork Tue Mar 7 14:14:00 2023
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 13163693
Date: Tue, 7 Mar 2023 14:14:00 +0000
In-Reply-To: <20230307141400.1486314-1-aaronlewis@google.com>
Message-ID: <20230307141400.1486314-6-aaronlewis@google.com>
Subject: [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired"
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, like.xu.linux@gmail.com, Aaron Lewis
X-Mailing-List: kvm@vger.kernel.org

Add testing for the event "Instructions retired" (0xc0) in the PMU
event filter on both Intel and AMD to ensure that the event doesn't
count when it is disallowed.

Unlike most other events, "Instructions retired" will be incremented by
KVM when an instruction is emulated. Test that this case is being
handled properly and that KVM doesn't increment the counter when that
event is disallowed.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 80 ++++++++++++++-----
 1 file changed, 62 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 78bb48fcd33e..9e932b99d4fa 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -54,6 +54,21 @@
 
 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
 
+
+/*
+ * "Retired instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ *			--- and ---
+ * "Instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INST_RETIRED EVENT(0xc0, 0)
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
@@ -61,7 +76,7 @@
  */
 static const uint64_t event_list[] = {
 	EVENT(0x3c, 0),
-	EVENT(0xc0, 0),
+	INST_RETIRED,
 	EVENT(0x3c, 1),
 	EVENT(0x2e, 0x4f),
 	EVENT(0x2e, 0x41),
@@ -71,6 +86,16 @@ static const uint64_t event_list[] = {
 	AMD_ZEN_BR_RETIRED,
 };
 
+struct perf_results {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t br_count:32;
+			uint64_t ir_count:32;
+		};
+	};
+};
+
 /*
  * If we encounter a #GP during the guest PMU sanity check, then the guest
  * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
@@ -102,13 +127,20 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 
 static uint64_t test_guest(uint32_t msr_base)
 {
+	struct perf_results r;
 	uint64_t br0, br1;
+	uint64_t ir0, ir1;
 
 	br0 = rdmsr(msr_base + 0);
+	ir0 = rdmsr(msr_base + 1);
 	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
 	br1 = rdmsr(msr_base + 0);
+	ir1 = rdmsr(msr_base + 1);
 
-	return br1 - br0;
+	r.br_count = br1 - br0;
+	r.ir_count = ir1 - ir0;
+
+	return r.raw;
 }
 
 static void intel_guest_code(void)
@@ -119,15 +151,17 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+		wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);
 
-		count = test_guest(MSR_IA32_PMC0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -143,14 +177,16 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
 
-		count = test_guest(MSR_K7_PERFCTR0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -250,19 +286,25 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
-#define ASSERT_PMC_COUNTING(count)						\
+#define ASSERT_PMC_COUNTING(counts)						\
 do {										\
-	if (count && count != NUM_BRANCHES)					\
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
-			__func__, count, NUM_BRANCHES);				\
-	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)", \
-		    __func__, count);						\
+	struct perf_results r = {.raw = counts};				\
+	if (r.br_count && r.br_count != NUM_BRANCHES)				\
+		pr_info("%s: Branch instructions retired = %u (expected %u)\n",	\
+			__func__, r.br_count, NUM_BRANCHES);			\
+	TEST_ASSERT(r.br_count, "%s: Branch instructions retired = %u (expected > 0)", \
+		    __func__, r.br_count);					\
+	TEST_ASSERT(r.ir_count, "%s: Instructions retired = %u (expected > 0)",	\
+		    __func__, r.ir_count);					\
 } while (0)
 
-#define ASSERT_PMC_NOT_COUNTING(count)						\
+#define ASSERT_PMC_NOT_COUNTING(counts)						\
 do {										\
-	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)", \
-		    __func__, count);						\
+	struct perf_results r = {.raw = counts};				\
+	TEST_ASSERT(!r.br_count, "%s: Branch instructions retired = %u (expected 0)", \
+		    __func__, r.br_count);					\
+	TEST_ASSERT(!r.ir_count, "%s: Instructions retired = %u (expected 0)",	\
+		    __func__, r.ir_count);					\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)
@@ -317,6 +359,7 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);
@@ -330,6 +373,7 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);