From patchwork Sat Dec  2 00:04:08 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13476667
Reply-To: Sean Christopherson
Date: Fri,  1 Dec 2023 16:04:08 -0800
In-Reply-To: <20231202000417.922113-1-seanjc@google.com>
References: <20231202000417.922113-1-seanjc@google.com>
Message-ID: <20231202000417.922113-20-seanjc@google.com>
Subject: [PATCH v9 19/28] KVM: selftests: Add functional test for Intel's
 fixed PMU counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang,
    Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Extend the fixed counters test to verify that supported counters can
actually be enabled in the control MSRs, that unsupported counters cannot,
and that enabled counters actually count.
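
For context (not part of the patch itself), the enable bits being exercised
follow the architectural layout: IA32_FIXED_CTR_CTRL carries a 4-bit control
field per fixed counter, and IA32_PERF_GLOBAL_CTRL enables fixed counter i
via bit 32+i. A minimal, illustrative sketch of helpers along those lines is
below; the real FIXED_PMC_CTRL() and FIXED_PMC_GLOBAL_CTRL_ENABLE()
definitions live in the selftests' PMU headers, so treat these as an
assumption rather than the patch's actual code:

  /* Illustrative only; assumes the SDM bit layout described above. */
  #include <stdint.h>

  #define BIT_ULL(n)	(1ULL << (n))

  /* IA32_FIXED_CTR_CTRL: one 4-bit control field per fixed counter. */
  #define FIXED_PMC_KERNEL	BIT_ULL(0)	/* count at CPL0 */
  #define FIXED_PMC_CTRL(idx, val)	((uint64_t)(val) << ((idx) * 4))

  /* IA32_PERF_GLOBAL_CTRL: fixed counter 'idx' enable is bit 32 + idx. */
  #define FIXED_PMC_GLOBAL_CTRL_ENABLE(idx)	BIT_ULL(32 + (idx))

E.g. enabling fixed counter 1 for kernel-mode counting writes
FIXED_PMC_CTRL(1, FIXED_PMC_KERNEL) == 0x10 to FIXED_CTR_CTRL and
FIXED_PMC_GLOBAL_CTRL_ENABLE(1) == BIT_ULL(33) to PERF_GLOBAL_CTRL, which is
the pattern the loop added by this patch relies on.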
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
[sean: fold into the rd/wr access test, massage changelog]
Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c  | 31 ++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index b07294af71a3..f5dedd112471 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -332,7 +332,6 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
 		vector = wrmsr_safe(msr, 0);
 		GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
 	}
-	GUEST_DONE();
 }
 
 static void guest_test_gp_counters(void)
@@ -350,6 +349,7 @@ static void guest_test_gp_counters(void)
 		base_msr = MSR_IA32_PERFCTR0;
 
 	guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters, 0);
+	GUEST_DONE();
 }
 
 static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
@@ -373,6 +373,7 @@ static void guest_test_fixed_counters(void)
 {
 	uint64_t supported_bitmask = 0;
 	uint8_t nr_fixed_counters = 0;
+	uint8_t i;
 
 	/* Fixed counters require Architectural vPMU Version 2+. */
 	if (guest_get_pmu_version() >= 2)
@@ -387,6 +388,34 @@ static void guest_test_fixed_counters(void)
 
 	guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS,
 			     nr_fixed_counters, supported_bitmask);
+
+	for (i = 0; i < MAX_NR_FIXED_COUNTERS; i++) {
+		uint8_t vector;
+		uint64_t val;
+
+		if (i >= nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) {
+			vector = wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL,
+					    FIXED_PMC_CTRL(i, FIXED_PMC_KERNEL));
+			__GUEST_ASSERT(vector == GP_VECTOR,
+				       "Expected #GP for counter %u in FIXED_CTR_CTRL", i);
+
+			vector = wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL,
+					    FIXED_PMC_GLOBAL_CTRL_ENABLE(i));
+			__GUEST_ASSERT(vector == GP_VECTOR,
+				       "Expected #GP for counter %u in PERF_GLOBAL_CTRL", i);
+			continue;
+		}
+
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, FIXED_PMC_CTRL(i, FIXED_PMC_KERNEL));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, FIXED_PMC_GLOBAL_CTRL_ENABLE(i));
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		val = rdmsr(MSR_CORE_PERF_FIXED_CTR0 + i);
+
+		GUEST_ASSERT_NE(val, 0);
+	}
+	GUEST_DONE();
 }
 
 static void test_fixed_counters(uint8_t pmu_version, uint64_t perf_capabilities,