From patchwork Thu Mar 23 07:27:08 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13185184
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jinrong Liang, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 1/7] KVM: selftests: Test Intel PMU architectural events on gp counters
Date: Thu, 23 Mar 2023 15:27:08 +0800
Message-Id: <20230323072714.82289-2-likexu@tencent.com>
In-Reply-To: <20230323072714.82289-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

Add test cases to check whether architectural events remain available
after they are marked as unavailable via CPUID. This covers the vPMU
event filtering logic based on Intel CPUID, complementing the
pmu_event_filter test.
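As a side note, the availability rule the series tests can be expressed as a small host-side predicate. The sketch below is illustrative only (not code from the patch); the `eax`/`ebx` values it takes are stand-ins for real CPUID.0AH output.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Per the Intel SDM, architectural event 'idx' is available iff
 * CPUID.0AH:EBX[idx] == 0 and CPUID.0AH:EAX[31:24] > idx.
 * The inputs here are illustrative, not read from real CPUID.
 */
static bool arch_event_is_available(uint32_t eax, uint32_t ebx, uint8_t idx)
{
	uint8_t evt_len = (eax >> 24) & 0xff; /* EAX[31:24]: event bit vector length */

	return !(ebx & (1u << idx)) && evt_len > idx;
}
```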
According to the Intel SDM, the number of architectural events is reported
through CPUID.0AH:EAX[31:24], and architectural event x is supported if
EBX[x]=0 && EAX[31:24]>x.

Co-developed-by: Jinrong Liang
Signed-off-by: Jinrong Liang
Signed-off-by: Like Xu
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/pmu_cpuid_test.c     | 202 ++++++++++++++++++
 2 files changed, 203 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 84a627c43795..8aa63081b3e6 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -78,6 +78,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_cpuid_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
new file mode 100644
index 000000000000..faab0a91e191
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
@@ -0,0 +1,202 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test the consistency of the PMU's CPUID and its features
+ *
+ * Copyright (C) 2023, Tencent, Inc.
+ *
+ * Check that the VM's PMU behaviour is consistent with the
+ * VM CPUID definition.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+
+#include "processor.h"
+
+/* Guest payload for any performance counter counting */
+#define NUM_BRANCHES 10
+
+#define EVENTSEL_OS BIT_ULL(17)
+#define EVENTSEL_EN BIT_ULL(22)
+#define PMU_CAP_FW_WRITES BIT_ULL(13)
+#define EVENTS_MASK GENMASK_ULL(7, 0)
+#define PMU_VERSION_MASK GENMASK_ULL(7, 0)
+#define GP_CTR_NUM_OFS_BIT 8
+#define GP_CTR_NUM_MASK GENMASK_ULL(15, GP_CTR_NUM_OFS_BIT)
+#define EVT_LEN_OFS_BIT 24
+#define EVT_LEN_MASK GENMASK_ULL(31, EVT_LEN_OFS_BIT)
+
+#define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8)
+
+/*
+ * Intel Pre-defined Architectural Performance Events. Note some events
+ * are skipped for testing due to difficulties in stable reproduction.
+ */
+static const uint64_t arch_events[] = {
+	[0] = ARCH_EVENT(0x3c, 0x0),
+	[1] = ARCH_EVENT(0xc0, 0x0),
+	[2] = ARCH_EVENT(0x3c, 0x1),
+	[3] = ARCH_EVENT(0x2e, 0x4f), /* LLC Reference */
+	[4] = ARCH_EVENT(0x2e, 0x41), /* LLC Misses */
+	[5] = ARCH_EVENT(0xc4, 0x0),
+	[6] = ARCH_EVENT(0xc5, 0x0), /* Branch Misses Retired */
+	[7] = ARCH_EVENT(0xa4, 0x1), /* Topdown Slots */
+};
+
+static struct kvm_vcpu *new_vcpu(void *guest_code)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vcpu);
+
+	return vcpu;
+}
+
+static void free_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_vm_free(vcpu->vm);
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, const char *msg,
+		     bool (*check_ucall)(struct ucall *uc, void *data),
+		     void *expect_args)
+{
+	struct ucall uc;
+
+	for (;;) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			TEST_ASSERT(check_ucall(&uc, expect_args), "%s", msg);
+			continue;
+		case UCALL_DONE:
+			break;
+		default:
+			TEST_ASSERT(false, "Unexpected exit: %s",
+				    exit_reason_str(vcpu->run->exit_reason));
+		}
+		break;
+	}
+}
+
+static bool first_uc_arg_non_zero(struct ucall *uc, void *data)
+{
+	return uc->args[1];
+}
+
+static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
+				       bool supported, uint32_t ctr_base_msr,
+				       uint64_t evt_code)
+{
+	uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
+	unsigned int i;
+
+	for (i = 0; i < max_gp_num; i++) {
+		wrmsr(ctr_base_msr + i, 0);
+		wrmsr(MSR_P6_EVNTSEL0 + i, EVENTSEL_OS | EVENTSEL_EN | evt_code);
+		if (version > 1)
+			wrmsr(global_msr, BIT_ULL(i));
+
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+		if (version > 1)
+			wrmsr(global_msr, 0);
+
+		GUEST_SYNC(supported == !!_rdpmc(i));
+	}
+
+	GUEST_DONE();
+}
+
+static void test_arch_events_setup(struct kvm_vcpu *vcpu, uint8_t evt_vector,
+				   uint8_t unavl_mask, uint8_t idx)
+{
+	struct kvm_cpuid_entry2 *entry;
+	uint32_t ctr_msr = MSR_IA32_PERFCTR0;
+	bool is_supported;
+
+	entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+	entry->eax = (entry->eax & ~EVT_LEN_MASK) |
+		     (evt_vector << EVT_LEN_OFS_BIT);
+	entry->ebx = (entry->ebx & ~EVENTS_MASK) | unavl_mask;
+	vcpu_set_cpuid(vcpu);
+
+	if (vcpu_get_msr(vcpu, MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+		ctr_msr = MSR_IA32_PMC0;
+
+	/* Arch event x is supported if EBX[x]=0 && EAX[31:24]>x */
+	is_supported = !(entry->ebx & BIT_ULL(idx)) &&
+		       (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
+
+	vcpu_args_set(vcpu, 5, entry->eax & PMU_VERSION_MASK,
+		      (entry->eax & GP_CTR_NUM_MASK) >> GP_CTR_NUM_OFS_BIT,
+		      is_supported, ctr_msr, arch_events[idx]);
+}
+
+static void intel_check_arch_event_is_unavl(uint8_t idx)
+{
+	const char *msg = "Unavailable arch event is counting.";
+	uint8_t eax_evt_vec, ebx_unavl_mask, i, j;
+	struct kvm_vcpu *vcpu;
+
+	/*
+	 * A brute force iteration of all combinations of values is likely to
+	 * exhaust the limit of the single-threaded thread fd nums, so it's
+	 * tested here by iterating through all valid values on a single bit.
+	 */
+	for (i = 0; i < ARRAY_SIZE(arch_events); i++) {
+		eax_evt_vec = BIT_ULL(i);
+		for (j = 0; j < ARRAY_SIZE(arch_events); j++) {
+			ebx_unavl_mask = BIT_ULL(j);
+
+			vcpu = new_vcpu(intel_guest_run_arch_event);
+			test_arch_events_setup(vcpu, eax_evt_vec,
+					       ebx_unavl_mask, idx);
+			run_vcpu(vcpu, msg, first_uc_arg_non_zero, NULL);
+			free_vcpu(vcpu);
+		}
+	}
+}
+
+static void intel_test_arch_events(void)
+{
+	uint8_t idx;
+
+	for (idx = 0; idx < ARRAY_SIZE(arch_events); idx++) {
+		/*
+		 * Given the stability of performance event recurrence,
+		 * only these arch events are currently being tested:
+		 * - Core cycle event (idx = 0)
+		 * - Instruction retired event (idx = 1)
+		 * - Reference cycles event (idx = 2)
+		 * - Branch instruction retired event (idx = 5)
+		 */
+		if (idx > 2 && idx != 5)
+			continue;
+
+		intel_check_arch_event_is_unavl(idx);
+	}
+}
+
+static void intel_test_pmu_cpuid(void)
+{
+	intel_test_arch_events();
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+
+	if (host_cpu_is_intel) {
+		TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
+		TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0);
+		TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
+
+		intel_test_pmu_cpuid();
+	}
+
+	return 0;
+}

From patchwork Thu Mar 23 07:27:09 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13185186
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jinrong Liang, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 2/7] KVM: selftests: Test Intel PMU architectural events on fixed counters
Date: Thu, 23 Mar 2023 15:27:09 +0800
Message-Id: <20230323072714.82289-3-likexu@tencent.com>
In-Reply-To: <20230323072714.82289-1-likexu@tencent.com>

From: Jinrong Liang

Update the test to cover Intel PMU architectural events on fixed counters.
Per the Intel SDM, PMU users can also count architectural performance
events on fixed counters (specifically, FIXED_CTR0 for the retired
instructions event and FIXED_CTR1 for the CPU core cycles event).
Therefore, if the guest's CPUID indicates that an architectural event is
not available, the corresponding fixed counter must not count that event
either.
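The guest code added by this patch reads fixed counters via RDPMC with bit 30 of the counter index set (the fixed-counter selector) and the fixed counter number in the low bits, matching the `RDPMC_FIXED_BASE` constant the diff introduces. A tiny host-side sketch of that index encoding (illustrative only, not code from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * RDPMC index encoding for fixed-function counters: ECX bit 30 selects
 * the fixed-counter set; the low bits carry the fixed counter number.
 * Illustrative sketch mirroring the test's RDPMC_FIXED_BASE constant.
 */
#define RDPMC_FIXED_BASE (1u << 30)

static uint32_t rdpmc_fixed_index(uint8_t ctr)
{
	return RDPMC_FIXED_BASE | ctr;
}
```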
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
---
 .../selftests/kvm/x86_64/pmu_cpuid_test.c | 37 +++++++++++++++++--
 1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
index faab0a91e191..75434aa2a0ec 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
@@ -25,6 +25,9 @@
 #define GP_CTR_NUM_MASK GENMASK_ULL(15, GP_CTR_NUM_OFS_BIT)
 #define EVT_LEN_OFS_BIT 24
 #define EVT_LEN_MASK GENMASK_ULL(31, EVT_LEN_OFS_BIT)
+#define INTEL_PMC_IDX_FIXED 32
+#define RDPMC_FIXED_BASE BIT_ULL(30)
+#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0)
 
 #define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8)
 
@@ -43,6 +46,14 @@ static const uint64_t arch_events[] = {
 	[7] = ARCH_EVENT(0xa4, 0x1), /* Topdown Slots */
 };
 
+/* Association of Fixed Counters with Architectural Performance Events */
+static int fixed_events[] = {1, 0, 7};
+
+static uint64_t evt_code_for_fixed_ctr(uint8_t idx)
+{
+	return arch_events[fixed_events[idx]];
+}
+
 static struct kvm_vcpu *new_vcpu(void *guest_code)
 {
 	struct kvm_vm *vm;
@@ -88,8 +99,8 @@ static bool first_uc_arg_non_zero(struct ucall *uc, void *data)
 }
 
 static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
-				       bool supported, uint32_t ctr_base_msr,
-				       uint64_t evt_code)
+				       uint8_t max_fixed_num, bool supported,
+				       uint32_t ctr_base_msr, uint64_t evt_code)
 {
 	uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
 	unsigned int i;
@@ -108,6 +119,23 @@ static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
 		GUEST_SYNC(supported == !!_rdpmc(i));
 	}
 
+	/* No need to test independent arch events on fixed counters. */
+	if (version > 1 && max_fixed_num > 1 &&
+	    (evt_code == evt_code_for_fixed_ctr(0) ||
+	     evt_code == evt_code_for_fixed_ctr(1))) {
+		i = (evt_code == evt_code_for_fixed_ctr(0)) ? 0 : 1;
+
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i));
+		wrmsr(global_msr, BIT_ULL(INTEL_PMC_IDX_FIXED + i));
+
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+		wrmsr(global_msr, 0);
+
+		GUEST_SYNC(supported == !!_rdpmc(RDPMC_FIXED_BASE | i));
+	}
+
 	GUEST_DONE();
 }
 
@@ -131,9 +159,10 @@ static void test_arch_events_setup(struct kvm_vcpu *vcpu, uint8_t evt_vector,
 	is_supported = !(entry->ebx & BIT_ULL(idx)) &&
 		       (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
 
-	vcpu_args_set(vcpu, 5, entry->eax & PMU_VERSION_MASK,
+	vcpu_args_set(vcpu, 6, entry->eax & PMU_VERSION_MASK,
 		      (entry->eax & GP_CTR_NUM_MASK) >> GP_CTR_NUM_OFS_BIT,
-		      is_supported, ctr_msr, arch_events[idx]);
+		      (entry->edx & FIXED_CTR_NUM_MASK), is_supported,
+		      ctr_msr, arch_events[idx]);
 }
 
 static void intel_check_arch_event_is_unavl(uint8_t idx)

From patchwork Thu Mar 23 07:27:10 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13185185
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jinrong Liang, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 3/7] KVM: selftests: Test consistency of CPUID with num of GP counters
Date: Thu, 23 Mar 2023 15:27:10 +0800
Message-Id: <20230323072714.82289-4-likexu@tencent.com>
In-Reply-To: <20230323072714.82289-1-likexu@tencent.com>

From: Like Xu

Add a test to check whether non-existent counters can be accessed in the
guest after the number of Intel generic performance counters has been
determined via CPUID. When the number of counters is less than 3, KVM
does not emulate #GP for a non-present counter, due to its compatibility
handling of MSR_P6_PERFCTRx. Nor does KVM emulate more counters than it
can support.

Co-developed-by: Jinrong Liang
Signed-off-by: Jinrong Liang
Signed-off-by: Like Xu
---
 .../selftests/kvm/x86_64/pmu_cpuid_test.c | 102 ++++++++++++++++++
 1 file changed, 102 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
index 75434aa2a0ec..50902187d2c9 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c
@@ -49,11 +49,31 @@ static const uint64_t arch_events[] = {
 /* Association of Fixed Counters with Architectural Performance Events */
 static int fixed_events[] = {1, 0, 7};
 
+static const uint64_t perf_caps[] = {
+	0,
+	PMU_CAP_FW_WRITES,
+};
+
+/*
+ * KVM implements the first two non-existent counters (MSR_P6_PERFCTRx)
+ * via kvm_pr_unimpl_wrmsr() instead of #GP. It is acceptable here to test
+ * the third counter as there are usually more than 3 available gp counters.
+ */
+#define MSR_INTEL_ARCH_PMU_GPCTR (MSR_IA32_PERFCTR0 + 2)
+
 static uint64_t evt_code_for_fixed_ctr(uint8_t idx)
 {
 	return arch_events[fixed_events[idx]];
 }
 
+static uint8_t kvm_gp_ctrs_num(void)
+{
+	const struct kvm_cpuid_entry2 *kvm_entry;
+
+	kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0);
+	return (kvm_entry->eax & GP_CTR_NUM_MASK) >> GP_CTR_NUM_OFS_BIT;
+}
+
 static struct kvm_vcpu *new_vcpu(void *guest_code)
 {
 	struct kvm_vm *vm;
@@ -98,6 +118,30 @@ static bool first_uc_arg_non_zero(struct ucall *uc, void *data)
 	return uc->args[1];
 }
 
+static bool first_uc_arg_equals(struct ucall *uc, void *data)
+{
+	return uc->args[1] == (uint64_t)data;
+}
+
+static void guest_gp_handler(struct ex_regs *regs)
+{
+	GUEST_SYNC(GP_VECTOR);
+	GUEST_DONE();
+}
+
+static void guest_wr_and_rd_msrs(uint32_t base, uint64_t value,
+				 uint8_t begin, uint8_t offset)
+{
+	unsigned int i;
+
+	for (i = begin; i < begin + offset; i++) {
+		wrmsr(base + i, value);
+		GUEST_SYNC(rdmsr(base + i));
+	}
+
+	GUEST_DONE();
+}
+
 static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
 				       uint8_t max_fixed_num, bool supported,
 				       uint32_t ctr_base_msr, uint64_t evt_code)
@@ -165,6 +209,27 @@ static void test_arch_events_setup(struct kvm_vcpu *vcpu, uint8_t evt_vector,
 		      ctr_msr, arch_events[idx]);
 }
 
+static void test_oob_gp_counter_setup(struct kvm_vcpu *vcpu, uint8_t eax_gp_num,
+				      uint64_t perf_cap)
+{
+	struct kvm_cpuid_entry2 *entry;
+	uint32_t ctr_msr = MSR_IA32_PERFCTR0;
+
+	entry = vcpu_get_cpuid_entry(vcpu, 0xa);
+	entry->eax = (entry->eax & ~GP_CTR_NUM_MASK) |
+		     (eax_gp_num << GP_CTR_NUM_OFS_BIT);
+	vcpu_set_cpuid(vcpu);
+
+	if (perf_cap & PMU_CAP_FW_WRITES)
+		ctr_msr = MSR_IA32_PMC0;
+
+	vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap);
+	vcpu_args_set(vcpu, 4, ctr_msr, 0xffff,
+		      min(eax_gp_num, kvm_gp_ctrs_num()), 1);
+
+	vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler);
+}
+
 static void intel_check_arch_event_is_unavl(uint8_t idx)
 {
 	const char *msg = "Unavailable arch event is counting.";
@@ -190,6 +255,42 @@ static void intel_check_arch_event_is_unavl(uint8_t idx)
 	}
 }
 
+/* Access the first out-of-range counter register to trigger #GP */
+static void test_oob_gp_counter(uint8_t eax_gp_num, uint64_t perf_cap)
+{
+	const char *msg = "At least one unsupported GP counter is visible.";
+	struct kvm_vcpu *vcpu;
+
+	vcpu = new_vcpu(guest_wr_and_rd_msrs);
+	test_oob_gp_counter_setup(vcpu, eax_gp_num, perf_cap);
+	run_vcpu(vcpu, msg, first_uc_arg_equals, (void *)GP_VECTOR);
+	free_vcpu(vcpu);
+}
+
+static void intel_test_counters_num(void)
+{
+	uint8_t kvm_gp_num = kvm_gp_ctrs_num();
+	unsigned int i;
+
+	TEST_REQUIRE(kvm_gp_num > 2);
+
+	for (i = 0; i < ARRAY_SIZE(perf_caps); i++) {
+		/*
+		 * For compatibility reasons, KVM does not emulate #GP
+		 * when MSR_P6_PERFCTR[0|1] is not present, but it doesn't
+		 * affect checking the presence of MSR_IA32_PMCx with #GP.
+		 */
+		if (perf_caps[i] & PMU_CAP_FW_WRITES)
+			test_oob_gp_counter(0, perf_caps[i]);
+
+		test_oob_gp_counter(2, perf_caps[i]);
+		test_oob_gp_counter(kvm_gp_num, perf_caps[i]);
+
+		/* KVM doesn't emulate more counters than it can support. */
+		test_oob_gp_counter(kvm_gp_num + 1, perf_caps[i]);
+	}
+}
+
 static void intel_test_arch_events(void)
 {
 	uint8_t idx;
@@ -213,6 +314,7 @@ static void intel_test_arch_events(void)
 static void intel_test_pmu_cpuid(void)
 {
 	intel_test_arch_events();
+	intel_test_counters_num();
 }
 
 int main(int argc, char *argv[])

From patchwork Thu Mar 23 07:27:11 2023
vvPVBeeXzZYvi4PzQ8Gk+AY0vLvmaFC6u9OKh0uxLhv1fFBYuWPK+48PjQbg3ZQNNfFM Clhel+c7MrXC8/2WuYki07mupo/Brk2FnKBMBDlsbm8TwJybtSbrbytcuNSXOVUDtH+C 9sGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679556463; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SbRs12vGNPK0rZyAMvZ0uDLE67A/AExXOEyuwHCIBHM=; b=7OJGMbTrU+QBtCpq3n0piHb2X7wL2OHG1NO4uDEIjMwJ4vkZI9e7KnU8HBrIaDyvjA ugxH8ECJMSf4sVOQ8bATUaZiaLquxHrgqLvRCxJzasYAs8SAF+LVUhJy7MM5cKy29i4D kmJjOrUBBalGV4DxHL3HOCIG/krQIhmOggOCjLZjmzX7diCl4X2Yq890a3EPWYR3A6IK IHOCes7UXJJDBDtCUce8jElOVUbUITry4P/lA6lyUW2eTuh+P4ZjApd4MCL9e4zmLQfx KAarKKnkp54FX2nD3ua0WmonLBb9W2oj8Q/v91AipdC+mm9+Yz3U1OYBT/JY5c94iaG0 0xSA== X-Gm-Message-State: AO0yUKWjMylTgoW9LDCkZM/n/KAsE35sue92sexvF9TNhl92OKpppa9F Qr64fvLX1N88eGGla30OKPk= X-Google-Smtp-Source: AK7set81bB0bNMGMzmx7y/AuepQ7SYuGpKAmF11R/peJbh7nQPYjmmHLUe1wipxPsKfCvEfqQaYbSg== X-Received: by 2002:a17:902:f691:b0:1a1:be45:9857 with SMTP id l17-20020a170902f69100b001a1be459857mr6896666plg.1.1679556463620; Thu, 23 Mar 2023 00:27:43 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id 13-20020a170902c24d00b0017a032d7ae4sm11645447plg.104.2023.03.23.00.27.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Mar 2023 00:27:43 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jinrong Liang , linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH 4/7] KVM: selftests: Test consistency of CPUID with num of Fixed counters Date: Thu, 23 Mar 2023 15:27:11 +0800 Message-Id: <20230323072714.82289-5-likexu@tencent.com> X-Mailer: git-send-email 2.40.0 In-Reply-To: <20230323072714.82289-1-likexu@tencent.com> References: <20230323072714.82289-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: 
kvm@vger.kernel.org From: Jinrong Liang Add test to check if non-existent counters can be accessed in guest after determining the number of Intel generic performance counters by CPUID. Per SDM, fixed-function performance counter 'i' is supported if ECX[i] || (EDX[4:0] > i). KVM doesn't emulate more counters than it can support. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../selftests/kvm/x86_64/pmu_cpuid_test.c | 68 +++++++++++++++++++ 1 file changed, 68 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c index 50902187d2c9..c934144be287 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c @@ -74,6 +74,22 @@ static uint8_t kvm_gp_ctrs_num(void) return (kvm_entry->eax & GP_CTR_NUM_MASK) >> GP_CTR_NUM_OFS_BIT; } +static uint8_t kvm_fixed_ctrs_num(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return kvm_entry->edx & FIXED_CTR_NUM_MASK; +} + +static uint32_t kvm_fixed_ctrs_bitmask(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return kvm_entry->ecx; +} + static struct kvm_vcpu *new_vcpu(void *guest_code) { struct kvm_vm *vm; @@ -230,6 +246,39 @@ static void test_oob_gp_counter_setup(struct kvm_vcpu *vcpu, uint8_t eax_gp_num, vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); } +static uint64_t test_oob_fixed_counter_setup(struct kvm_vcpu *vcpu, + uint8_t edx_fix_num, + uint32_t fixed_bitmask) +{ + struct kvm_cpuid_entry2 *entry; + uint32_t ctr_msr = MSR_CORE_PERF_FIXED_CTR0; + uint8_t idx = edx_fix_num; + bool is_supported = true; + uint64_t ret = 0xffffULL; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + entry->ecx = fixed_bitmask; + entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num; + 
vcpu_set_cpuid(vcpu); + + /* Per Intel SDM, FixCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i). */ + is_supported = (entry->ecx & BIT_ULL(idx) || + ((entry->edx & FIXED_CTR_NUM_MASK) > idx)); + + /* KVM doesn't emulate more fixed counters than it can support. */ + if (idx >= kvm_fixed_ctrs_num()) + is_supported = false; + + if (!is_supported) { + vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); + ret = GP_VECTOR; + } + + vcpu_args_set(vcpu, 4, ctr_msr, ret, idx, 1); + + return ret; +} + static void intel_check_arch_event_is_unavl(uint8_t idx) { const char *msg = "Unavailable arch event is counting."; @@ -267,10 +316,23 @@ static void test_oob_gp_counter(uint8_t eax_gp_num, uint64_t perf_cap) free_vcpu(vcpu); } +static void intel_test_oob_fixed_ctr(uint8_t edx_fix_num, uint32_t fixed_bitmask) +{ + const char *msg = "At least one unsupported Fixed counter is visible."; + struct kvm_vcpu *vcpu; + uint64_t ret; + + vcpu = new_vcpu(guest_wr_and_rd_msrs); + ret = test_oob_fixed_counter_setup(vcpu, edx_fix_num, fixed_bitmask); + run_vcpu(vcpu, msg, first_uc_arg_equals, (void *)ret); + free_vcpu(vcpu); +} + static void intel_test_counters_num(void) { uint8_t kvm_gp_num = kvm_gp_ctrs_num(); unsigned int i; + uint32_t ecx; TEST_REQUIRE(kvm_gp_num > 2); @@ -289,6 +351,12 @@ static void intel_test_counters_num(void) /* KVM doesn't emulate more counters than it can support. 
*/ test_oob_gp_counter(kvm_gp_num + 1, perf_caps[i]); } + + for (ecx = 0; ecx <= kvm_fixed_ctrs_bitmask() + 1; ecx++) { + intel_test_oob_fixed_ctr(0, ecx); + intel_test_oob_fixed_ctr(kvm_fixed_ctrs_num(), ecx); + intel_test_oob_fixed_ctr(kvm_fixed_ctrs_num() + 1, ecx); + } } static void intel_test_arch_events(void) From patchwork Thu Mar 23 07:27:12 2023 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13185188
From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jinrong Liang , linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH 5/7] KVM: selftests: Test Intel supported fixed counters bit mask Date: Thu, 23 Mar 2023 15:27:12 +0800 Message-Id: <20230323072714.82289-6-likexu@tencent.com> In-Reply-To: <20230323072714.82289-1-likexu@tencent.com> X-Mailing-List:
kvm@vger.kernel.org From: Like Xu Add a test to check that fixed counters enabled via guest CPUID.0xA.ECX (instead of EDX[04:00]) work as expected. Signed-off-by: Like Xu --- .../selftests/kvm/x86_64/pmu_cpuid_test.c | 63 +++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c index c934144be287..79f2e144c6c6 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c @@ -199,6 +199,27 @@ static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num, GUEST_DONE(); } +static void intel_guest_run_fixed_counters(uint64_t supported_bitmask, + uint8_t max_fixed_num) +{ + unsigned int i; + + for (i = 0; i < max_fixed_num; i++) { + if (!(supported_bitmask & BIT_ULL(i))) + continue; + + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * i)); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(INTEL_PMC_IDX_FIXED + i)); + __asm__ __volatile__("loop ." 
: "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + + GUEST_SYNC(!!_rdpmc(RDPMC_FIXED_BASE | i)); + } + + GUEST_DONE(); +} + static void test_arch_events_setup(struct kvm_vcpu *vcpu, uint8_t evt_vector, uint8_t unavl_mask, uint8_t idx) { @@ -279,6 +300,47 @@ static uint64_t test_oob_fixed_counter_setup(struct kvm_vcpu *vcpu, return ret; } +static void test_fixed_counters_setup(struct kvm_vcpu *vcpu, uint8_t edx_fix_num, + uint32_t fixed_bitmask) +{ + struct kvm_cpuid_entry2 *entry; + uint8_t max_fixed_num = kvm_fixed_ctrs_num(); + uint64_t supported_bitmask = 0; + unsigned int i; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + entry->ecx = fixed_bitmask; + entry->edx = (entry->edx & ~FIXED_CTR_NUM_MASK) | edx_fix_num; + vcpu_set_cpuid(vcpu); + + for (i = 0; i < max_fixed_num; i++) { + if (entry->ecx & BIT_ULL(i) || + ((entry->edx & FIXED_CTR_NUM_MASK) > i)) + supported_bitmask |= BIT_ULL(i); + } + + vcpu_args_set(vcpu, 2, supported_bitmask, max_fixed_num); + vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); +} + +static void intel_test_fixed_counters(void) +{ + const char *msg = "At least one fixed counter is not working as expected"; + uint8_t edx, num = kvm_fixed_ctrs_num(); + struct kvm_vcpu *vcpu; + uint32_t ecx; + + for (edx = 0; edx <= num; edx++) { + /* KVM doesn't emulate more fixed counters than it can support. 
*/ + for (ecx = 0; ecx <= (BIT_ULL(num) - 1); ecx++) { + vcpu = new_vcpu(intel_guest_run_fixed_counters); + test_fixed_counters_setup(vcpu, edx, ecx); + run_vcpu(vcpu, msg, first_uc_arg_equals, (void *)true); + free_vcpu(vcpu); + } + } +} + static void intel_check_arch_event_is_unavl(uint8_t idx) { const char *msg = "Unavailable arch event is counting."; @@ -383,6 +445,7 @@ static void intel_test_pmu_cpuid(void) { intel_test_arch_events(); intel_test_counters_num(); + intel_test_fixed_counters(); } int main(int argc, char *argv[]) From patchwork Thu Mar 23 07:27:13 2023 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13185189
From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jinrong Liang , linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH 6/7] KVM: selftests: Test consistency of PMU MSRs with Intel PMU version Date: Thu, 23 Mar 2023 15:27:13 +0800 Message-Id: <20230323072714.82289-7-likexu@tencent.com> X-Mailer:
git-send-email 2.40.0 In-Reply-To: <20230323072714.82289-1-likexu@tencent.com> X-Mailing-List: kvm@vger.kernel.org From: Jinrong Liang KVM user space may control the Intel guest PMU version number via CPUID.0AH:EAX[07:00]. Add a test to check whether a typical PMU register that is not available at the current version number is leaked to the guest. Co-developed-by: Like Xu Signed-off-by: Like Xu Signed-off-by: Jinrong Liang --- .../selftests/kvm/x86_64/pmu_cpuid_test.c | 57 +++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c index 79f2e144c6c6..caf0d98079c7 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c @@ -17,6 +17,7 @@ #define NUM_BRANCHES 10 #define EVENTSEL_OS BIT_ULL(17) +#define EVENTSEL_ANY BIT_ULL(21) #define EVENTSEL_EN BIT_ULL(22) #define PMU_CAP_FW_WRITES BIT_ULL(13) #define EVENTS_MASK GENMASK_ULL(7, 0) @@ -90,6 +91,14 @@ static uint32_t kvm_fixed_ctrs_bitmask(void) return kvm_entry->ecx; } +static uint32_t kvm_max_pmu_version(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return kvm_entry->eax & PMU_VERSION_MASK; +} + static struct kvm_vcpu *new_vcpu(void *guest_code) { struct kvm_vm *vm; @@ -220,6 +229,25 @@ static void intel_guest_run_fixed_counters(uint64_t supported_bitmask, GUEST_DONE(); } +static void intel_guest_check_pmu_version(uint8_t version) +{ + switch (version) { + case 0: + wrmsr(MSR_INTEL_ARCH_PMU_GPCTR, 0xffffull); + case 1: + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1ull); + case 2: + /* AnyThread Bit is only supported in version 3 */ + wrmsr(MSR_P6_EVNTSEL0, EVENTSEL_ANY); + break; + default: + /* KVM currently supports up to pmu version 2 */ + GUEST_SYNC(GP_VECTOR); + } + + 
GUEST_DONE(); +} + static void test_arch_events_setup(struct kvm_vcpu *vcpu, uint8_t evt_vector, uint8_t unavl_mask, uint8_t idx) { @@ -341,6 +369,18 @@ static void intel_test_fixed_counters(void) } } +static void test_pmu_version_setup(struct kvm_vcpu *vcpu, uint8_t version) +{ + struct kvm_cpuid_entry2 *entry; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + entry->eax = (entry->eax & ~PMU_VERSION_MASK) | version; + vcpu_set_cpuid(vcpu); + + vcpu_args_set(vcpu, 1, version); + vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); +} + static void intel_check_arch_event_is_unavl(uint8_t idx) { const char *msg = "Unavailable arch event is counting."; @@ -441,11 +481,28 @@ static void intel_test_arch_events(void) } } +static void intel_test_pmu_version(void) +{ + const char *msg = "Something beyond this PMU version is leaked."; + uint8_t version, unsupported_version = kvm_max_pmu_version() + 1; + struct kvm_vcpu *vcpu; + + TEST_REQUIRE(kvm_gp_ctrs_num() > 2); + + for (version = 0; version <= unsupported_version; version++) { + vcpu = new_vcpu(intel_guest_check_pmu_version); + test_pmu_version_setup(vcpu, version); + run_vcpu(vcpu, msg, first_uc_arg_equals, (void *)GP_VECTOR); + free_vcpu(vcpu); + } +} + static void intel_test_pmu_cpuid(void) { intel_test_arch_events(); intel_test_counters_num(); intel_test_fixed_counters(); + intel_test_pmu_version(); } int main(int argc, char *argv[]) From patchwork Thu Mar 23 07:27:14 2023 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13185192
From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jinrong Liang , linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH 7/7] KVM: selftests: Test Intel counters' bit width emulation Date: Thu, 23 Mar 2023 15:27:14 +0800 Message-Id: <20230323072714.82289-8-likexu@tencent.com> In-Reply-To: <20230323072714.82289-1-likexu@tencent.com> X-Mailing-List: kvm@vger.kernel.org From: Like Xu Add tests to cover Intel counters' bit-width emulation. When the vCPU has the FW_WRITES capability bit, the bit width of the GP and fixed counters is specified by CPUID (no less than 32 bits and no greater than the host bit width), and writing bits outside that width generates #GP. Without FW_WRITES, only the low 32 bits of signed data take effect, and naturally no #GP is generated.
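The width-clamping rule described above can be sketched as a standalone helper (the function name is illustrative and not part of the patch; the selftest computes the same clamp inline in its setup path):

```c
#include <stdint.h>

/*
 * Sketch of the expected-width rule: with FW_WRITES the guest-visible
 * counter width is the CPUID-enumerated width, clamped to no less than
 * 32 bits and no more than the host (KVM-supported) width; without
 * FW_WRITES only the low 32 bits of a write take effect.
 * Assumes host_width >= 32, as the selftest requires.
 */
static uint8_t effective_ctr_width(int fw_writes, uint8_t cpuid_width,
				   uint8_t host_width)
{
	uint8_t width = fw_writes ? (cpuid_width < 32 ? 32 : cpuid_width) : 32;

	return width < host_width ? width : host_width;
}
```

A value written to a counter is then expected to read back masked to `effective_ctr_width()` bits.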
Co-developed-by: Jinrong Liang Signed-off-by: Jinrong Liang Signed-off-by: Like Xu --- .../selftests/kvm/x86_64/pmu_cpuid_test.c | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) diff --git a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c index caf0d98079c7..e7465b01178a 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_cpuid_test.c @@ -29,6 +29,10 @@ #define INTEL_PMC_IDX_FIXED 32 #define RDPMC_FIXED_BASE BIT_ULL(30) #define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0) +#define GP_WIDTH_OFS_BIT 16 +#define GP_WIDTH_MASK GENMASK_ULL(23, GP_WIDTH_OFS_BIT) +#define FIXED_WIDTH_OFS_BIT 5 +#define FIXED_WIDTH_MASK GENMASK_ULL(12, FIXED_WIDTH_OFS_BIT) #define ARCH_EVENT(select, umask) (((select) & 0xff) | ((umask) & 0xff) << 8) @@ -62,6 +66,16 @@ static const uint64_t perf_caps[] = { */ #define MSR_INTEL_ARCH_PMU_GPCTR (MSR_IA32_PERFCTR0 + 2) +static const uint32_t msr_bases[] = { + MSR_INTEL_ARCH_PMU_GPCTR, + MSR_IA32_PMC0, + MSR_CORE_PERF_FIXED_CTR0, +}; + +static const uint64_t bit_widths[] = { + 0, 1, 31, 32, 47, 48, 63, 64, +}; + static uint64_t evt_code_for_fixed_ctr(uint8_t idx) { return arch_events[fixed_events[idx]]; @@ -99,6 +113,22 @@ static uint32_t kvm_max_pmu_version(void) return kvm_entry->eax & PMU_VERSION_MASK; } +static uint32_t kvm_gp_ctr_bit_width(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return (kvm_entry->eax & GP_WIDTH_MASK) >> GP_WIDTH_OFS_BIT; +} + +static uint32_t kvm_fixed_ctr_bit_width(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return (kvm_entry->edx & FIXED_WIDTH_MASK) >> FIXED_WIDTH_OFS_BIT; +} + static struct kvm_vcpu *new_vcpu(void *guest_code) { struct kvm_vm *vm; @@ -381,6 +411,50 @@ static void test_pmu_version_setup(struct kvm_vcpu *vcpu, uint8_t version) 
vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); } +static uint64_t test_ctrs_bit_width_setup(struct kvm_vcpu *vcpu, + uint8_t bit_width, + uint64_t perf_cap, + uint32_t msr_base) +{ + struct kvm_cpuid_entry2 *entry; + bool fw_wr = perf_cap & PMU_CAP_FW_WRITES; + uint64_t kvm_width; + uint64_t value; + + entry = vcpu_get_cpuid_entry(vcpu, 0xa); + if (msr_base != MSR_CORE_PERF_FIXED_CTR0) { + kvm_width = kvm_gp_ctr_bit_width(); + entry->eax = (entry->eax & ~GP_WIDTH_MASK) | + (bit_width << GP_WIDTH_OFS_BIT); + } else { + kvm_width = kvm_fixed_ctr_bit_width(); + entry->edx = (entry->edx & ~FIXED_WIDTH_MASK) | + (bit_width << FIXED_WIDTH_OFS_BIT); + } + TEST_REQUIRE(kvm_width > 31); + + vcpu_set_cpuid(vcpu); + vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, perf_cap); + + /* No less than 32 bits and no greater than the host bitwidth */ + bit_width = fw_wr ? max_t(int, 32, bit_width) : 32; + bit_width = min_t(int, bit_width, kvm_width); + + /* Unconditionally set signed bit 31 for the case w/o FW_WRITES */ + value = BIT_ULL(bit_width) | 0x1234567ull | BIT_ULL(31); + vcpu_args_set(vcpu, 4, msr_base, value, 1, 1); + + if (fw_wr && msr_base != MSR_INTEL_ARCH_PMU_GPCTR) { + vm_install_exception_handler(vcpu->vm, GP_VECTOR, + guest_gp_handler); + return GP_VECTOR; + } else if (msr_base == MSR_INTEL_ARCH_PMU_GPCTR) { + value = (s32)(value & (BIT_ULL(32) - 1)); + } + + return value & (BIT_ULL(bit_width) - 1); +} + static void intel_check_arch_event_is_unavl(uint8_t idx) { const char *msg = "Unavailable arch event is counting."; @@ -497,12 +571,43 @@ static void intel_test_pmu_version(void) } } +static void vcpu_run_bit_width(uint8_t bit_width, uint64_t perf_cap, + uint32_t msr_base) +{ + const char *msg = "Fail to emulate counters' bit width."; + struct kvm_vcpu *vcpu; + uint64_t ret; + + vcpu = new_vcpu(guest_wr_and_rd_msrs); + ret = test_ctrs_bit_width_setup(vcpu, bit_width, perf_cap, msr_base); + run_vcpu(vcpu, msg, first_uc_arg_equals, (void *)ret); + 
free_vcpu(vcpu); +} + +static void intel_test_counters_bit_width(void) +{ + uint8_t i, j, k; + + for (i = 0; i < ARRAY_SIZE(perf_caps); i++) { + for (j = 0; j < ARRAY_SIZE(msr_bases); j++) { + if (!(perf_caps[i] & PMU_CAP_FW_WRITES) && + msr_bases[j] == MSR_IA32_PMC0) + continue; + + for (k = 0; k < ARRAY_SIZE(bit_widths); k++) + vcpu_run_bit_width(bit_widths[k], perf_caps[i], + msr_bases[j]); + } + } +} + static void intel_test_pmu_cpuid(void) { intel_test_arch_events(); intel_test_counters_num(); intel_test_fixed_counters(); intel_test_pmu_version(); + intel_test_counters_bit_width(); } int main(int argc, char *argv[])
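For reference, the Intel SDM support rule exercised by the fixed-counter tests in this series (counter i is supported if CPUID.0xA.ECX[i] || EDX[4:0] > i) can be sketched as a standalone predicate; the helper name is illustrative, not part of the patches:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Per the Intel SDM, fixed-function counter 'i' is supported if bit 'i'
 * is set in the CPUID.0xA.ECX supported-bitmask, or if the counter
 * count in EDX[4:0] is greater than 'i'.
 */
static bool fixed_ctr_supported(uint32_t ecx, uint32_t edx, unsigned int i)
{
	return (ecx & (1u << i)) || ((edx & 0x1f) > i);
}
```

KVM additionally caps this by the number of fixed counters it emulates, which is why the OOB test forces `is_supported` to false for indices at or beyond `kvm_fixed_ctrs_num()`.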