From patchwork Tue Jan 9 23:02:21 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515503
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:21 -0800
Message-ID: <20240109230250.424295-2-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 01/29] KVM: x86/pmu: Always treat Fixed counters as available when supported
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Treat fixed counters as available when they are supported, i.e.
don't silently ignore an enabled fixed counter just because guest CPUID
says the associated general purpose architectural event is unavailable.

KVM originally treated fixed counters as always available, but that got
changed as part of a fix to avoid confusing REF_CPU_CYCLES, which does
NOT map to an architectural event, with the actual architectural event
associated with bit 7, TOPDOWN_SLOTS.  The commit justified the change
with:

    If the event is marked as unavailable in the Intel guest CPUID
    0AH.EBX leaf, we need to avoid any perf_event creation, whether
    it's a gp or fixed counter.

but that justification doesn't mesh with reality.  The Intel SDM uses
"architectural events" to refer to both general purpose events (the ones
with the reverse polarity mask in CPUID.0xA.EBX) and the events for fixed
counters, e.g. the SDM makes statements like:

    Each of the fixed-function PMC can count only one architectural
    performance event.

but the fact that fixed counter 2 (TSC reference cycles) doesn't have an
associated general purpose architectural event makes trying to apply the
mask from CPUID.0xA.EBX impossible.

Furthermore, the lack of enumeration for an architectural event in CPUID
only means the CPU doesn't officially support the architectural encoding,
i.e. it doesn't mean using the architectural encoding _won't_ work, it
simply means there are no guarantees that it will work as expected.  E.g.
if KVM is running in a VM that advertises a fixed counter but not the
corresponding architectural event encoding, and perf decides to use a
general purpose counter instead of a fixed counter, odds are very good
that the underlying hardware actually does support the architectural
encoding, and that programming the encoding will count the right thing.

In other words, asking perf to count the event will probably work,
whereas intentionally doing nothing is obviously guaranteed to fail.

Note, at the time of the change, KVM didn't enforce hardware support,
i.e. didn't prevent userspace from enumerating support in guest
CPUID.0xA.EBX for architectural events that aren't supported in hardware.
I.e. silently dropping the fixed counter didn't somehow protect against
counting the wrong event, it just enforced guest CPUID.  And practically
speaking, this issue is almost certainly limited to running KVM on a
funky virtual CPU model.  No known real hardware has an asymmetric PMU
where a fixed counter is supported but the associated architectural event
is not.

Fixes: a21864486f7e ("KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index a6216c874729..8207f8c03585 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -108,11 +108,24 @@ static bool intel_hw_event_available(struct kvm_pmc *pmc)
 	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;

+	/*
+	 * Fixed counters are always available if KVM reaches this point.  If a
+	 * fixed counter is unsupported in hardware or guest CPUID, KVM doesn't
+	 * allow the counter's corresponding MSR to be written.  KVM does use
+	 * architectural events to program fixed counters, as the interface to
+	 * perf doesn't allow requesting a specific fixed counter, e.g. perf
+	 * may (sadly) back a guest fixed PMC with a general purposed counter.
+	 * But if _hardware_ doesn't support the associated event, KVM simply
+	 * doesn't enumerate support for the fixed counter.
+	 */
+	if (pmc_is_fixed(pmc))
+		return true;
+
 	BUILD_BUG_ON(ARRAY_SIZE(intel_arch_events) != NR_INTEL_ARCH_EVENTS);

 	/*
 	 * Disallow events reported as unavailable in guest CPUID.  Note, this
-	 * doesn't apply to pseudo-architectural events.
+	 * doesn't apply to pseudo-architectural events (see above).
 	 */
 	for (i = 0; i < NR_REAL_INTEL_ARCH_EVENTS; i++) {
 		if (intel_arch_events[i].eventsel != event_select ||
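For readers unfamiliar with the CPUID leaf in question, here is a minimal
standalone C sketch (not KVM code; the helper name and sample value are
invented) of the reverse-polarity semantics of CPUID.0xA.EBX that the
changelog relies on, and of why no equivalent mask exists for fixed
counters:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* CPUID.0xA.EBX uses reverse polarity: a *set* bit means the general
 * purpose architectural event at that index is NOT available.  Fixed
 * counters have no such mask, hence the unconditional "return true"
 * for fixed PMCs in the patch above. */
static bool arch_event_available(uint32_t cpuid_0xa_ebx, unsigned int idx)
{
	return !(cpuid_0xa_ebx & (1u << idx));
}

int main(void)
{
	uint32_t ebx = 1u << 2; /* pretend "reference cycles" is hidden */

	printf("CPU cycles available?       %d\n", arch_event_available(ebx, 0));
	printf("Reference cycles available? %d\n", arch_event_available(ebx, 2));
	return 0;
}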
From patchwork Tue Jan 9 23:02:22 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515504
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:22 -0800
Message-ID: <20240109230250.424295-3-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 02/29] KVM: x86/pmu: Allow programming events that match unsupported arch events
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Remove KVM's bogus restriction that the guest can't program an event
whose encoding matches an unsupported architectural event.  The
enumeration of an architectural event only says that if a CPU supports
an architectural event, then the event can be programmed using the
architectural encoding.  The enumeration does NOT say anything about the
encoding when the CPU doesn't report support for the architectural event.

Preventing the guest from counting events whose encoding happens to match
an architectural event breaks existing functionality whenever Intel adds
an architectural encoding that was *ever* used for a CPU that doesn't
enumerate support for the architectural event, even if the encoding is
for the exact same event!

E.g. the architectural encoding for Top-Down Slots is 0x01a4.  On
Broadwell CPUs, which do not support the Top-Down Slots architectural
event, 0x01a4 is a valid, model-specific event.  Denying guest usage of
0x01a4 if/when KVM adds support for Top-Down Slots would break any
Broadwell-based guest.

Reported-by: Kan Liang
Closes: https://lore.kernel.org/all/2004baa6-b494-462c-a11f-8104ea152c6a@linux.intel.com
Fixes: a21864486f7e ("KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event")
Reviewed-by: Dapeng Mi
Reviewed-by: Jim Mattson
Reviewed-by: Kan Liang
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 -
 arch/x86/kvm/pmu.c                     |  1 -
 arch/x86/kvm/pmu.h                     |  1 -
 arch/x86/kvm/svm/pmu.c                 |  6 ----
 arch/x86/kvm/vmx/pmu_intel.c           | 38 --------------------------
 5 files changed, 47 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 058bc636356a..d7eebee4450c 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -12,7 +12,6 @@ BUILD_BUG_ON(1)
  * a NULL definition, for example if "static_call_cond()" will be used
  * at the call sites.
  */
-KVM_X86_PMU_OP(hw_event_available)
 KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 87cc6c8809ad..30945fea6988 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -441,7 +441,6 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
 {
 	return pmc_is_globally_enabled(pmc) && pmc_speculative_in_use(pmc) &&
-	       static_call(kvm_x86_pmu_hw_event_available)(pmc) &&
 	       check_pmu_event_filter(pmc);
 }

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7caeb3d8d4fd..87ecf22f5b25 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -19,7 +19,6 @@
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME 0x10002

 struct kvm_pmu_ops {
-	bool (*hw_event_available)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
 		unsigned int idx, u64 *mask);

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index b6a7ad4d6914..1475d47c821c 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -73,11 +73,6 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return amd_pmc_idx_to_pmc(pmu, idx);
 }

-static bool amd_hw_event_available(struct kvm_pmc *pmc)
-{
-	return true;
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -233,7 +228,6 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 }

 struct kvm_pmu_ops amd_pmu_ops __initdata = {
-	.hw_event_available = amd_hw_event_available,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 8207f8c03585..1a7d021a6c7b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -101,43 +101,6 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }

-static bool intel_hw_event_available(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
-	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
-	int i;
-
-	/*
-	 * Fixed counters are always available if KVM reaches this point.  If a
-	 * fixed counter is unsupported in hardware or guest CPUID, KVM doesn't
-	 * allow the counter's corresponding MSR to be written.  KVM does use
-	 * architectural events to program fixed counters, as the interface to
-	 * perf doesn't allow requesting a specific fixed counter, e.g. perf
-	 * may (sadly) back a guest fixed PMC with a general purposed counter.
-	 * But if _hardware_ doesn't support the associated event, KVM simply
-	 * doesn't enumerate support for the fixed counter.
-	 */
-	if (pmc_is_fixed(pmc))
-		return true;
-
-	BUILD_BUG_ON(ARRAY_SIZE(intel_arch_events) != NR_INTEL_ARCH_EVENTS);
-
-	/*
-	 * Disallow events reported as unavailable in guest CPUID.  Note, this
-	 * doesn't apply to pseudo-architectural events (see above).
-	 */
-	for (i = 0; i < NR_REAL_INTEL_ARCH_EVENTS; i++) {
-		if (intel_arch_events[i].eventsel != event_select ||
-		    intel_arch_events[i].unit_mask != unit_mask)
-			continue;
-
-		return pmu->available_event_types & BIT(i);
-	}
-
-	return true;
-}
-
 static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -780,7 +743,6 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 }

 struct kvm_pmu_ops intel_pmu_ops __initdata = {
-	.hw_event_available = intel_hw_event_available,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
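As background for the 0x01a4 example in the changelog, a standalone C
sketch (not KVM code) of how an Intel eventsel packs the event select
into bits 7:0 and the unit mask into bits 15:8, i.e. the fields the
removed matching loop compared against the architectural event table:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t eventsel = 0x01a4;             /* Top-Down Slots per the changelog */
	uint8_t event = eventsel & 0xff;        /* event select, bits 7:0  -> 0xa4 */
	uint8_t umask = (eventsel >> 8) & 0xff; /* unit mask,   bits 15:8 -> 0x01 */

	printf("event=0x%02x umask=0x%02x\n", event, umask);
	return 0;
}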
From patchwork Tue Jan 9 23:02:23 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515505
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:23 -0800
Message-ID: <20240109230250.424295-4-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 03/29] KVM: x86/pmu: Remove KVM's enumeration of Intel's architectural encodings
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Drop KVM's enumeration of Intel's architectural event encodings, and
instead open code the three encodings (of which only two are real) that
KVM uses to emulate fixed counters.  Now that KVM doesn't incorrectly
enforce the availability of architectural encodings, there is no reason
for KVM to ever care about the encodings themselves, at least not in the
current format of an array indexed by the encoding's position in CPUID.

Opportunistically add a comment to explain why KVM cares about eventsel
values for fixed counters.

Suggested-by: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 72 ++++++++++++------------------------
 1 file changed, 23 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1a7d021a6c7b..f3c44ddc09f8 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -22,52 +22,6 @@

 #define MSR_PMC_FULL_WIDTH_BIT (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)

-enum intel_pmu_architectural_events {
-	/*
-	 * The order of the architectural events matters as support for each
-	 * event is enumerated via CPUID using the index of the event.
-	 */
-	INTEL_ARCH_CPU_CYCLES,
-	INTEL_ARCH_INSTRUCTIONS_RETIRED,
-	INTEL_ARCH_REFERENCE_CYCLES,
-	INTEL_ARCH_LLC_REFERENCES,
-	INTEL_ARCH_LLC_MISSES,
-	INTEL_ARCH_BRANCHES_RETIRED,
-	INTEL_ARCH_BRANCHES_MISPREDICTED,
-
-	NR_REAL_INTEL_ARCH_EVENTS,
-
-	/*
-	 * Pseudo-architectural event used to implement IA32_FIXED_CTR2, a.k.a.
-	 * TSC reference cycles.  The architectural reference cycles event may
-	 * or may not actually use the TSC as the reference, e.g. might use the
-	 * core crystal clock or the bus clock (yeah, "architectural").
-	 */
-	PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS,
-	NR_INTEL_ARCH_EVENTS,
-};
-
-static struct {
-	u8 eventsel;
-	u8 unit_mask;
-} const intel_arch_events[] = {
-	[INTEL_ARCH_CPU_CYCLES]			= { 0x3c, 0x00 },
-	[INTEL_ARCH_INSTRUCTIONS_RETIRED]	= { 0xc0, 0x00 },
-	[INTEL_ARCH_REFERENCE_CYCLES]		= { 0x3c, 0x01 },
-	[INTEL_ARCH_LLC_REFERENCES]		= { 0x2e, 0x4f },
-	[INTEL_ARCH_LLC_MISSES]			= { 0x2e, 0x41 },
-	[INTEL_ARCH_BRANCHES_RETIRED]		= { 0xc4, 0x00 },
-	[INTEL_ARCH_BRANCHES_MISPREDICTED]	= { 0xc5, 0x00 },
-	[PSEUDO_ARCH_REFERENCE_CYCLES]		= { 0x00, 0x03 },
-};
-
-/* mapping between fixed pmc index and intel_arch_events array */
-static int fixed_pmc_events[] = {
-	[0] = INTEL_ARCH_INSTRUCTIONS_RETIRED,
-	[1] = INTEL_ARCH_CPU_CYCLES,
-	[2] = PSEUDO_ARCH_REFERENCE_CYCLES,
-};
-
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 {
 	struct kvm_pmc *pmc;
@@ -440,8 +394,29 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 0;
 }

+/*
+ * Map fixed counter events to architectural general purpose event encodings.
+ * Perf doesn't provide APIs to allow KVM to directly program a fixed counter,
+ * and so KVM instead programs the architectural event to effectively request
+ * the fixed counter.  Perf isn't guaranteed to use a fixed counter and may
+ * instead program the encoding into a general purpose counter, e.g. if a
+ * different perf_event is already utilizing the requested counter, but the end
+ * result is the same (ignoring the fact that using a general purpose counter
+ * will likely exacerbate counter contention).
+ *
+ * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
+ * as there is no architectural general purpose encoding for reference cycles.
+ */
 static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
 {
+	const struct {
+		u8 eventsel;
+		u8 unit_mask;
+	} fixed_pmc_events[] = {
+		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
+		[1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
+		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
+	};
 	int i;

 	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
@@ -449,10 +424,9 @@ static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
 		int index = array_index_nospec(i, KVM_PMC_MAX_FIXED);
 		struct kvm_pmc *pmc = &pmu->fixed_counters[index];
-		u32 event = fixed_pmc_events[index];

-		pmc->eventsel = (intel_arch_events[event].unit_mask << 8) |
-				intel_arch_events[event].eventsel;
+		pmc->eventsel = (fixed_pmc_events[index].unit_mask << 8) |
+				fixed_pmc_events[index].eventsel;
 	}
 }
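A quick standalone C illustration (not KVM code) of the
(unit_mask << 8) | eventsel packing used above, fed with the three
fixed-counter encodings from the patch; e.g. the reference cycles
pseudo-encoding {0x00, 0x03} packs to 0x0300:

#include <stdint.h>
#include <stdio.h>

struct fixed_event { uint8_t eventsel, unit_mask; };

int main(void)
{
	const struct fixed_event events[] = {
		{ 0xc0, 0x00 }, /* instructions retired */
		{ 0x3c, 0x00 }, /* CPU cycles */
		{ 0x00, 0x03 }, /* reference cycles (pseudo-encoding) */
	};

	for (int i = 0; i < 3; i++)
		printf("fixed counter %d -> eventsel 0x%04x\n", i,
		       (events[i].unit_mask << 8) | events[i].eventsel);
	return 0;
}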
From patchwork Tue Jan 9 23:02:24 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515506
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:24 -0800
Message-ID: <20240109230250.424295-5-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 04/29] KVM: x86/pmu: Setup fixed counters' eventsel during PMU initialization
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Set the eventsel for all fixed counters during PMU initialization; the
eventsel is hardcoded and consumed if and only if the counter is
supported, i.e. there is no reason to redo the setup every time the PMU
is refreshed.

Configuring all KVM-supported fixed counters also eliminates a potential
pitfall if/when KVM supports discontiguous fixed counters, in which case
configuring only nr_arch_fixed_counters will be insufficient (ignoring
the fact that KVM will need many other changes to support discontiguous
fixed counters).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index f3c44ddc09f8..98e92b9ece09 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -407,27 +407,21 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
  * as there is no architectural general purpose encoding for reference cycles.
  */
-static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
+static u64 intel_get_fixed_pmc_eventsel(int index)
 {
 	const struct {
-		u8 eventsel;
+		u8 event;
 		u8 unit_mask;
 	} fixed_pmc_events[] = {
 		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
 		[1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
 		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
 	};
-	int i;

 	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);

-	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
-		int index = array_index_nospec(i, KVM_PMC_MAX_FIXED);
-		struct kvm_pmc *pmc = &pmu->fixed_counters[index];
-
-		pmc->eventsel = (fixed_pmc_events[index].unit_mask << 8) |
-				fixed_pmc_events[index].eventsel;
-	}
+	return (fixed_pmc_events[index].unit_mask << 8) |
+	       fixed_pmc_events[index].event;
 }

 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
@@ -493,7 +487,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			  kvm_pmu_cap.bit_width_fixed);
 		pmu->counter_bitmask[KVM_PMC_FIXED] =
 			((u64)1 << edx.split.bit_width_fixed) - 1;
-		setup_fixed_pmc_eventsel(pmu);
 	}

 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
@@ -571,6 +564,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->fixed_counters[i].vcpu = vcpu;
 		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
 		pmu->fixed_counters[i].current_config = 0;
+		pmu->fixed_counters[i].eventsel = intel_get_fixed_pmc_eventsel(i);
 	}

 	lbr_desc->records.nr = 0;
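The init-versus-refresh ordering can be sketched in isolation.  The
following standalone C toy (hypothetical types and values, not KVM's
structures) shows the pattern: every counter KVM could ever expose is
configured once at init, so a later refresh that changes the advertised
count never needs to touch the eventsels again:

#include <stdint.h>
#include <stdio.h>

#define MAX_FIXED 3

struct pmc { uint64_t eventsel; };

static uint64_t get_fixed_eventsel(int i)
{
	static const uint16_t sel[MAX_FIXED] = { 0x00c0, 0x003c, 0x0300 };
	return sel[i];
}

int main(void)
{
	struct pmc fixed[MAX_FIXED];

	/* One-time init: cover all counters, not just the advertised ones. */
	for (int i = 0; i < MAX_FIXED; i++)
		fixed[i].eventsel = get_fixed_eventsel(i);

	/* A "refresh" can now shrink or grow the advertised count freely. */
	for (int i = 0; i < MAX_FIXED; i++)
		printf("fixed[%d].eventsel = 0x%04llx\n", i,
		       (unsigned long long)fixed[i].eventsel);
	return 0;
}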
From patchwork Tue Jan 9 23:02:25 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515507
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:25 -0800
Message-ID: <20240109230250.424295-6-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 05/29] KVM: x86/pmu: Get eventsel for fixed counters from perf
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Get the event selectors used to effectively request fixed counters for
perf events from perf itself instead of hardcoding them in KVM and hoping
that they match the underlying hardware.  While fixed counters 0 and 1
use architectural events, as of ffbe4ab0beda ("perf/x86/intel: Extend the
ref-cycles event to GP counters") fixed counter 2 (reference TSC cycles)
may use a software-defined pseudo-encoding or a real hardware-defined
encoding.

Reported-by: Kan Liang
Closes: https://lkml.kernel.org/r/4281eee7-6423-4ec8-bb18-c6aeee1faf2c%40linux.intel.com
Reviewed-by: Kan Liang
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 98e92b9ece09..ec4feaef3d55 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -404,24 +404,28 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  * result is the same (ignoring the fact that using a general purpose counter
  * will likely exacerbate counter contention).
  *
- * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
- * as there is no architectural general purpose encoding for reference cycles.
+ * Forcibly inlined to allow asserting on @index at build time, and there should
+ * never be more than one user.
  */
-static u64 intel_get_fixed_pmc_eventsel(int index)
+static __always_inline u64 intel_get_fixed_pmc_eventsel(unsigned int index)
 {
-	const struct {
-		u8 event;
-		u8 unit_mask;
-	} fixed_pmc_events[] = {
-		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
-		[1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
-		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
+	const enum perf_hw_id fixed_pmc_perf_ids[] = {
+		[0] = PERF_COUNT_HW_INSTRUCTIONS,
+		[1] = PERF_COUNT_HW_CPU_CYCLES,
+		[2] = PERF_COUNT_HW_REF_CPU_CYCLES,
 	};
+	u64 eventsel;

-	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
+	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_perf_ids) != KVM_PMC_MAX_FIXED);
+	BUILD_BUG_ON(index >= KVM_PMC_MAX_FIXED);

-	return (fixed_pmc_events[index].unit_mask << 8) |
-	       fixed_pmc_events[index].event;
+	/*
+	 * Yell if perf reports support for a fixed counter but perf doesn't
+	 * have a known encoding for the associated general purpose event.
+	 */
+	eventsel = perf_get_hw_event_config(fixed_pmc_perf_ids[index]);
+	WARN_ON_ONCE(!eventsel && index < kvm_pmu_cap.num_counters_fixed);
+	return eventsel;
 }

 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
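The lookup-and-sanity-check flow can be mimicked standalone.  Below is a
userspace C sketch in which a stub table stands in for perf's encoding
lookup (perf_get_hw_event_config() in the patch) and a zero result models
"perf has no encoding for this generic event"; all names and values here
are illustrative, only the shape of the check comes from the patch:

#include <stdint.h>
#include <stdio.h>

enum hw_id { HW_INSTRUCTIONS, HW_CPU_CYCLES, HW_REF_CPU_CYCLES, NR_HW_IDS };

static uint64_t stub_get_hw_event_config(enum hw_id id)
{
	static const uint64_t config[NR_HW_IDS] = {
		[HW_INSTRUCTIONS]   = 0x00c0,
		[HW_CPU_CYCLES]     = 0x003c,
		[HW_REF_CPU_CYCLES] = 0x0300,
	};
	return config[id];
}

int main(void)
{
	unsigned int num_fixed = 3; /* pretend hardware fixed counter count */

	for (int id = HW_INSTRUCTIONS; id < NR_HW_IDS; id++) {
		uint64_t eventsel = stub_get_hw_event_config((enum hw_id)id);

		/* Mirrors the WARN: support claimed, but no encoding known. */
		if (!eventsel && (unsigned int)id < num_fixed)
			fprintf(stderr, "no encoding for fixed counter %d!\n", id);
		else
			printf("fixed counter %d -> eventsel 0x%04llx\n", id,
			       (unsigned long long)eventsel);
	}
	return 0;
}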
From patchwork Tue Jan 9 23:02:26 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515508
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:26 -0800
Message-ID: <20240109230250.424295-7-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 06/29] KVM: x86/pmu: Don't ignore bits 31:30 for RDPMC index on AMD
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Stop stripping bits 31:30 prior to validating/consuming the RDPMC index
on AMD.  Per the APM's documentation of RDPMC, *values* greater than 27
are reserved.  The behavior of upper bits being flags is firmly
Intel-only.

Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/pmu.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 1475d47c821c..1fafc46f61c9 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -77,8 +77,6 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

-	idx &= ~(3u << 30);
-
 	return idx < pmu->nr_arch_gp_counters;
 }

@@ -86,7 +84,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	unsigned int idx, u64 *mask)
 {
-	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
+	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx);
 }

 static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
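The behavioral change is easy to demonstrate in isolation.  A standalone
C sketch (not KVM code) comparing the old and new validity checks for a
hypothetical PMU with 6 general purpose counters; with bits 31:30
stripped, ECX = 0x40000002 used to be silently accepted, and now
correctly fails validation:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool old_check(uint32_t idx, uint32_t nr) { return (idx & ~(3u << 30)) < nr; }
static bool new_check(uint32_t idx, uint32_t nr) { return idx < nr; }

int main(void)
{
	uint32_t idx = (1u << 30) | 2, nr = 6; /* ECX = 0x40000002 */

	printf("old: %d, new: %d\n", old_check(idx, nr), new_check(idx, nr));
	return 0;
}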
From patchwork Tue Jan 9 23:02:27 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515509
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:27 -0800
Message-ID: <20240109230250.424295-8-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 07/29] KVM: x86/pmu: Prioritize VMX interception over #GP on RDPMC due to bad index
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Apply the pre-intercepts RDPMC validity check only to AMD, and rename all
relevant functions to make it as clear as possible that the check is not
a standard PMC index check.

On Intel, the basic rule is that only invalid opcodes and
privilege/permission/mode checks have priority over VM-Exit, i.e. RDPMC
with an invalid index should VM-Exit, not #GP.  While the SDM doesn't
explicitly call out RDPMC, it _does_ explicitly use RDMSR of a
non-existent MSR as an example where VM-Exit has priority over #GP, and
RDPMC is effectively just a variation of RDMSR.

Manually testing on various Intel CPUs confirms this behavior, and the
inverted priority was introduced for SVM compatibility, i.e. was not an
intentional change for Intel PMUs.  On AMD, *all* exceptions on RDPMC
have priority over VM-Exit.

Check for a NULL kvm_pmu_ops.check_rdpmc_early instead of using a RET0
static call so as to provide a convenient location to document the
difference between Intel and AMD, and to again try to make it as obvious
as possible that the early check is a one-off thing, not a generic "is
this PMC valid?" helper.
Fixes: 8061252ee0d2 ("KVM: SVM: Add intercept checks for remaining twobyte instructions")
Cc: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  2 +-
 arch/x86/kvm/emulate.c                 |  2 +-
 arch/x86/kvm/kvm_emulate.h             |  2 +-
 arch/x86/kvm/pmu.c                     | 16 +++++++++++++---
 arch/x86/kvm/pmu.h                     |  4 ++--
 arch/x86/kvm/svm/pmu.c                 |  9 ++++++---
 arch/x86/kvm/vmx/pmu_intel.c           | 12 ------------
 arch/x86/kvm/x86.c                     |  9 +++------
 8 files changed, 27 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index d7eebee4450c..f0cd48222133 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -15,7 +15,7 @@ BUILD_BUG_ON(1)
 KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)
-KVM_X86_PMU_OP(is_valid_rdpmc_ecx)
+KVM_X86_PMU_OP_OPTIONAL(check_rdpmc_early)
 KVM_X86_PMU_OP(is_valid_msr)
 KVM_X86_PMU_OP(get_msr)
 KVM_X86_PMU_OP(set_msr)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index e223043ef5b2..695ab5b6055c 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -3962,7 +3962,7 @@ static int check_rdpmc(struct x86_emulate_ctxt *ctxt)
 	 * protected mode.
 	 */
 	if ((!(cr4 & X86_CR4_PCE) && ctxt->ops->cpl(ctxt)) ||
-	    ctxt->ops->check_pmc(ctxt, rcx))
+	    ctxt->ops->check_rdpmc_early(ctxt, rcx))
 		return emulate_gp(ctxt, 0);

 	return X86EMUL_CONTINUE;

diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index e6d149825169..4351149484fb 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -208,7 +208,7 @@ struct x86_emulate_ops {
 	int (*set_msr_with_filter)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 data);
 	int (*get_msr_with_filter)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 *pdata);
 	int (*get_msr)(struct x86_emulate_ctxt *ctxt, u32 msr_index, u64 *pdata);
-	int (*check_pmc)(struct x86_emulate_ctxt *ctxt, u32 pmc);
+	int (*check_rdpmc_early)(struct x86_emulate_ctxt *ctxt, u32 pmc);
 	int (*read_pmc)(struct x86_emulate_ctxt *ctxt, u32 pmc, u64 *pdata);
 	void (*halt)(struct x86_emulate_ctxt *ctxt);
 	void (*wbinvd)(struct x86_emulate_ctxt *ctxt);

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 30945fea6988..0b0d804ee239 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -524,10 +524,20 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	kvm_pmu_cleanup(vcpu);
 }

-/* check if idx is a valid index to access PMU */
-bool kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
+int kvm_pmu_check_rdpmc_early(struct kvm_vcpu *vcpu, unsigned int idx)
 {
-	return static_call(kvm_x86_pmu_is_valid_rdpmc_ecx)(vcpu, idx);
+	/*
+	 * On Intel, VMX interception has priority over RDPMC exceptions that
+	 * aren't already handled by the emulator, i.e. there are no additional
+	 * check needed for Intel PMUs.
+	 *
+	 * On AMD, _all_ exceptions on RDPMC have priority over SVM intercepts,
+	 * i.e. an invalid PMC results in a #GP, not #VMEXIT.
+	 */
+	if (!kvm_pmu_ops.check_rdpmc_early)
+		return 0;
+
+	return static_call(kvm_x86_pmu_check_rdpmc_early)(vcpu, idx);
 }

 bool is_vmware_backdoor_pmc(u32 pmc_idx)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 87ecf22f5b25..51bbb01b21c8 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -23,7 +23,7 @@ struct kvm_pmu_ops {
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
 		unsigned int idx, u64 *mask);
 	struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, u32 msr);
-	bool (*is_valid_rdpmc_ecx)(struct kvm_vcpu *vcpu, unsigned int idx);
+	int (*check_rdpmc_early)(struct kvm_vcpu *vcpu, unsigned int idx);
 	bool (*is_valid_msr)(struct kvm_vcpu *vcpu, u32 msr);
 	int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 	int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
@@ -215,7 +215,7 @@ static inline bool pmc_is_globally_enabled(struct kvm_pmc *pmc)
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
-bool kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx);
+int kvm_pmu_check_rdpmc_early(struct kvm_vcpu *vcpu, unsigned int idx);
 bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr);
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 1fafc46f61c9..e886300f0f97 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -73,11 +73,14 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return amd_pmc_idx_to_pmc(pmu, idx);
 }

-static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
+static int amd_check_rdpmc_early(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

-	return idx < pmu->nr_arch_gp_counters;
+	if (idx >= pmu->nr_arch_gp_counters)
+		return -EINVAL;
+
+	return 0;
 }

 /* idx is the ECX register of RDPMC instruction */
@@ -229,7 +232,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
-	.is_valid_rdpmc_ecx = amd_is_valid_rdpmc_ecx,
+	.check_rdpmc_early = amd_check_rdpmc_early,
 	.is_valid_msr = amd_is_valid_msr,
 	.get_msr = amd_pmu_get_msr,
 	.set_msr = amd_pmu_set_msr,

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index ec4feaef3d55..1b1f888ad32b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -55,17 +55,6 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }

-static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
-{
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	bool fixed = idx & (1u << 30);
-
-	idx &= ~(3u << 30);
-
-	return fixed ? idx < pmu->nr_arch_fixed_counters
-		     : idx < pmu->nr_arch_gp_counters;
-}
-
 static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 					      unsigned int idx, u64 *mask)
 {
@@ -718,7 +707,6 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
-	.is_valid_rdpmc_ecx = intel_is_valid_rdpmc_ecx,
 	.is_valid_msr = intel_is_valid_msr,
 	.get_msr = intel_pmu_get_msr,
 	.set_msr = intel_pmu_set_msr,

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 27e23714e960..4d1191a944f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8389,12 +8389,9 @@ static int emulator_get_msr(struct x86_emulate_ctxt *ctxt,
 	return kvm_get_msr(emul_to_vcpu(ctxt), msr_index, pdata);
 }

-static int emulator_check_pmc(struct x86_emulate_ctxt *ctxt,
-			      u32 pmc)
+static int emulator_check_rdpmc_early(struct x86_emulate_ctxt *ctxt, u32 pmc)
 {
-	if (kvm_pmu_is_valid_rdpmc_ecx(emul_to_vcpu(ctxt), pmc))
-		return 0;
-	return -EINVAL;
+	return kvm_pmu_check_rdpmc_early(emul_to_vcpu(ctxt), pmc);
 }

 static int emulator_read_pmc(struct x86_emulate_ctxt *ctxt,
@@ -8526,7 +8523,7 @@ static const struct x86_emulate_ops emulate_ops = {
 	.set_msr_with_filter = emulator_set_msr_with_filter,
 	.get_msr_with_filter = emulator_get_msr_with_filter,
 	.get_msr = emulator_get_msr,
-	.check_pmc = emulator_check_pmc,
+	.check_rdpmc_early = emulator_check_rdpmc_early,
 	.read_pmc = emulator_read_pmc,
 	.halt = emulator_halt,
 	.wbinvd = emulator_wbinvd,
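The NULL-hook convention the changelog describes can be sketched in a
standalone way.  The following C toy (simplified function pointers
standing in for KVM's static_call machinery; the counter count is made
up) shows how a missing callback means "no early #GP, let the intercept
win", while a supplied callback lets AMD fail the index before any exit:

#include <stdio.h>

struct pmu_ops { int (*check_rdpmc_early)(unsigned int idx); };

static int amd_check_rdpmc_early(unsigned int idx)
{
	return idx >= 6 ? -1 : 0; /* pretend 6 GP counters */
}

static int check_rdpmc_early(const struct pmu_ops *ops, unsigned int idx)
{
	if (!ops->check_rdpmc_early)
		return 0; /* Intel: VM-Exit has priority, no early #GP */
	return ops->check_rdpmc_early(idx);
}

int main(void)
{
	struct pmu_ops intel = { NULL }, amd = { amd_check_rdpmc_early };

	printf("intel idx 99 -> %d\n", check_rdpmc_early(&intel, 99));
	printf("amd   idx 99 -> %d\n", check_rdpmc_early(&amd, 99));
	return 0;
}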
From patchwork Tue Jan 9 23:02:28 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515510
Reply-To: Sean Christopherson
Date: Tue, 9 Jan 2024 15:02:28 -0800
Message-ID: <20240109230250.424295-9-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 08/29] KVM: x86/pmu: Apply "fast" RDPMC only to Intel PMUs
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Move the handling of "fast" RDPMC instructions, which drop bits 63:32 of
the count, to Intel.  The "fast" flag, and all modifiers for that matter,
are Intel-only and aren't supported by AMD.

Opportunistically replace open coded bit crud with proper #defines, and
add comments to try and disentangle the flags vs. values mess for
non-architectural vs. architectural PMUs.

Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c           |  3 +--
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++++++++--
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0b0d804ee239..09b0feb975c3 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -576,10 +576,9 @@ static int kvm_pmu_rdpmc_vmware(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)

 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 {
-	bool fast_mode = idx & (1u << 31);
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
-	u64 mask = fast_mode ? ~0u : ~0ull;
+	u64 mask = ~0ull;

 	if (!pmu->version)
 		return 1;

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1b1f888ad32b..03bd188b5754 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -20,6 +20,15 @@
 #include "nested.h"
 #include "pmu.h"

+/*
+ * Perf's "BASE" is wildly misleading, architectural PMUs use bits 31:16 of ECX
+ * to encode the "type" of counter to read, i.e. this is not a "base".  And to
+ * further confuse things, non-architectural PMUs use bit 31 as a flag for
+ * "fast" reads, whereas the "type" is an explicit value.
+ */ +#define INTEL_RDPMC_FIXED INTEL_PMC_FIXED_RDPMC_BASE +#define INTEL_RDPMC_FAST BIT(31) + #define MSR_PMC_FULL_WIDTH_BIT (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0) static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) @@ -59,11 +68,14 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu, unsigned int idx, u64 *mask) { struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); - bool fixed = idx & (1u << 30); + bool fixed = idx & INTEL_RDPMC_FIXED; struct kvm_pmc *counters; unsigned int num_counters; - idx &= ~(3u << 30); + if (idx & INTEL_RDPMC_FAST) + *mask &= GENMASK_ULL(31, 0); + + idx &= ~(INTEL_RDPMC_FIXED | INTEL_RDPMC_FAST); if (fixed) { counters = pmu->fixed_counters; num_counters = pmu->nr_arch_fixed_counters; From patchwork Tue Jan 9 23:02:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13515511 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6F7234177B for ; Tue, 9 Jan 2024 23:03:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nyOOpjhB" Received: by mail-pf1-f201.google.com with SMTP id d2e1a72fcca58-6d9b8fef16aso2586660b3a.0 for ; Tue, 09 Jan 2024 15:03:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1704841391; x=1705446191; darn=vger.kernel.org; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:reply-to:from:to:cc:subject:date :message-id:reply-to; bh=erkcQ6/oHyd7DZXbuvn3kFPsgNm/N3hXlrwK4Xzlp1A=; b=nyOOpjhBzFAE5DyVT+D4ZNJt5TapbffmsQY0qNgo/WiiG/WCTDcCJmOnAF7GO7rH2Q XHZlC+MtO8js9RuXGn6PKUQtd7y7yqABuThknMzu1Th6LOdldA25ivbqWYz7fVj7URLH xsm99FDs0LSFnphMtONVrT0b3Q/2W7/Bum8igTwo8OAA/H45t9YDO7TmtQlm1Np8QwOi kf5vEP4pxLFKdqt8oxctK2b3urwbAtl6zVcxoCHsUWc6gWccFM/zkTDyvwdeWGPbkRSW U7/LQSRnxpocIaHhFtRn2NvIHkLbg010jN9U42o1MCrkBA6FEyxgZYbB9+3uytgXw9V7 a1sg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704841391; x=1705446191; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:reply-to:x-gm-message-state:from:to :cc:subject:date:message-id:reply-to; bh=erkcQ6/oHyd7DZXbuvn3kFPsgNm/N3hXlrwK4Xzlp1A=; b=LItSHgU7AZA2E1JJM0mAvFzrt8YaIbNvpch8KULiCvW5z+cjqz2MaOrRS4gx4pFRQJ aN+63PO5+ovmY0VoMsUPN6KVIALreSRBFiE6KMyJYLaVWA4wnwPM9A5tx513hVtajm88 8HcTDUyncI5xBlV5lBDs8EX+4iC1lIHCRhLDgyEJ4zwrT07RRbd/kUr6cy3dpXo5DVIR F7jMCNXru3WgXkPyF/kjHo3CY9Lg3uqabS3SuL1Zh+6ppAgMJj1rkk89LpUPkG1lw7CV iR8j0WOw000ueumcUG438Oed0DWr7EFLEPjYY7uGtbgih3Dw4qoQ6qwRk/0w8owHIIoP VfUQ== X-Gm-Message-State: AOJu0Yx7WX7E/yqELMNhQHpC6Cz+wxiLTSG438kty+sCT7zBVb6329JB MPCt+ASv2fY36UkvFKZSRSCt9XfqUTfVz+EO1Q== X-Google-Smtp-Source: AGHT+IGaLcoqDY9PT2uXe4DrFLqWRCjt+ZckH9qsb1mNb2wkvE8WXPwMismtoMk6Ox+U/XSDj3qW55olhoo= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:1896:b0:6da:bf5b:bd4e with SMTP id x22-20020a056a00189600b006dabf5bbd4emr27892pfh.3.1704841390674; Tue, 09 Jan 
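To make the "fast" semantics concrete, here is a minimal guest-side sketch
(illustrative only, not part of the series) of a fast RDPMC on a
non-architectural PMU; per the SDM, setting bit 31 of ECX requests a 32-bit
read, with EDX cleared:

  #include <stdint.h>

  /* Hypothetical helper; only meaningful on a PMU with version 0. */
  static inline uint64_t rdpmc_fast(uint32_t counter)
  {
  	uint32_t lo, hi;

  	/* Bit 31 = "fast" flag: RDPMC returns bits 31:0 in EAX, EDX = 0. */
  	asm volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter | (1u << 31)));
  	return ((uint64_t)hi << 32) | lo;	/* hi is always 0 for fast reads */
  }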
From patchwork Tue Jan 9 23:02:29 2024
Date: Tue, 9 Jan 2024 15:02:29 -0800
Message-ID: <20240109230250.424295-10-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 09/29] KVM: x86/pmu: Disallow "fast" RDPMC for architectural Intel PMUs
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Inject #GP on RDPMC if the "fast" flag is set for architectural Intel
PMUs, i.e. if the PMU version is non-zero.  Per Intel's SDM, and confirmed
on bare metal, the "fast" flag is supported only for non-architectural
PMUs, and is reserved for architectural PMUs.

  If the processor does not support architectural performance monitoring
  (CPUID.0AH:EAX[7:0]=0), ECX[30:0] specifies the index of the PMC to be
  read.  Setting ECX[31] selects “fast” read mode if supported.  In this
  mode, RDPMC returns bits 31:0 of the PMC in EAX while clearing EDX to
  zero.

  If the processor does support architectural performance monitoring
  (CPUID.0AH:EAX[7:0] ≠ 0), ECX[31:16] specifies type of PMC while
  ECX[15:0] specifies the index of the PMC to be read within that type.
  The following PMC types are currently defined:
  — General-purpose counters use type 0.  The index x (to read IA32_PMCx)
    must be less than the value enumerated by CPUID.0AH.EAX[15:8] (thus
    ECX[15:8] must be zero).
  — Fixed-function counters use type 4000H.  The index x (to read
    IA32_FIXED_CTRx) can be used if either CPUID.0AH.EDX[4:0] > x or
    CPUID.0AH.ECX[x] = 1 (thus ECX[15:5] must be 0).
  — Performance metrics use type 2000H.  This type can be used only if
    IA32_PERF_CAPABILITIES.PERF_METRICS_AVAILABLE[bit 15]=1.  For this
    type, the index in ECX[15:0] is implementation specific.

Opportunistically WARN if KVM ever actually tries to complete RDPMC for a
non-architectural PMU, and drop the non-existent "support" for fast RDPMC,
as KVM doesn't support such PMUs, i.e. kvm_pmu_rdpmc() should reject the
RDPMC before getting to the Intel code.

Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Fixes: 67f4d4288c35 ("KVM: x86: rdpmc emulation checks the counter incorrectly")
Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 03bd188b5754..5a5dfae6055c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -27,7 +27,6 @@
  * "fast" reads, whereas the "type" is an explicit value.
  */
 #define INTEL_RDPMC_FIXED	INTEL_PMC_FIXED_RDPMC_BASE
-#define INTEL_RDPMC_FAST	BIT(31)
 
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
 
@@ -72,10 +71,25 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	struct kvm_pmc *counters;
 	unsigned int num_counters;
 
-	if (idx & INTEL_RDPMC_FAST)
-		*mask &= GENMASK_ULL(31, 0);
+	/*
+	 * The encoding of ECX for RDPMC is different for architectural versus
+	 * non-architectural PMUs (PMUs with version '0').  For architectural
+	 * PMUs, bits 31:16 specify the PMC type and bits 15:0 specify the PMC
+	 * index.  For non-architectural PMUs, bit 31 is a "fast" flag, and
+	 * bits 30:0 specify the PMC index.
+	 *
+	 * Yell and reject attempts to read PMCs for a non-architectural PMU,
+	 * as KVM doesn't support such PMUs.
+	 */
+	if (WARN_ON_ONCE(!pmu->version))
+		return NULL;
 
-	idx &= ~(INTEL_RDPMC_FIXED | INTEL_RDPMC_FAST);
+	/*
+	 * Fixed PMCs are supported on all architectural PMUs.  Note, KVM only
+	 * emulates fixed PMCs for PMU v2+, but the flag itself is still valid,
+	 * i.e. let RDPMC fail due to accessing a non-existent counter.
+	 */
+	idx &= ~INTEL_RDPMC_FIXED;
 	if (fixed) {
 		counters = pmu->fixed_counters;
 		num_counters = pmu->nr_arch_fixed_counters;
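Restated as a standalone sketch (hypothetical helpers, illustrative only),
the architectural ECX split that KVM now enforces is:

  #include <stdint.h>

  /* Architectural PMUs: ECX[31:16] = PMC type, ECX[15:0] = PMC index. */
  static inline unsigned int rdpmc_type(uint32_t ecx)  { return ecx >> 16; }
  static inline unsigned int rdpmc_index(uint32_t ecx) { return ecx & 0xffff; }

Note that a stale "fast" read, i.e. ECX with bit 31 set, decodes to a type of
at least 0x8000, which matches no architectural type and thus (correctly)
results in #GP.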
From patchwork Tue Jan 9 23:02:30 2024
Date: Tue, 9 Jan 2024 15:02:30 -0800
Message-ID: <20240109230250.424295-11-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 10/29] KVM: x86/pmu: Treat "fixed" PMU type in RDPMC as a value, not flag
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Refactor KVM's handling of ECX for RDPMC to treat the FIXED modifier as an
explicit value, not a flag (minus one wart).  While non-architectural PMUs
do use bit 31 as a flag (for "fast" reads), architectural PMUs use the
upper half of ECX to encode the type.  From the SDM:

  ECX[31:16] specifies type of PMC while ECX[15:0] specifies the index of
  the PMC to be read within that type

Note, the fact that the known supported types are 4000H and 2000H, i.e.
look a lot like flags, doesn't contradict the above statement that
ECX[31:16] holds the type, at least not by any sane reading of the SDM.

Keep the explicit clearing of the FIXED "flag", as KVM subtly relies on
that behavior to disallow unsupported types while allowing the correct
indices for fixed counters.  This wart will be cleaned up in short order.

Opportunistically grab the per-type bitmask in the if-else blocks to
eliminate the one-off usage of the local "fixed" bool.

Reported-by: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/pmu_intel.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 5a5dfae6055c..c37dd3aa056b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -28,6 +28,9 @@
  */
 #define INTEL_RDPMC_FIXED	INTEL_PMC_FIXED_RDPMC_BASE
 
+#define INTEL_RDPMC_TYPE_MASK	GENMASK(31, 16)
+#define INTEL_RDPMC_INDEX_MASK	GENMASK(15, 0)
+
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
 
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
@@ -66,10 +69,11 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 					    unsigned int idx, u64 *mask)
 {
+	unsigned int type = idx & INTEL_RDPMC_TYPE_MASK;
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	bool fixed = idx & INTEL_RDPMC_FIXED;
 	struct kvm_pmc *counters;
 	unsigned int num_counters;
+	u64 bitmask;
 
 	/*
 	 * The encoding of ECX for RDPMC is different for architectural versus
@@ -90,16 +94,20 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	 * i.e. let RDPMC fail due to accessing a non-existent counter.
 	 */
 	idx &= ~INTEL_RDPMC_FIXED;
-	if (fixed) {
+	if (type == INTEL_RDPMC_FIXED) {
 		counters = pmu->fixed_counters;
 		num_counters = pmu->nr_arch_fixed_counters;
+		bitmask = pmu->counter_bitmask[KVM_PMC_FIXED];
 	} else {
 		counters = pmu->gp_counters;
 		num_counters = pmu->nr_arch_gp_counters;
+		bitmask = pmu->counter_bitmask[KVM_PMC_GP];
 	}
+
 	if (idx >= num_counters)
 		return NULL;
 
-	*mask &= pmu->counter_bitmask[fixed ? KVM_PMC_FIXED : KVM_PMC_GP];
+	*mask &= bitmask;
 
 	return &counters[array_index_nospec(idx, num_counters)];
 }
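For a concrete decode (illustrative values): ECX = 0x40000001 yields type
4000H and index 1, i.e. reads IA32_FIXED_CTR1, while ECX = 0x00000002 yields
type 0 and index 2, i.e. reads IA32_PMC2.  A tiny standalone check:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
  	uint32_t ecx = 0x40000001;

  	assert((ecx >> 16) == 0x4000);	/* type: fixed-function counters */
  	assert((ecx & 0xffff) == 1);	/* index: IA32_FIXED_CTR1 */
  	return 0;
  }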
From patchwork Tue Jan 9 23:02:31 2024
Date: Tue, 9 Jan 2024 15:02:31 -0800
Message-ID: <20240109230250.424295-12-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 11/29] KVM: x86/pmu: Explicitly check for RDPMC of unsupported Intel PMC types
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Explicitly check for attempts to read unsupported PMC types instead of
letting the bounds check fail.  Functionally, letting the check fail is
ok, but it's unnecessarily subtle and does a poor job of documenting the
architectural behavior that KVM is emulating.

Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c37dd3aa056b..b41bdb0a0995 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -26,6 +26,7 @@
  * further confuse things, non-architectural PMUs use bit 31 as a flag for
  * "fast" reads, whereas the "type" is an explicit value.
  */
+#define INTEL_RDPMC_GP		0
 #define INTEL_RDPMC_FIXED	INTEL_PMC_FIXED_RDPMC_BASE
 
 #define INTEL_RDPMC_TYPE_MASK	GENMASK(31, 16)
@@ -89,21 +90,29 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 		return NULL;
 
 	/*
-	 * Fixed PMCs are supported on all architectural PMUs.  Note, KVM only
-	 * emulates fixed PMCs for PMU v2+, but the flag itself is still valid,
-	 * i.e. let RDPMC fail due to accessing a non-existent counter.
+	 * General Purpose (GP) PMCs are supported on all PMUs, and fixed PMCs
+	 * are supported on all architectural PMUs, i.e. on all virtual PMUs
+	 * supported by KVM.  Note, KVM only emulates fixed PMCs for PMU v2+,
+	 * but the type itself is still valid, i.e. let RDPMC fail due to
+	 * accessing a non-existent counter.  Reject attempts to read all other
+	 * types, which are unknown/unsupported.
 	 */
-	idx &= ~INTEL_RDPMC_FIXED;
-	if (type == INTEL_RDPMC_FIXED) {
+	switch (type) {
+	case INTEL_RDPMC_FIXED:
 		counters = pmu->fixed_counters;
 		num_counters = pmu->nr_arch_fixed_counters;
 		bitmask = pmu->counter_bitmask[KVM_PMC_FIXED];
-	} else {
+		break;
+	case INTEL_RDPMC_GP:
 		counters = pmu->gp_counters;
 		num_counters = pmu->nr_arch_gp_counters;
 		bitmask = pmu->counter_bitmask[KVM_PMC_GP];
+		break;
+	default:
+		return NULL;
 	}
 
+	idx &= INTEL_RDPMC_INDEX_MASK;
 	if (idx >= num_counters)
 		return NULL;
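The switch makes the emulated behavior easy to tabulate; as a standalone
model (illustrative only, mirroring the hunk above): type 0 selects GP
counters, type 4000H selects fixed counters, and everything else, e.g. the
2000H perf metrics type that KVM doesn't emulate, is rejected outright:

  /* Model of the dispatch: 0 => GP, 4000H => fixed, anything else => #GP. */
  static int rdpmc_type_is_emulated(unsigned int ecx)
  {
  	switch (ecx >> 16) {
  	case 0x0000:	/* general purpose counters */
  	case 0x4000:	/* fixed-function counters */
  		return 1;
  	default:	/* e.g. 2000H performance metrics */
  		return 0;
  	}
  }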
From patchwork Tue Jan 9 23:02:32 2024
Date: Tue, 9 Jan 2024 15:02:32 -0800
Message-ID: <20240109230250.424295-13-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 12/29] KVM: selftests: Add vcpu_set_cpuid_property() to set properties
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Add vcpu_set_cpuid_property() helper function for setting properties, and
use it instead of open coding an equivalent for MAX_PHY_ADDR.  Future vPMU
testcases will also need to stuff various CPUID properties.

Reviewed-by: Jim Mattson
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h       |  4 +++-
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 15 ++++++++++++---
 .../x86_64/smaller_maxphyaddr_emulation_test.c     |  2 +-
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index a84863503fcb..932944c4ea01 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -995,7 +995,9 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu_ioctl(vcpu, KVM_GET_CPUID2, vcpu->cpuid);
 }
 
-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr);
+void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
+			     struct kvm_x86_cpu_property property,
+			     uint32_t value);
 
 void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function);
 void vcpu_set_or_clear_cpuid_feature(struct kvm_vcpu *vcpu,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d8288374078e..67eb82a6c754 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -752,12 +752,21 @@ void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid)
 	vcpu_set_cpuid(vcpu);
 }
 
-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr)
+void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
+			     struct kvm_x86_cpu_property property,
+			     uint32_t value)
 {
-	struct kvm_cpuid_entry2 *entry = vcpu_get_cpuid_entry(vcpu, 0x80000008);
+	struct kvm_cpuid_entry2 *entry;
+
+	entry = __vcpu_get_cpuid_entry(vcpu, property.function, property.index);
+
+	(&entry->eax)[property.reg] &= ~GENMASK(property.hi_bit, property.lo_bit);
+	(&entry->eax)[property.reg] |= value << property.lo_bit;
 
-	entry->eax = (entry->eax & ~0xff) | maxphyaddr;
 	vcpu_set_cpuid(vcpu);
+
+	/* Sanity check that @value doesn't exceed the bounds in any way. */
+	TEST_ASSERT_EQ(kvm_cpuid_property(vcpu->cpuid, property), value);
 }
 
 void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function)
diff --git a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
index 06edf00a97d6..9b89440dff19 100644
--- a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
@@ -63,7 +63,7 @@ int main(int argc, char *argv[])
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vcpu);
 
-	vcpu_set_cpuid_maxphyaddr(vcpu, MAXPHYADDR);
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_MAX_PHY_ADDR, MAXPHYADDR);
 
 	rc = kvm_check_cap(KVM_CAP_EXIT_ON_EMULATION_FAILURE);
 	TEST_ASSERT(rc, "KVM_CAP_EXIT_ON_EMULATION_FAILURE is unavailable");
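With the helper in place, stuffing a property is a one-liner; e.g.
(hypothetical test snippet, property names taken from the selftests
framework):

  /* Advertise vPMU version 2 with three fixed counters to the guest. */
  vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_VERSION, 2);
  vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_FIXED_COUNTERS, 3);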
From patchwork Tue Jan 9 23:02:33 2024
Date: Tue, 9 Jan 2024 15:02:33 -0800
Message-ID: <20240109230250.424295-14-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 13/29] KVM: selftests: Drop the "name" param from KVM_X86_PMU_FEATURE()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Drop the "name" parameter from KVM_X86_PMU_FEATURE(), it's unused and
the name is redundant with the macro, i.e. it's truly useless.

Reviewed-by: Jim Mattson
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 932944c4ea01..4f737d3b893c 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -290,7 +290,7 @@ struct kvm_x86_cpu_property {
 struct kvm_x86_pmu_feature {
 	struct kvm_x86_cpu_feature anti_feature;
 };
-#define	KVM_X86_PMU_FEATURE(name, __bit)				\
+#define	KVM_X86_PMU_FEATURE(__bit)					\
 ({									\
 	struct kvm_x86_pmu_feature feature = {				\
 		.anti_feature = KVM_X86_CPU_FEATURE(0xa, 0, EBX, __bit),	\
@@ -299,7 +299,7 @@ struct kvm_x86_pmu_feature {
 	feature;							\
 })
 
-#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED	KVM_X86_PMU_FEATURE(BRANCH_INSNS_RETIRED, 5)
+#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED	KVM_X86_PMU_FEATURE(5)
 
 static inline unsigned int x86_family(unsigned int eax)
 {
From patchwork Tue Jan 9 23:02:34 2024
Date: Tue, 9 Jan 2024 15:02:34 -0800
Message-ID: <20240109230250.424295-15-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Subject: [PATCH v10 14/29] KVM: selftests: Extend {kvm,this}_pmu_has() to support fixed counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
    Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Extend the kvm_x86_pmu_feature framework to allow querying for fixed
counters via {kvm,this}_pmu_has().  Like architectural events, checking
for a fixed counter annoyingly requires checking multiple CPUID fields, as
a fixed counter exists if:

  FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i);

Note, KVM currently doesn't actually support exposing fixed counters via
the bitmask, but that will hopefully change sooner than later, and Intel's
SDM explicitly "recommends" checking both the number of counters and the
mask.

Rename the intermediate "anti_feature" field to simply 'f' since the fixed
counter bitmask (thankfully) doesn't have reversed polarity like the
architectural events bitmask.

Note, ideally the helpers would use BUILD_BUG_ON() to assert on the
incoming register, but the expected usage in PMU tests can't guarantee the
inputs are compile-time constants.

Opportunistically define macros for all of the known architectural events
and fixed counters.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h | 65 ++++++++++++++-----
 1 file changed, 47 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 4f737d3b893c..92d4f8ecc730 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -282,24 +282,41 @@ struct kvm_x86_cpu_property {
  * that indicates the feature is _not_ supported, and a property that states
  * the length of the bit mask of unsupported features.  A feature is supported
  * if the size of the bit mask is larger than the "unavailable" bit, and said
- * bit is not set.
+ * bit is not set.  Fixed counters also have bizarre enumeration, but inverted
+ * from arch events for general purpose counters.  Fixed counters are
+ * supported if a feature flag is set **OR** the total number of fixed
+ * counters is greater than the index of the counter.
  *
- * Wrap the "unavailable" feature to simplify checking whether or not a given
- * architectural event is supported.
+ * Wrap the events for general purpose and fixed counters to simplify checking
+ * whether or not a given architectural event is supported.
  */
 struct kvm_x86_pmu_feature {
-	struct kvm_x86_cpu_feature anti_feature;
+	struct kvm_x86_cpu_feature f;
 };
-#define	KVM_X86_PMU_FEATURE(__bit)					\
-({									\
-	struct kvm_x86_pmu_feature feature = {				\
-		.anti_feature = KVM_X86_CPU_FEATURE(0xa, 0, EBX, __bit),	\
-	};								\
-									\
-	feature;							\
+#define	KVM_X86_PMU_FEATURE(__reg, __bit)				\
+({									\
+	struct kvm_x86_pmu_feature feature = {				\
+		.f = KVM_X86_CPU_FEATURE(0xa, 0, __reg, __bit),		\
+	};								\
+									\
+	kvm_static_assert(KVM_CPUID_##__reg == KVM_CPUID_EBX ||	\
+			  KVM_CPUID_##__reg == KVM_CPUID_ECX);		\
+	feature;							\
 })
 
-#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED	KVM_X86_PMU_FEATURE(5)
+#define X86_PMU_FEATURE_CPU_CYCLES			KVM_X86_PMU_FEATURE(EBX, 0)
+#define X86_PMU_FEATURE_INSNS_RETIRED			KVM_X86_PMU_FEATURE(EBX, 1)
+#define X86_PMU_FEATURE_REFERENCE_CYCLES		KVM_X86_PMU_FEATURE(EBX, 2)
+#define X86_PMU_FEATURE_LLC_REFERENCES			KVM_X86_PMU_FEATURE(EBX, 3)
+#define X86_PMU_FEATURE_LLC_MISSES			KVM_X86_PMU_FEATURE(EBX, 4)
+#define X86_PMU_FEATURE_BRANCH_INSNS_RETIRED		KVM_X86_PMU_FEATURE(EBX, 5)
+#define X86_PMU_FEATURE_BRANCHES_MISPREDICTED		KVM_X86_PMU_FEATURE(EBX, 6)
+#define X86_PMU_FEATURE_TOPDOWN_SLOTS			KVM_X86_PMU_FEATURE(EBX, 7)
+
+#define X86_PMU_FEATURE_INSNS_RETIRED_FIXED		KVM_X86_PMU_FEATURE(ECX, 0)
+#define X86_PMU_FEATURE_CPU_CYCLES_FIXED		KVM_X86_PMU_FEATURE(ECX, 1)
+#define X86_PMU_FEATURE_REFERENCE_TSC_CYCLES_FIXED	KVM_X86_PMU_FEATURE(ECX, 2)
+#define X86_PMU_FEATURE_TOPDOWN_SLOTS_FIXED		KVM_X86_PMU_FEATURE(ECX, 3)
 
 static inline unsigned int x86_family(unsigned int eax)
 {
@@ -698,10 +715,16 @@ static __always_inline bool this_cpu_has_p(struct kvm_x86_cpu_property property)
 
 static inline bool this_pmu_has(struct kvm_x86_pmu_feature feature)
 {
-	uint32_t nr_bits = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+	uint32_t nr_bits;
 
-	return nr_bits > feature.anti_feature.bit &&
-	       !this_cpu_has(feature.anti_feature);
+	if (feature.f.reg == KVM_CPUID_EBX) {
+		nr_bits = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+		return nr_bits > feature.f.bit && !this_cpu_has(feature.f);
+	}
+
+	GUEST_ASSERT(feature.f.reg == KVM_CPUID_ECX);
+	nr_bits = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	return nr_bits > feature.f.bit || this_cpu_has(feature.f);
 }
 
 static __always_inline uint64_t this_cpu_supported_xcr0(void)
@@ -917,10 +940,16 @@ static __always_inline bool kvm_cpu_has_p(struct kvm_x86_cpu_property property)
 
 static inline bool kvm_pmu_has(struct kvm_x86_pmu_feature feature)
 {
-	uint32_t nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+	uint32_t nr_bits;
 
-	return nr_bits > feature.anti_feature.bit &&
-	       !kvm_cpu_has(feature.anti_feature);
+	if (feature.f.reg == KVM_CPUID_EBX) {
+		nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+		return nr_bits > feature.f.bit && !kvm_cpu_has(feature.f);
+	}
+
+	TEST_ASSERT_EQ(feature.f.reg, KVM_CPUID_ECX);
+	nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+	return nr_bits > feature.f.bit || kvm_cpu_has(feature.f);
 }
 
 static __always_inline uint64_t kvm_cpu_supported_xcr0(void)
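The enumeration rule the helpers encode, written out as a standalone sketch
over raw CPUID.0AH output (illustrative only):

  #include <stdbool.h>
  #include <stdint.h>

  /* FxCtr[i] is supported if CPUID.0AH:ECX[i] == 1 OR CPUID.0AH:EDX[4:0] > i. */
  static bool fixed_counter_supported(uint32_t cpuid_0a_ecx,
  				      uint32_t cpuid_0a_edx, unsigned int i)
  {
  	return (cpuid_0a_ecx & (1u << i)) || ((cpuid_0a_edx & 0x1f) > i);
  }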
linux-kernel@vger.kernel.org, Kan Liang , Dapeng Mi , Jim Mattson , Jinrong Liang , Aaron Lewis , Like Xu From: Jinrong Liang Add a PMU library for x86 selftests to help eliminate open-coded event encodings, and to reduce the amount of copy+paste between PMU selftests. Use the new common macro definitions in the existing PMU event filter test. Cc: Aaron Lewis Suggested-by: Sean Christopherson Signed-off-by: Jinrong Liang Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson --- tools/testing/selftests/kvm/Makefile | 1 + tools/testing/selftests/kvm/include/pmu.h | 97 ++++++++++++ tools/testing/selftests/kvm/lib/pmu.c | 31 ++++ .../kvm/x86_64/pmu_event_filter_test.c | 141 ++++++------------ 4 files changed, 173 insertions(+), 97 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/pmu.h create mode 100644 tools/testing/selftests/kvm/lib/pmu.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 492e937fab00..479bd85e1c56 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -23,6 +23,7 @@ LIBKVM += lib/guest_modes.c LIBKVM += lib/io.c LIBKVM += lib/kvm_util.c LIBKVM += lib/memstress.c +LIBKVM += lib/pmu.c LIBKVM += lib/guest_sprintf.c LIBKVM += lib/rbtree.c LIBKVM += lib/sparsebit.c diff --git a/tools/testing/selftests/kvm/include/pmu.h b/tools/testing/selftests/kvm/include/pmu.h new file mode 100644 index 000000000000..3c10c4dc0ae8 --- /dev/null +++ b/tools/testing/selftests/kvm/include/pmu.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023, Tencent, Inc. + */ +#ifndef SELFTEST_KVM_PMU_H +#define SELFTEST_KVM_PMU_H + +#include + +#define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300 + +/* + * Encode an eventsel+umask pair into event-select MSR format. Note, this is + * technically AMD's format, as Intel's format only supports 8 bits for the + * event selector, i.e. doesn't use bits 24:16 for the selector. But, OR-ing + * in '0' is a nop and won't clobber the CMASK. + */ +#define RAW_EVENT(eventsel, umask) (((eventsel & 0xf00UL) << 24) | \ + ((eventsel) & 0xff) | \ + ((umask) & 0xff) << 8) + +/* + * These are technically Intel's definitions, but except for CMASK (see above), + * AMD's layout is compatible with Intel's. + */ +#define ARCH_PERFMON_EVENTSEL_EVENT GENMASK_ULL(7, 0) +#define ARCH_PERFMON_EVENTSEL_UMASK GENMASK_ULL(15, 8) +#define ARCH_PERFMON_EVENTSEL_USR BIT_ULL(16) +#define ARCH_PERFMON_EVENTSEL_OS BIT_ULL(17) +#define ARCH_PERFMON_EVENTSEL_EDGE BIT_ULL(18) +#define ARCH_PERFMON_EVENTSEL_PIN_CONTROL BIT_ULL(19) +#define ARCH_PERFMON_EVENTSEL_INT BIT_ULL(20) +#define ARCH_PERFMON_EVENTSEL_ANY BIT_ULL(21) +#define ARCH_PERFMON_EVENTSEL_ENABLE BIT_ULL(22) +#define ARCH_PERFMON_EVENTSEL_INV BIT_ULL(23) +#define ARCH_PERFMON_EVENTSEL_CMASK GENMASK_ULL(31, 24) + +/* RDPMC control flags, Intel only. */ +#define INTEL_RDPMC_METRICS BIT_ULL(29) +#define INTEL_RDPMC_FIXED BIT_ULL(30) +#define INTEL_RDPMC_FAST BIT_ULL(31) + +/* Fixed PMC controls, Intel only. 
*/ +#define FIXED_PMC_GLOBAL_CTRL_ENABLE(_idx) BIT_ULL((32 + (_idx))) + +#define FIXED_PMC_KERNEL BIT_ULL(0) +#define FIXED_PMC_USER BIT_ULL(1) +#define FIXED_PMC_ANYTHREAD BIT_ULL(2) +#define FIXED_PMC_ENABLE_PMI BIT_ULL(3) +#define FIXED_PMC_NR_BITS 4 +#define FIXED_PMC_CTRL(_idx, _val) ((_val) << ((_idx) * FIXED_PMC_NR_BITS)) + +#define PMU_CAP_FW_WRITES BIT_ULL(13) +#define PMU_CAP_LBR_FMT 0x3f + +#define INTEL_ARCH_CPU_CYCLES RAW_EVENT(0x3c, 0x00) +#define INTEL_ARCH_INSTRUCTIONS_RETIRED RAW_EVENT(0xc0, 0x00) +#define INTEL_ARCH_REFERENCE_CYCLES RAW_EVENT(0x3c, 0x01) +#define INTEL_ARCH_LLC_REFERENCES RAW_EVENT(0x2e, 0x4f) +#define INTEL_ARCH_LLC_MISSES RAW_EVENT(0x2e, 0x41) +#define INTEL_ARCH_BRANCHES_RETIRED RAW_EVENT(0xc4, 0x00) +#define INTEL_ARCH_BRANCHES_MISPREDICTED RAW_EVENT(0xc5, 0x00) +#define INTEL_ARCH_TOPDOWN_SLOTS RAW_EVENT(0xa4, 0x01) + +#define AMD_ZEN_CORE_CYCLES RAW_EVENT(0x76, 0x00) +#define AMD_ZEN_INSTRUCTIONS_RETIRED RAW_EVENT(0xc0, 0x00) +#define AMD_ZEN_BRANCHES_RETIRED RAW_EVENT(0xc2, 0x00) +#define AMD_ZEN_BRANCHES_MISPREDICTED RAW_EVENT(0xc3, 0x00) + +/* + * Note! The order and thus the index of the architectural events matters as + * support for each event is enumerated via CPUID using the index of the event. + */ +enum intel_pmu_architectural_events { + INTEL_ARCH_CPU_CYCLES_INDEX, + INTEL_ARCH_INSTRUCTIONS_RETIRED_INDEX, + INTEL_ARCH_REFERENCE_CYCLES_INDEX, + INTEL_ARCH_LLC_REFERENCES_INDEX, + INTEL_ARCH_LLC_MISSES_INDEX, + INTEL_ARCH_BRANCHES_RETIRED_INDEX, + INTEL_ARCH_BRANCHES_MISPREDICTED_INDEX, + INTEL_ARCH_TOPDOWN_SLOTS_INDEX, + NR_INTEL_ARCH_EVENTS, +}; + +enum amd_pmu_zen_events { + AMD_ZEN_CORE_CYCLES_INDEX, + AMD_ZEN_INSTRUCTIONS_INDEX, + AMD_ZEN_BRANCHES_INDEX, + AMD_ZEN_BRANCH_MISSES_INDEX, + NR_AMD_ZEN_EVENTS, +}; + +extern const uint64_t intel_pmu_arch_events[]; +extern const uint64_t amd_pmu_zen_events[]; + +#endif /* SELFTEST_KVM_PMU_H */ diff --git a/tools/testing/selftests/kvm/lib/pmu.c b/tools/testing/selftests/kvm/lib/pmu.c new file mode 100644 index 000000000000..f31f0427c17c --- /dev/null +++ b/tools/testing/selftests/kvm/lib/pmu.c @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023, Tencent, Inc. + */ + +#include + +#include + +#include "kvm_util.h" +#include "pmu.h" + +const uint64_t intel_pmu_arch_events[] = { + INTEL_ARCH_CPU_CYCLES, + INTEL_ARCH_INSTRUCTIONS_RETIRED, + INTEL_ARCH_REFERENCE_CYCLES, + INTEL_ARCH_LLC_REFERENCES, + INTEL_ARCH_LLC_MISSES, + INTEL_ARCH_BRANCHES_RETIRED, + INTEL_ARCH_BRANCHES_MISPREDICTED, + INTEL_ARCH_TOPDOWN_SLOTS, +}; +kvm_static_assert(ARRAY_SIZE(intel_pmu_arch_events) == NR_INTEL_ARCH_EVENTS); + +const uint64_t amd_pmu_zen_events[] = { + AMD_ZEN_CORE_CYCLES, + AMD_ZEN_INSTRUCTIONS_RETIRED, + AMD_ZEN_BRANCHES_RETIRED, + AMD_ZEN_BRANCHES_MISPREDICTED, +}; +kvm_static_assert(ARRAY_SIZE(amd_pmu_zen_events) == NR_AMD_ZEN_EVENTS); diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 283cc55597a4..7ec9fbed92e0 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -11,72 +11,18 @@ */ #define _GNU_SOURCE /* for program_invocation_short_name */ -#include "test_util.h" + #include "kvm_util.h" +#include "pmu.h" #include "processor.h" - -/* - * In lieu of copying perf_event.h into tools... 
- */ -#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17) -#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22) - -/* End of stuff taken from perf_event.h. */ - -/* Oddly, this isn't in perf_event.h. */ -#define ARCH_PERFMON_BRANCHES_RETIRED 5 +#include "test_util.h" #define NUM_BRANCHES 42 -#define INTEL_PMC_IDX_FIXED 32 - -/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */ -#define MAX_FILTER_EVENTS 300 #define MAX_TEST_EVENTS 10 #define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) #define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAGS_VALID_MASK << 1) -#define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1) - -/* - * This is how the event selector and unit mask are stored in an AMD - * core performance event-select register. Intel's format is similar, - * but the event selector is only 8 bits. - */ -#define EVENT(select, umask) ((select & 0xf00UL) << 24 | (select & 0xff) | \ - (umask & 0xff) << 8) - -/* - * "Branch instructions retired", from the Intel SDM, volume 3, - * "Pre-defined Architectural Performance Events." - */ - -#define INTEL_BR_RETIRED EVENT(0xc4, 0) - -/* - * "Retired branch instructions", from Processor Programming Reference - * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, - * Preliminary Processor Programming Reference (PPR) for AMD Family - * 17h Model 31h, Revision B0 Processors, and Preliminary Processor - * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision - * B1 Processors Volume 1 of 2. - */ - -#define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0) - - -/* - * "Retired instructions", from Processor Programming Reference - * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, - * Preliminary Processor Programming Reference (PPR) for AMD Family - * 17h Model 31h, Revision B0 Processors, and Preliminary Processor - * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision - * B1 Processors Volume 1 of 2. - * --- and --- - * "Instructions retired", from the Intel SDM, volume 3, - * "Pre-defined Architectural Performance Events." - */ - -#define INST_RETIRED EVENT(0xc0, 0) +#define PMU_EVENT_FILTER_INVALID_NEVENTS (KVM_PMU_EVENT_FILTER_MAX_EVENTS + 1) struct __kvm_pmu_event_filter { __u32 action; @@ -84,26 +30,28 @@ struct __kvm_pmu_event_filter { __u32 fixed_counter_bitmap; __u32 flags; __u32 pad[4]; - __u64 events[MAX_FILTER_EVENTS]; + __u64 events[KVM_PMU_EVENT_FILTER_MAX_EVENTS]; }; /* - * This event list comprises Intel's eight architectural events plus - * AMD's "retired branch instructions" for Zen[123] (and possibly - * other AMD CPUs). + * This event list comprises Intel's known architectural events, plus AMD's + * "retired branch instructions" for Zen1-Zen3 (and* possibly other AMD CPUs). + * Note, AMD and Intel use the same encoding for instructions retired. 
*/ +kvm_static_assert(INTEL_ARCH_INSTRUCTIONS_RETIRED == AMD_ZEN_INSTRUCTIONS_RETIRED); + static const struct __kvm_pmu_event_filter base_event_filter = { .nevents = ARRAY_SIZE(base_event_filter.events), .events = { - EVENT(0x3c, 0), - INST_RETIRED, - EVENT(0x3c, 1), - EVENT(0x2e, 0x4f), - EVENT(0x2e, 0x41), - EVENT(0xc4, 0), - EVENT(0xc5, 0), - EVENT(0xa4, 1), - AMD_ZEN_BR_RETIRED, + INTEL_ARCH_CPU_CYCLES, + INTEL_ARCH_INSTRUCTIONS_RETIRED, + INTEL_ARCH_REFERENCE_CYCLES, + INTEL_ARCH_LLC_REFERENCES, + INTEL_ARCH_LLC_MISSES, + INTEL_ARCH_BRANCHES_RETIRED, + INTEL_ARCH_BRANCHES_MISPREDICTED, + INTEL_ARCH_TOPDOWN_SLOTS, + AMD_ZEN_BRANCHES_RETIRED, }, }; @@ -165,9 +113,9 @@ static void intel_guest_code(void) for (;;) { wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | - ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED); + ARCH_PERFMON_EVENTSEL_OS | INTEL_ARCH_BRANCHES_RETIRED); wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE | - ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED); + ARCH_PERFMON_EVENTSEL_OS | INTEL_ARCH_INSTRUCTIONS_RETIRED); wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3); run_and_measure_loop(MSR_IA32_PMC0); @@ -189,9 +137,9 @@ static void amd_guest_code(void) for (;;) { wrmsr(MSR_K7_EVNTSEL0, 0); wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | - ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED); + ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BRANCHES_RETIRED); wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE | - ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED); + ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_INSTRUCTIONS_RETIRED); run_and_measure_loop(MSR_K7_PERFCTR0); GUEST_SYNC(0); @@ -312,7 +260,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu) .action = KVM_PMU_EVENT_DENY, .nevents = 1, .events = { - EVENT(0x1C2, 0), + RAW_EVENT(0x1C2, 0), }, }; @@ -347,9 +295,9 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu) f.action = KVM_PMU_EVENT_DENY; - remove_event(&f, INST_RETIRED); - remove_event(&f, INTEL_BR_RETIRED); - remove_event(&f, AMD_ZEN_BR_RETIRED); + remove_event(&f, INTEL_ARCH_INSTRUCTIONS_RETIRED); + remove_event(&f, INTEL_ARCH_BRANCHES_RETIRED); + remove_event(&f, AMD_ZEN_BRANCHES_RETIRED); test_with_filter(vcpu, &f); ASSERT_PMC_COUNTING_INSTRUCTIONS(); @@ -361,9 +309,9 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu) f.action = KVM_PMU_EVENT_ALLOW; - remove_event(&f, INST_RETIRED); - remove_event(&f, INTEL_BR_RETIRED); - remove_event(&f, AMD_ZEN_BR_RETIRED); + remove_event(&f, INTEL_ARCH_INSTRUCTIONS_RETIRED); + remove_event(&f, INTEL_ARCH_BRANCHES_RETIRED); + remove_event(&f, AMD_ZEN_BRANCHES_RETIRED); test_with_filter(vcpu, &f); ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(); @@ -452,9 +400,9 @@ static bool use_amd_pmu(void) * - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake. */ #define MEM_INST_RETIRED 0xD0 -#define MEM_INST_RETIRED_LOAD EVENT(MEM_INST_RETIRED, 0x81) -#define MEM_INST_RETIRED_STORE EVENT(MEM_INST_RETIRED, 0x82) -#define MEM_INST_RETIRED_LOAD_STORE EVENT(MEM_INST_RETIRED, 0x83) +#define MEM_INST_RETIRED_LOAD RAW_EVENT(MEM_INST_RETIRED, 0x81) +#define MEM_INST_RETIRED_STORE RAW_EVENT(MEM_INST_RETIRED, 0x82) +#define MEM_INST_RETIRED_LOAD_STORE RAW_EVENT(MEM_INST_RETIRED, 0x83) static bool supports_event_mem_inst_retired(void) { @@ -486,9 +434,9 @@ static bool supports_event_mem_inst_retired(void) * B1 Processors Volume 1 of 2. 
*/ #define LS_DISPATCH 0x29 -#define LS_DISPATCH_LOAD EVENT(LS_DISPATCH, BIT(0)) -#define LS_DISPATCH_STORE EVENT(LS_DISPATCH, BIT(1)) -#define LS_DISPATCH_LOAD_STORE EVENT(LS_DISPATCH, BIT(2)) +#define LS_DISPATCH_LOAD RAW_EVENT(LS_DISPATCH, BIT(0)) +#define LS_DISPATCH_STORE RAW_EVENT(LS_DISPATCH, BIT(1)) +#define LS_DISPATCH_LOAD_STORE RAW_EVENT(LS_DISPATCH, BIT(2)) #define INCLUDE_MASKED_ENTRY(event_select, mask, match) \ KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, false) @@ -729,14 +677,14 @@ static void add_dummy_events(uint64_t *events, int nevents) static void test_masked_events(struct kvm_vcpu *vcpu) { - int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; - uint64_t events[MAX_FILTER_EVENTS]; + int nevents = KVM_PMU_EVENT_FILTER_MAX_EVENTS - MAX_TEST_EVENTS; + uint64_t events[KVM_PMU_EVENT_FILTER_MAX_EVENTS]; /* Run the test cases against a sparse PMU event filter. */ run_masked_events_tests(vcpu, events, 0); /* Run the test cases against a dense PMU event filter. */ - add_dummy_events(events, MAX_FILTER_EVENTS); + add_dummy_events(events, KVM_PMU_EVENT_FILTER_MAX_EVENTS); run_masked_events_tests(vcpu, events, nevents); } @@ -809,20 +757,19 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed"); } -static void intel_run_fixed_counter_guest_code(uint8_t fixed_ctr_idx) +static void intel_run_fixed_counter_guest_code(uint8_t idx) { for (;;) { wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); - wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + idx, 0); /* Only OS_EN bit is enabled for fixed counter[idx]. */ - wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx)); - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, - BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx)); + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, FIXED_PMC_CTRL(idx, FIXED_PMC_KERNEL)); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, FIXED_PMC_GLOBAL_CTRL_ENABLE(idx)); __asm__ __volatile__("loop ." 
: "+c"((int){NUM_BRANCHES})); wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); - GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx)); + GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + idx)); } } From patchwork Tue Jan 9 23:02:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13515518 Received: from mail-pl1-f202.google.com (mail-pl1-f202.google.com [209.85.214.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F1EBC481D9 for ; Tue, 9 Jan 2024 23:03:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="MyHoDHcW" Received: by mail-pl1-f202.google.com with SMTP id d9443c01a7336-1d4b8ad631dso18728295ad.0 for ; Tue, 09 Jan 2024 15:03:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1704841404; x=1705446204; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=OgXwJ4yQRsz0JMWYuWQElNHrlAP9hF6RPqvYSgqXiIo=; b=MyHoDHcWBWfaxO+nT07Yb2HqWQENsz7d+bBexRqSnIdsrSe2VgFz/CH2JoIO9p2bYL FwJM9A+5atIiiUwc2ZWBsOB2QSxAHPZsdJpoWd/vlS+7dbNj2M6x3WhAGLfVerXvBXfh ACZ8LOtnWYLAmi6ql3IVkZtxpvSatEPGo5LDqdYJ5oqgdbIWURfjMLqLiZZeG+/bGmNA QFXZxoaC/YBIaEBT5vND/ex9Z+klBz16FsAANam4QnI9Bhe09MkkYMCnGQ3l5LjA/dbU s0Wz/QZGkqHFHcr+tfNFv/G6+oNwRlDc+K5XHSIU89a2XKYM7k5/6maF3hE4lqbJetc+ xxrg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704841404; x=1705446204; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=OgXwJ4yQRsz0JMWYuWQElNHrlAP9hF6RPqvYSgqXiIo=; b=DPsuoHvy9r8IxF7B91hh6CFyR1jFvKLv2nIL/r48KALOCdTSQ4rx1qm2KfzBdVkYZn si1sd4wByXLqry89c2TrquuvypW8j4c+ONvbeo3yrc0CbCQFvITuoI2y8sVFdR3lrjKX NWa9ijgSKfaZ6r6EFkLgA6lrV1MErYDwf2FNUE3VKbb2x6dooMUDyLTjdPOf9edEuJjv V1v0QerBQ8esxp9RiliY3dgQ4oWiY9m7xUbaqnqEGBWAKYToZO7C6eTDPr30tkwmrWH5 Vx3advXnApJAYHwNSonJrsIrNm9TRFIksMg6PqL2bB/YLJsGgBmNp0qtQE9DpGTSZn8Y la6A== X-Gm-Message-State: AOJu0YzGVfjhdQKKOJTUDavCZWpM4vuwH7TJlSRdnxLJB9RCkWTyNKe+ KhqDfAMsjGGLYdz28jIISONh99DlNKWIy5fUsw== X-Google-Smtp-Source: AGHT+IH4wVY3UVm1xVLvg95hbqrMYTU8oSweHUmsVr4tQdIs71R+ukOwV3kRQVUu8anE4RvpTaziU17FbbQ= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:f80f:b0:1d4:f804:6fb6 with SMTP id ix15-20020a170902f80f00b001d4f8046fb6mr583plb.3.1704841404383; Tue, 09 Jan 2024 15:03:24 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 9 Jan 2024 15:02:36 -0800 In-Reply-To: <20240109230250.424295-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240109230250.424295-1-seanjc@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20240109230250.424295-17-seanjc@google.com> Subject: [PATCH v10 16/29] KVM: selftests: Test Intel PMU architectural events on gp counters From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: 
From patchwork Tue Jan 9 23:02:36 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515518
Date: Tue, 9 Jan 2024 15:02:36 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-17-seanjc@google.com>
Subject: [PATCH v10 16/29] KVM: selftests: Test Intel PMU architectural events on gp counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Add test cases to verify that Intel's Architectural PMU events work as expected when they are available according to guest CPUID. Iterate over a range of sane PMU versions, with and without full-width writes enabled, and over interesting combinations of lengths/masks for the bit vector that enumerates unavailable events.

Test up to vPMU version 5, i.e. the current architectural max. KVM only officially supports up to version 2, but the behavior of the counters is backwards compatible, i.e. KVM shouldn't do something completely different for a higher, architecturally-defined vPMU version. Verify KVM behavior against the effective vPMU version, e.g. advertising vPMU 5 when KVM only supports vPMU 2 shouldn't magically unlock vPMU 5 features.

According to the Intel SDM, the number of architectural events is reported through CPUID.0AH:EAX[31:24], and architectural event x is supported if EBX[x]=0 && EAX[31:24]>x.

Handcode the entirety of the measured section so that the test can precisely assert on the number of instructions and branches retired.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/pmu_counters_test.c  | 321 ++++++++++++++++++
 2 files changed, 322 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_counters_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 479bd85e1c56..ab96fc80bfbd 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -81,6 +81,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_counters_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_conversions_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_kvm_exits_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
new file mode 100644
index 000000000000..5b8687bb4639
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -0,0 +1,321 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023, Tencent, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <x86intrin.h>
+
+#include "pmu.h"
+#include "processor.h"
+
+/* Number of LOOP instructions for the guest measurement payload. */
+#define NUM_BRANCHES 10
+/*
+ * Number of "extra" instructions that will be counted, i.e. the number of
+ * instructions that are needed to set up the loop and then disable the
+ * counter.  2 MOV, 2 XOR, 1 WRMSR.
+ */
+#define NUM_EXTRA_INSNS 5
+#define NUM_INSNS_RETIRED (NUM_BRANCHES + NUM_EXTRA_INSNS)
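[Editor's note: the "extra" instruction budget is worth spelling out, since the asm blob it refers to only appears later in this patch. Assuming, as the changelog implies, that the enabling WRMSR retires before the counter starts ticking, the measured sequence accounts for NUM_BRANCHES + 5 retired instructions:

	mov $NUM_BRANCHES, %ecx		/* 1 insn: loop count */
	loop .				/* NUM_BRANCHES insns, all branches */
	mov %edi, %ecx			/* 1 insn: reload the MSR index */
	xor %eax, %eax			/* 1 insn: clear low enable bits */
	xor %edx, %edx			/* 1 insn: clear high enable bits */
	wrmsr				/* 1 insn: disable; retires while counting */

This annotation is the editor's reading of the patch, not part of it.]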
+
+static uint8_t kvm_pmu_version;
+static bool kvm_has_perf_caps;
+
+static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+						  void *guest_code,
+						  uint8_t pmu_version,
+						  uint64_t perf_capabilities)
+{
+	struct kvm_vm *vm;
+
+	vm = vm_create_with_one_vcpu(vcpu, guest_code);
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(*vcpu);
+
+	sync_global_to_guest(vm, kvm_pmu_version);
+
+	/*
+	 * Set PERF_CAPABILITIES before PMU version as KVM disallows enabling
+	 * features via PERF_CAPABILITIES if the guest doesn't have a vPMU.
+	 */
+	if (kvm_has_perf_caps)
+		vcpu_set_msr(*vcpu, MSR_IA32_PERF_CAPABILITIES, perf_capabilities);
+
+	vcpu_set_cpuid_property(*vcpu, X86_PROPERTY_PMU_VERSION, pmu_version);
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+
+	do {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			break;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			break;
+		case UCALL_PRINTF:
+			pr_info("%s", uc.buffer);
+			break;
+		case UCALL_DONE:
+			break;
+		default:
+			TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+		}
+	} while (uc.cmd != UCALL_DONE);
+}
+
+static uint8_t guest_get_pmu_version(void)
+{
+	/*
+	 * Return the effective PMU version, i.e. the minimum between what KVM
+	 * supports and what is enumerated to the guest.  The host deliberately
+	 * advertises a PMU version to the guest beyond what is actually
+	 * supported by KVM to verify KVM doesn't freak out and do something
+	 * bizarre with an architecturally valid, but unsupported, version.
+	 */
+	return min_t(uint8_t, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU_VERSION));
+}
+
+/*
+ * If an architectural event is supported and guaranteed to generate at least
+ * one "hit", assert that its count is non-zero.  If an event isn't supported
+ * or the test can't guarantee the associated action will occur, then all bets
+ * are off regarding the count, i.e. no checks can be done.
+ *
+ * Sanity check that in all cases, the event doesn't count when it's disabled,
+ * and that KVM correctly emulates the write of an arbitrary value.
+ */
+static void guest_assert_event_count(uint8_t idx,
+				     struct kvm_x86_pmu_feature event,
+				     uint32_t pmc, uint32_t pmc_msr)
+{
+	uint64_t count;
+
+	count = _rdpmc(pmc);
+	if (!this_pmu_has(event))
+		goto sanity_checks;
+
+	switch (idx) {
+	case INTEL_ARCH_INSTRUCTIONS_RETIRED_INDEX:
+		GUEST_ASSERT_EQ(count, NUM_INSNS_RETIRED);
+		break;
+	case INTEL_ARCH_BRANCHES_RETIRED_INDEX:
+		GUEST_ASSERT_EQ(count, NUM_BRANCHES);
+		break;
+	case INTEL_ARCH_CPU_CYCLES_INDEX:
+	case INTEL_ARCH_REFERENCE_CYCLES_INDEX:
+		GUEST_ASSERT_NE(count, 0);
+		break;
+	default:
+		break;
+	}
+
+sanity_checks:
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	GUEST_ASSERT_EQ(_rdpmc(pmc), count);
+
+	wrmsr(pmc_msr, 0xdead);
+	GUEST_ASSERT_EQ(_rdpmc(pmc), 0xdead);
+}
+
+static void __guest_test_arch_event(uint8_t idx, struct kvm_x86_pmu_feature event,
+				    uint32_t pmc, uint32_t pmc_msr,
+				    uint32_t ctrl_msr, uint64_t ctrl_msr_value)
+{
+	wrmsr(pmc_msr, 0);
+
+	/*
+	 * Enable and disable the PMC in a monolithic asm blob to ensure that
+	 * the compiler can't insert _any_ code into the measured sequence.
+	 * Note, ECX doesn't need to be clobbered as the input value, @ctrl_msr,
+	 * is restored before the end of the sequence.
+	 */
+	__asm__ __volatile__("wrmsr\n\t"
+			     "mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t"
+			     "loop .\n\t"
+			     "mov %%edi, %%ecx\n\t"
+			     "xor %%eax, %%eax\n\t"
+			     "xor %%edx, %%edx\n\t"
+			     "wrmsr\n\t"
+			     :: "a"((uint32_t)ctrl_msr_value),
+				"d"(ctrl_msr_value >> 32),
+				"c"(ctrl_msr), "D"(ctrl_msr)
+	);
+
+	guest_assert_event_count(idx, event, pmc, pmc_msr);
+}
+
+static void guest_test_arch_event(uint8_t idx)
+{
+	const struct {
+		struct kvm_x86_pmu_feature gp_event;
+	} intel_event_to_feature[] = {
+		[INTEL_ARCH_CPU_CYCLES_INDEX]		 = { X86_PMU_FEATURE_CPU_CYCLES },
+		[INTEL_ARCH_INSTRUCTIONS_RETIRED_INDEX]	 = { X86_PMU_FEATURE_INSNS_RETIRED },
+		[INTEL_ARCH_REFERENCE_CYCLES_INDEX]	 = { X86_PMU_FEATURE_REFERENCE_CYCLES },
+		[INTEL_ARCH_LLC_REFERENCES_INDEX]	 = { X86_PMU_FEATURE_LLC_REFERENCES },
+		[INTEL_ARCH_LLC_MISSES_INDEX]		 = { X86_PMU_FEATURE_LLC_MISSES },
+		[INTEL_ARCH_BRANCHES_RETIRED_INDEX]	 = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED },
+		[INTEL_ARCH_BRANCHES_MISPREDICTED_INDEX] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED },
+		[INTEL_ARCH_TOPDOWN_SLOTS_INDEX]	 = { X86_PMU_FEATURE_TOPDOWN_SLOTS },
+	};
+
+	uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+	uint32_t pmu_version = guest_get_pmu_version();
+	/* PERF_GLOBAL_CTRL exists only for Architectural PMU Version 2+. */
+	bool guest_has_perf_global_ctrl = pmu_version >= 2;
+	struct kvm_x86_pmu_feature gp_event;
+	uint32_t base_pmc_msr;
+	unsigned int i;
+
+	/* The host side shouldn't invoke this without a guest PMU. */
+	GUEST_ASSERT(pmu_version);
+
+	if (this_cpu_has(X86_FEATURE_PDCM) &&
+	    rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+		base_pmc_msr = MSR_IA32_PMC0;
+	else
+		base_pmc_msr = MSR_IA32_PERFCTR0;
+
+	gp_event = intel_event_to_feature[idx].gp_event;
+	GUEST_ASSERT_EQ(idx, gp_event.f.bit);
+
+	GUEST_ASSERT(nr_gp_counters);
+
+	for (i = 0; i < nr_gp_counters; i++) {
+		uint64_t eventsel = ARCH_PERFMON_EVENTSEL_OS |
+				    ARCH_PERFMON_EVENTSEL_ENABLE |
+				    intel_pmu_arch_events[idx];
+
+		wrmsr(MSR_P6_EVNTSEL0 + i, 0);
+		if (guest_has_perf_global_ctrl)
+			wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, BIT_ULL(i));
+
+		__guest_test_arch_event(idx, gp_event, i, base_pmc_msr + i,
+					MSR_P6_EVNTSEL0 + i, eventsel);
+	}
+}
+
+static void guest_test_arch_events(void)
+{
+	uint8_t i;
+
+	for (i = 0; i < NR_INTEL_ARCH_EVENTS; i++)
+		guest_test_arch_event(i);
+
+	GUEST_DONE();
+}
+
+static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
+			     uint8_t length, uint8_t unavailable_mask)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	/* Testing arch events requires a vPMU (there are no negative tests). */
+	if (!pmu_version)
+		return;
+
+	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_arch_events,
+					 pmu_version, perf_capabilities);
+
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH,
+				length);
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EVENTS_MASK,
+				unavailable_mask);
+
+	run_vcpu(vcpu);
+
+	kvm_vm_free(vm);
+}
+
+static void test_intel_counters(void)
+{
+	uint8_t nr_arch_events = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+	uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
+	unsigned int i;
+	uint8_t v, j;
+	uint32_t k;
+
+	const uint64_t perf_caps[] = {
+		0,
+		PMU_CAP_FW_WRITES,
+	};
+
+	/*
+	 * Test up to PMU v5, which is the current maximum version defined by
+	 * Intel, i.e. is the last version that is guaranteed to be backwards
+	 * compatible with KVM's existing behavior.
+	 */
+	uint8_t max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
+
+	/*
+	 * Detect the existence of events that aren't supported by selftests.
+	 * This will (obviously) fail any time the kernel adds support for a
+	 * new event, but it's worth paying that price to keep the test fresh.
+	 */
+	TEST_ASSERT(nr_arch_events <= NR_INTEL_ARCH_EVENTS,
+		    "New architectural event(s) detected; please update this test (length = %u, mask = %x)",
+		    nr_arch_events, kvm_cpu_property(X86_PROPERTY_PMU_EVENTS_MASK));
+
+	/*
+	 * Force iterating over known arch events regardless of whether or not
+	 * KVM/hardware supports a given event.
+	 */
+	nr_arch_events = max_t(typeof(nr_arch_events), nr_arch_events, NR_INTEL_ARCH_EVENTS);
+
+	for (v = 0; v <= max_pmu_version; v++) {
+		for (i = 0; i < ARRAY_SIZE(perf_caps); i++) {
+			if (!kvm_has_perf_caps && perf_caps[i])
+				continue;
+
+			pr_info("Testing arch events, PMU version %u, perf_caps = %lx\n",
+				v, perf_caps[i]);
+			/*
+			 * To keep the total runtime reasonable, test every
+			 * possible non-zero, non-reserved bitmap combination
+			 * only with the native PMU version and the full bit
+			 * vector length.
+			 */
+			if (v == pmu_version) {
+				for (k = 1; k < (BIT(nr_arch_events) - 1); k++)
+					test_arch_events(v, perf_caps[i], nr_arch_events, k);
+			}
+			/*
+			 * Test single bits for all PMU versions and lengths up
+			 * to the number of events + 1 (to verify KVM doesn't do
+			 * weird things if the guest length is greater than the
+			 * host length).  Explicitly test a mask of '0' and all
+			 * ones, i.e. all events being available and unavailable.
+			 */
+			for (j = 0; j <= nr_arch_events + 1; j++) {
+				test_arch_events(v, perf_caps[i], j, 0);
+				test_arch_events(v, perf_caps[i], j, 0xff);
+
+				for (k = 0; k < nr_arch_events; k++)
+					test_arch_events(v, perf_caps[i], j, BIT(k));
+			}
+		}
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+
+	TEST_REQUIRE(host_cpu_is_intel);
+	TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
+	TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0);
+
+	kvm_pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
+	kvm_has_perf_caps = kvm_cpu_has(X86_FEATURE_PDCM);
+
+	test_intel_counters();
+
+	return 0;
+}

From patchwork Tue Jan 9 23:02:37 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515519
Date: Tue, 9 Jan 2024 15:02:37 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-18-seanjc@google.com>
Subject: [PATCH v10 17/29] KVM: selftests: Test Intel PMU architectural events on fixed counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Extend the PMU counters test to validate architectural events using fixed counters. The core logic is largely the same, the biggest difference being that if a fixed counter exists, its associated event is available (the SDM doesn't explicitly state this to be true, but it's KVM's ABI, and letting software program a fixed counter that doesn't actually count would be quite bizarre).

Note, fixed counters rely on PERF_GLOBAL_CTRL.
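[Editor's note: the diff below reads fixed counters via _rdpmc(i | INTEL_RDPMC_FIXED). Per the SDM, RDPMC addresses fixed-function counters by setting bit 30 of the index in ECX; INTEL_RDPMC_FIXED is presumably BIT_ULL(30) in the selftests' pmu.h. A minimal sketch of the idiom, under that assumption:

	/* Sketch: read fixed counter i; bit 30 selects the fixed-counter space. */
	static inline uint64_t rd_fixed_pmc(uint8_t i)
	{
		return _rdpmc(INTEL_RDPMC_FIXED | i);
	}
]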
Reviewed-by: Jim Mattson
Reviewed-by: Dapeng Mi
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 54 +++++++++++++++----
 1 file changed, 45 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 5b8687bb4639..663e8fbe7ff8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -150,26 +150,46 @@ static void __guest_test_arch_event(uint8_t idx, struct kvm_x86_pmu_feature even
 	guest_assert_event_count(idx, event, pmc, pmc_msr);
 }

+#define X86_PMU_FEATURE_NULL						\
+({									\
+	struct kvm_x86_pmu_feature feature = {};			\
+									\
+	feature;							\
+})
+
+static bool pmu_is_null_feature(struct kvm_x86_pmu_feature event)
+{
+	return !(*(u64 *)&event);
+}
+
 static void guest_test_arch_event(uint8_t idx)
 {
 	const struct {
 		struct kvm_x86_pmu_feature gp_event;
+		struct kvm_x86_pmu_feature fixed_event;
 	} intel_event_to_feature[] = {
-		[INTEL_ARCH_CPU_CYCLES_INDEX]		 = { X86_PMU_FEATURE_CPU_CYCLES },
-		[INTEL_ARCH_INSTRUCTIONS_RETIRED_INDEX]	 = { X86_PMU_FEATURE_INSNS_RETIRED },
-		[INTEL_ARCH_REFERENCE_CYCLES_INDEX]	 = { X86_PMU_FEATURE_REFERENCE_CYCLES },
-		[INTEL_ARCH_LLC_REFERENCES_INDEX]	 = { X86_PMU_FEATURE_LLC_REFERENCES },
-		[INTEL_ARCH_LLC_MISSES_INDEX]		 = { X86_PMU_FEATURE_LLC_MISSES },
-		[INTEL_ARCH_BRANCHES_RETIRED_INDEX]	 = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED },
-		[INTEL_ARCH_BRANCHES_MISPREDICTED_INDEX] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED },
-		[INTEL_ARCH_TOPDOWN_SLOTS_INDEX]	 = { X86_PMU_FEATURE_TOPDOWN_SLOTS },
+		[INTEL_ARCH_CPU_CYCLES_INDEX]		 = { X86_PMU_FEATURE_CPU_CYCLES, X86_PMU_FEATURE_CPU_CYCLES_FIXED },
+		[INTEL_ARCH_INSTRUCTIONS_RETIRED_INDEX]	 = { X86_PMU_FEATURE_INSNS_RETIRED, X86_PMU_FEATURE_INSNS_RETIRED_FIXED },
+		/*
+		 * Note, the fixed counter for reference cycles is NOT the same
+		 * as the general purpose architectural event.  The fixed counter
+		 * explicitly counts at the same frequency as the TSC, whereas
+		 * the GP event counts at a fixed, but uarch specific, frequency.
+		 * Bundle them here for simplicity.
+		 */
+		[INTEL_ARCH_REFERENCE_CYCLES_INDEX]	 = { X86_PMU_FEATURE_REFERENCE_CYCLES, X86_PMU_FEATURE_REFERENCE_TSC_CYCLES_FIXED },
+		[INTEL_ARCH_LLC_REFERENCES_INDEX]	 = { X86_PMU_FEATURE_LLC_REFERENCES, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_LLC_MISSES_INDEX]		 = { X86_PMU_FEATURE_LLC_MISSES, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_BRANCHES_RETIRED_INDEX]	 = { X86_PMU_FEATURE_BRANCH_INSNS_RETIRED, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_BRANCHES_MISPREDICTED_INDEX] = { X86_PMU_FEATURE_BRANCHES_MISPREDICTED, X86_PMU_FEATURE_NULL },
+		[INTEL_ARCH_TOPDOWN_SLOTS_INDEX]	 = { X86_PMU_FEATURE_TOPDOWN_SLOTS, X86_PMU_FEATURE_TOPDOWN_SLOTS_FIXED },
 	};

 	uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
 	uint32_t pmu_version = guest_get_pmu_version();
 	/* PERF_GLOBAL_CTRL exists only for Architectural PMU Version 2+.
 */
 	bool guest_has_perf_global_ctrl = pmu_version >= 2;
-	struct kvm_x86_pmu_feature gp_event;
+	struct kvm_x86_pmu_feature gp_event, fixed_event;
 	uint32_t base_pmc_msr;
 	unsigned int i;
@@ -199,6 +219,22 @@ static void guest_test_arch_event(uint8_t idx)
 		__guest_test_arch_event(idx, gp_event, i, base_pmc_msr + i,
 					MSR_P6_EVNTSEL0 + i, eventsel);
 	}
+
+	if (!guest_has_perf_global_ctrl)
+		return;
+
+	fixed_event = intel_event_to_feature[idx].fixed_event;
+	if (pmu_is_null_feature(fixed_event) || !this_pmu_has(fixed_event))
+		return;
+
+	i = fixed_event.f.bit;
+
+	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, FIXED_PMC_CTRL(i, FIXED_PMC_KERNEL));
+
+	__guest_test_arch_event(idx, fixed_event, i | INTEL_RDPMC_FIXED,
+				MSR_CORE_PERF_FIXED_CTR0 + i,
+				MSR_CORE_PERF_GLOBAL_CTRL,
+				FIXED_PMC_GLOBAL_CTRL_ENABLE(i));
 }

 static void guest_test_arch_events(void)

From patchwork Tue Jan 9 23:02:38 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515520
Date: Tue, 9 Jan 2024 15:02:38 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-19-seanjc@google.com>
Subject: [PATCH v10 18/29] KVM: selftests: Test consistency of CPUID with num of gp counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Add a test to verify that KVM correctly emulates MSR-based accesses to general purpose counters based on guest CPUID, e.g. that accesses to non-existent counters #GP and accesses to existent counters succeed.

Note, for compatibility reasons, KVM does not emulate #GP when MSR_P6_PERFCTR[0|1] is not present (writes should be dropped).

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 99 +++++++++++++++++++
 1 file changed, 99 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 663e8fbe7ff8..863418842ef8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -270,9 +270,103 @@ static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
 	kvm_vm_free(vm);
 }

+/*
+ * Limit testing to MSRs that are actually defined by Intel (in the SDM).  MSRs
+ * that aren't defined counter MSRs *probably* don't exist, but there's no
+ * guarantee that currently undefined MSR indices won't be used for something
+ * other than PMCs in the future.
+ */
+#define MAX_NR_GP_COUNTERS	8
+#define MAX_NR_FIXED_COUNTERS	3
+
+#define GUEST_ASSERT_PMC_MSR_ACCESS(insn, msr, expect_gp, vector)	\
+__GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector,		\
+	       "Expected %s on " #insn "(0x%x), got vector %u",		\
+	       expect_gp ? "#GP" : "no fault", msr, vector)		\
+
+#define GUEST_ASSERT_PMC_VALUE(insn, msr, val, expected_val)		\
+	__GUEST_ASSERT(val == expected_val,				\
+		       "Expected " #insn "(0x%x) to yield 0x%lx, got 0x%lx",	\
+		       msr, expected_val, val);
+
+static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters,
+				 uint8_t nr_counters)
+{
+	uint8_t i;
+
+	for (i = 0; i < nr_possible_counters; i++) {
+		/*
+		 * TODO: Test a value that validates full-width writes and the
+		 * width of the counters.
+		 */
+		const uint64_t test_val = 0xffff;
+		const uint32_t msr = base_msr + i;
+		const bool expect_success = i < nr_counters;
+
+		/*
+		 * KVM drops writes to MSR_P6_PERFCTR[0|1] if the counters are
+		 * unsupported, i.e. doesn't #GP and reads back '0'.
+		 */
+		const uint64_t expected_val = expect_success ? test_val : 0;
+		const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
+				       msr != MSR_P6_PERFCTR1;
+		uint8_t vector;
+		uint64_t val;
+
+		vector = wrmsr_safe(msr, test_val);
+		GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
+
+		vector = rdmsr_safe(msr, &val);
+		GUEST_ASSERT_PMC_MSR_ACCESS(RDMSR, msr, expect_gp, vector);
+
+		/* On #GP, the result of RDMSR is undefined.
 */
+		if (!expect_gp)
+			GUEST_ASSERT_PMC_VALUE(RDMSR, msr, val, expected_val);
+
+		vector = wrmsr_safe(msr, 0);
+		GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
+	}
+	GUEST_DONE();
+}
+
+static void guest_test_gp_counters(void)
+{
+	uint8_t nr_gp_counters = 0;
+	uint32_t base_msr;
+
+	if (guest_get_pmu_version())
+		nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+
+	if (this_cpu_has(X86_FEATURE_PDCM) &&
+	    rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+		base_msr = MSR_IA32_PMC0;
+	else
+		base_msr = MSR_IA32_PERFCTR0;
+
+	guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters);
+}
+
+static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
+			     uint8_t nr_gp_counters)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_gp_counters,
+					 pmu_version, perf_capabilities);
+
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_GP_COUNTERS,
+				nr_gp_counters);
+
+	run_vcpu(vcpu);
+
+	kvm_vm_free(vm);
+}
+
 static void test_intel_counters(void)
 {
 	uint8_t nr_arch_events = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+	uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
 	uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
 	unsigned int i;
 	uint8_t v, j;
@@ -336,6 +430,11 @@ static void test_intel_counters(void)
 			for (k = 0; k < nr_arch_events; k++)
 				test_arch_events(v, perf_caps[i], j, BIT(k));
 			}
+
+			pr_info("Testing GP counters, PMU version %u, perf_caps = %lx\n",
+				v, perf_caps[i]);
+			for (j = 0; j <= nr_gp_counters; j++)
+				test_gp_counters(v, perf_caps[i], j);
 		}
 	}
 }
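[Editor's note: the access rules the test above encodes are easy to lose in the diff. As a sketch (hypothetical helper, mirroring the test's logic, not code from the series): a write to a GP counter MSR should fault if and only if the counter isn't enumerated AND the MSR isn't one of the two legacy P6 counters that KVM quirkily never faults on:

	static bool gp_counter_wrmsr_should_fault(uint32_t msr, uint8_t i,
						  uint8_t nr_gp_counters)
	{
		if (i < nr_gp_counters)
			return false;	/* counter exists, access must succeed */

		/* KVM drops, rather than faults, writes to MSR_P6_PERFCTR0/1. */
		return msr != MSR_P6_PERFCTR0 && msr != MSR_P6_PERFCTR1;
	}
]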
From patchwork Tue Jan 9 23:02:39 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515521
Date: Tue, 9 Jan 2024 15:02:39 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-20-seanjc@google.com>
Subject: [PATCH v10 19/29] KVM: selftests: Test consistency of CPUID with num of fixed counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Extend the PMU counters test to verify KVM emulation of fixed counters in addition to general purpose counters. Fixed counters add an extra wrinkle in the form of an extra supported bitmask. Thus quoth the SDM:

  fixed-function performance counter 'i' is supported if ECX[i] || (EDX[4:0] > i)

Test that KVM handles a counter being available through either method.

Reviewed-by: Dapeng Mi
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 60 ++++++++++++++++++-
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 863418842ef8..b07294af71a3 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -290,7 +290,7 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
 		       msr, expected_val, val);

 static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters,
-				 uint8_t nr_counters)
+				 uint8_t nr_counters, uint32_t or_mask)
 {
 	uint8_t i;
@@ -301,7 +301,13 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
 		 */
 		const uint64_t test_val = 0xffff;
 		const uint32_t msr = base_msr + i;
-		const bool expect_success = i < nr_counters;
+
+		/*
+		 * Fixed counters are supported if the counter is less than the
+		 * number of enumerated contiguous counters *or* the counter is
+		 * explicitly enumerated in the supported counters mask.
+		 */
+		const bool expect_success = i < nr_counters || (or_mask & BIT(i));

 		/*
 		 * KVM drops writes to MSR_P6_PERFCTR[0|1] if the counters are
@@ -343,7 +349,7 @@ static void guest_test_gp_counters(void)
 	else
 		base_msr = MSR_IA32_PERFCTR0;

-	guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters);
+	guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters, 0);
 }

 static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
@@ -363,9 +369,50 @@ static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
 	kvm_vm_free(vm);
 }

+static void guest_test_fixed_counters(void)
+{
+	uint64_t supported_bitmask = 0;
+	uint8_t nr_fixed_counters = 0;
+
+	/* Fixed counters require Architectural vPMU Version 2+. */
+	if (guest_get_pmu_version() >= 2)
+		nr_fixed_counters = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+
+	/*
+	 * The supported bitmask for fixed counters was introduced in PMU
+	 * version 5.
+	 */
+	if (guest_get_pmu_version() >= 5)
+		supported_bitmask = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK);
+
+	guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS,
+			     nr_fixed_counters, supported_bitmask);
+}
+
+static void test_fixed_counters(uint8_t pmu_version, uint64_t perf_capabilities,
+				uint8_t nr_fixed_counters,
+				uint32_t supported_bitmask)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_fixed_counters,
+					 pmu_version, perf_capabilities);
+
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK,
+				supported_bitmask);
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_FIXED_COUNTERS,
+				nr_fixed_counters);
+
+	run_vcpu(vcpu);
+
+	kvm_vm_free(vm);
+}
+
 static void test_intel_counters(void)
 {
 	uint8_t nr_arch_events = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
+	uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
 	uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
 	uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
 	unsigned int i;
@@ -435,6 +482,13 @@ static void test_intel_counters(void)
 				v, perf_caps[i]);
 			for (j = 0; j <= nr_gp_counters; j++)
 				test_gp_counters(v, perf_caps[i], j);
+
+			pr_info("Testing fixed counters, PMU version %u, perf_caps = %lx\n",
+				v, perf_caps[i]);
+			for (j = 0; j <= nr_fixed_counters; j++) {
+				for (k = 0; k <= (BIT(nr_fixed_counters) - 1); k++)
+					test_fixed_counters(v, perf_caps[i], j, k);
+			}
 		}
 	}
 }
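[Editor's note: the CPUID predicate quoted in the changelog above, written out as a sketch in C (hypothetical helper; leaf 0xA register fields per the SDM):

	/* Sketch: CPUID.0xA fixed counter support, ECX[i] || (EDX[4:0] > i). */
	static bool fixed_counter_is_supported(uint32_t cpuid_0xa_ecx,
					       uint32_t cpuid_0xa_edx, uint8_t i)
	{
		uint8_t nr_contiguous = cpuid_0xa_edx & 0x1f;	/* EDX[4:0] */

		return (cpuid_0xa_ecx & BIT(i)) || nr_contiguous > i;
	}

This is exactly what the or_mask plumbing in the diff above implements on the test side.]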
From patchwork Tue Jan 9 23:02:40 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515522
Date: Tue, 9 Jan 2024 15:02:40 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-21-seanjc@google.com>
Subject: [PATCH v10 20/29] KVM: selftests: Add functional test for Intel's fixed PMU counters
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

From: Jinrong Liang

Extend the fixed counters test to verify that supported counters can actually be enabled in the control MSRs, that unsupported counters cannot, and that enabled counters actually count.
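[Editor's note: "actually count" hinges on the series' `loop .` idiom. LOOP decrements ECX and branches back to itself (the '.' label) until ECX reaches zero, so seeding ECX with NUM_BRANCHES retires exactly NUM_BRANCHES instructions, every one of them a branch, which is what lets the tests assert exact counts. A standalone sketch of the idiom:

	/* Sketch: retire exactly n branch instructions (n must be > 0). */
	static inline void retire_n_branches(int n)
	{
		/* loop: dec ECX, branch to '.' (itself) while ECX != 0. */
		__asm__ __volatile__("loop ." : "+c"(n));
	}
]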
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Jinrong Liang
[sean: fold into the rd/wr access test, massage changelog]
Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 31 ++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index b07294af71a3..f5dedd112471 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -332,7 +332,6 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
 		vector = wrmsr_safe(msr, 0);
 		GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
 	}
-	GUEST_DONE();
 }

 static void guest_test_gp_counters(void)
@@ -350,6 +349,7 @@ static void guest_test_gp_counters(void)
 		base_msr = MSR_IA32_PERFCTR0;

 	guest_rd_wr_counters(base_msr, MAX_NR_GP_COUNTERS, nr_gp_counters, 0);
+	GUEST_DONE();
 }

 static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
@@ -373,6 +373,7 @@ static void guest_test_fixed_counters(void)
 {
 	uint64_t supported_bitmask = 0;
 	uint8_t nr_fixed_counters = 0;
+	uint8_t i;

 	/* Fixed counters require Architectural vPMU Version 2+. */
 	if (guest_get_pmu_version() >= 2)
@@ -387,6 +388,34 @@ static void guest_test_fixed_counters(void)

 	guest_rd_wr_counters(MSR_CORE_PERF_FIXED_CTR0, MAX_NR_FIXED_COUNTERS,
 			     nr_fixed_counters, supported_bitmask);
+
+	for (i = 0; i < MAX_NR_FIXED_COUNTERS; i++) {
+		uint8_t vector;
+		uint64_t val;
+
+		if (i >= nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) {
+			vector = wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL,
+					    FIXED_PMC_CTRL(i, FIXED_PMC_KERNEL));
+			__GUEST_ASSERT(vector == GP_VECTOR,
+				       "Expected #GP for counter %u in FIXED_CTR_CTRL", i);
+
+			vector = wrmsr_safe(MSR_CORE_PERF_GLOBAL_CTRL,
+					    FIXED_PMC_GLOBAL_CTRL_ENABLE(i));
+			__GUEST_ASSERT(vector == GP_VECTOR,
+				       "Expected #GP for counter %u in PERF_GLOBAL_CTRL", i);
+			continue;
+		}
+
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, FIXED_PMC_CTRL(i, FIXED_PMC_KERNEL));
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, FIXED_PMC_GLOBAL_CTRL_ENABLE(i));
+		__asm__ __volatile__("loop ."
: "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + val = rdmsr(MSR_CORE_PERF_FIXED_CTR0 + i); + + GUEST_ASSERT_NE(val, 0); + } + GUEST_DONE(); } static void test_fixed_counters(uint8_t pmu_version, uint64_t perf_capabilities, From patchwork Tue Jan 9 23:02:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13515523 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3A495495EA for ; Tue, 9 Jan 2024 23:03:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Mw3Gr6h5" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-5ce63e72bc3so1229184a12.0 for ; Tue, 09 Jan 2024 15:03:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1704841414; x=1705446214; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=xaOHR9B/i9PUufPdoG9YEGW+GjjakEAYkB6Tb38apR0=; b=Mw3Gr6h5kL2KWYFwpjiThB470QC1nxaSMPnufjktRX/3bGMih2JjLFW1LWt41Ua2oT vaCxqGv4MfhfY7WDSRJvO8PC4RKCXX4lCFP4IQ+FWOrqxBU6BiGfhJQgCNgxZ1aLXPJ9 ZF1X5LNnBjWMjBVJiXuC+faZe2jkkha8OUrNyhNQfH8PKCTc557ct6t/tKYQkMH18Uqa Wi/D8fakcLFDnNuX+ez4LxebU771ov6gs0QOfoVljUUWlxSmKt4YhyfpbApiRAGZsnM0 6GA5e89FUcG58JpZivOr0yF4KwcWsg7HQPdq6WP4qX4c/557PBCEnplKaKBcWNDRcSCn NSqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704841414; x=1705446214; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=xaOHR9B/i9PUufPdoG9YEGW+GjjakEAYkB6Tb38apR0=; b=ohEhaS9zzmy8YABTMmNHRBCKz2OG3tMxwtTVJigYXSCduNYjw7RxuXXxirCbY3T0+Q zIOxY+/VDI12YF1Eo70CJjuCZaq1wC5i2eCAqH0/6Fn+P+WpsPP4fQ4s9nAeCV6VPuMJ 8JjZ3qj2798Er/JDt7nXHq7PGFvlgEF9CLoQUt2f3c7WQQilm5hd6TxZ3hXp/r/HD+NF lJYBMKrKO8oIPour2/eJcG1zqbC55bogFcF6E8P3DV9gfXLR2F7mV/rVjPXLq8k5VqHZ LlezH/I3KaJsKO1he6mniP2cyprhkl4b3heM7wPy7SrdZ/nRdTQ0PlMfAzuGvN9/JzTx a6DQ== X-Gm-Message-State: AOJu0YypWVyzXkSzdaymbUDx6jhq6iULwE7eZs3X1Uop26PMTn9zVChO QZvKt8DHlHWsNGBKXK+kYLIb+UsCsduq3rbbOA== X-Google-Smtp-Source: AGHT+IF9QwgswU6o0HVTB+TbwQMRDPCvyeVRdapwlQ5XkLv17/EocgeLhU1c4E+ngEN5RAJDs2jaVnzwvjY= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a65:6a4a:0:b0:5ce:98f:4492 with SMTP id o10-20020a656a4a000000b005ce098f4492mr202pgu.6.1704841414611; Tue, 09 Jan 2024 15:03:34 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 9 Jan 2024 15:02:41 -0800 In-Reply-To: <20240109230250.424295-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240109230250.424295-1-seanjc@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20240109230250.424295-22-seanjc@google.com> Subject: [PATCH v10 21/29] KVM: selftests: Expand PMU counters test to verify LLC events From: Sean Christopherson To: Sean 
From patchwork Tue Jan 9 23:02:41 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515523
Date: Tue, 9 Jan 2024 15:02:41 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-22-seanjc@google.com>
Subject: [PATCH v10 21/29] KVM: selftests: Expand PMU counters test to verify LLC events
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Expand the PMU counters test to verify that LLC references and misses have non-zero counts when the code being executed while the LLC event(s) is active is evicted via CLFLUSH{,OPT}.

Note, CLFLUSH{,OPT} requires a fence of some kind to ensure the cache lines are flushed before execution continues. Use MFENCE for simplicity (performance is not a concern).

Suggested-by: Jim Mattson
Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 59 +++++++++++++------
 1 file changed, 40 insertions(+), 19 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index f5dedd112471..4c7133ddcda8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -14,9 +14,9 @@
 /*
  * Number of "extra" instructions that will be counted, i.e. the number of
  * instructions that are needed to set up the loop and then disable the
- * counter.  2 MOV, 2 XOR, 1 WRMSR.
+ * counter.  1 CLFLUSH/CLFLUSHOPT/NOP, 1 MFENCE, 2 MOV, 2 XOR, 1 WRMSR.
  */
-#define NUM_EXTRA_INSNS 5
+#define NUM_EXTRA_INSNS 7
 #define NUM_INSNS_RETIRED (NUM_BRANCHES + NUM_EXTRA_INSNS)

 static uint8_t kvm_pmu_version;
@@ -107,6 +107,12 @@ static void guest_assert_event_count(uint8_t idx,
 	case INTEL_ARCH_BRANCHES_RETIRED_INDEX:
 		GUEST_ASSERT_EQ(count, NUM_BRANCHES);
 		break;
+	case INTEL_ARCH_LLC_REFERENCES_INDEX:
+	case INTEL_ARCH_LLC_MISSES_INDEX:
+		if (!this_cpu_has(X86_FEATURE_CLFLUSHOPT) &&
+		    !this_cpu_has(X86_FEATURE_CLFLUSH))
+			break;
+		fallthrough;
 	case INTEL_ARCH_CPU_CYCLES_INDEX:
 	case INTEL_ARCH_REFERENCE_CYCLES_INDEX:
 		GUEST_ASSERT_NE(count, 0);
@@ -123,29 +129,44 @@ static void guest_assert_event_count(uint8_t idx,
 	GUEST_ASSERT_EQ(_rdpmc(pmc), 0xdead);
 }

+/*
+ * Enable and disable the PMC in a monolithic asm blob to ensure that the
+ * compiler can't insert _any_ code into the measured sequence.  Note, ECX
+ * doesn't need to be clobbered as the input value, @_msr, is restored
+ * before the end of the sequence.
+ *
+ * If CLFLUSH{,OPT} is supported, flush the cacheline containing (at least) the
+ * start of the loop to force LLC references and misses, i.e. to allow testing
+ * that those events actually count.
+ */
+#define GUEST_MEASURE_EVENT(_msr, _value, clflush)				\
+do {										\
+	__asm__ __volatile__("wrmsr\n\t"					\
+			     clflush "\n\t"					\
+			     "mfence\n\t"					\
+			     "1: mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t"	\
+			     "loop .\n\t"					\
+			     "mov %%edi, %%ecx\n\t"				\
+			     "xor %%eax, %%eax\n\t"				\
+			     "xor %%edx, %%edx\n\t"				\
+			     "wrmsr\n\t"					\
+			     :: "a"((uint32_t)_value), "d"(_value >> 32),	\
+				"c"(_msr), "D"(_msr)				\
+	);									\
+} while (0)
+
 static void __guest_test_arch_event(uint8_t idx, struct kvm_x86_pmu_feature event,
 				    uint32_t pmc, uint32_t pmc_msr,
 				    uint32_t ctrl_msr, uint64_t ctrl_msr_value)
 {
 	wrmsr(pmc_msr, 0);

-	/*
-	 * Enable and disable the PMC in a monolithic asm blob to ensure that
-	 * the compiler can't insert _any_ code into the measured sequence.
-	 * Note, ECX doesn't need to be clobbered as the input value, @ctrl_msr,
-	 * is restored before the end of the sequence.
-	 */
-	__asm__ __volatile__("wrmsr\n\t"
-			     "mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t"
-			     "loop .\n\t"
-			     "mov %%edi, %%ecx\n\t"
-			     "xor %%eax, %%eax\n\t"
-			     "xor %%edx, %%edx\n\t"
-			     "wrmsr\n\t"
-			     :: "a"((uint32_t)ctrl_msr_value),
-				"d"(ctrl_msr_value >> 32),
-				"c"(ctrl_msr), "D"(ctrl_msr)
-	);
+	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))
+		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "clflushopt 1f");
+	else if (this_cpu_has(X86_FEATURE_CLFLUSH))
+		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "clflush 1f");
+	else
+		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "nop");

 	guest_assert_event_count(idx, event, pmc, pmc_msr);
 }
From patchwork Tue Jan 9 23:02:42 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515524
Date: Tue, 9 Jan 2024 15:02:42 -0800
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
Message-ID: <20240109230250.424295-23-seanjc@google.com>
Subject: [PATCH v10 22/29] KVM: selftests: Add a helper to query if the PMU module param is enabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi, Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Add a helper to probe KVM's "enable_pmu" param; open coding strings in multiple places is just asking for false negatives and/or runtime errors due to typos.

Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86_64/processor.h     | 5 +++++
 tools/testing/selftests/kvm/x86_64/pmu_counters_test.c     | 2 +-
 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 2 +-
 tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c     | 2 +-
 4 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 92d4f8ecc730..ee082ae58f40 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1217,6 +1217,11 @@ static inline uint8_t xsetbv_safe(uint32_t index, uint64_t value)

 bool kvm_is_tdp_enabled(void);

+static inline bool kvm_is_pmu_enabled(void)
+{
+	return get_kvm_param_bool("enable_pmu");
+}
+
 uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
 				    int *level);
 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 4c7133ddcda8..9e9dc4084c0d 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -545,7 +545,7 @@ static void test_intel_counters(void)
 int main(int argc, char *argv[])
 {
-	TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+	TEST_REQUIRE(kvm_is_pmu_enabled());

 	TEST_REQUIRE(host_cpu_is_intel);
 	TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 7ec9fbed92e0..fa407e2ccb2f 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -867,7 +867,7 @@ int main(int argc, char *argv[])
 	struct kvm_vcpu *vcpu, *vcpu2 = NULL;
 	struct kvm_vm *vm;

-	TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+	TEST_REQUIRE(kvm_is_pmu_enabled());
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER));
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS));
diff --git a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
index 2a8d4ac2f020..8ded194c5a6d 100644
--- a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
@@ -237,7 +237,7 @@ int main(int argc, char *argv[])
 {
 	union perf_capabilities host_cap;

-	TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
+	TEST_REQUIRE(kvm_is_pmu_enabled());
 	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
 	TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
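[Editor's note: get_kvm_param_bool() and the integer variant added in the next patch both read /sys/module/<module>/parameters/<param>, the same mechanism the selftests' get_module_param() uses. A minimal standalone sketch of that mechanism, outside the selftests framework (hypothetical helper, error handling reduced to a sentinel):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/*
	 * Sketch: peek at a KVM module param, e.g. param = "enable_pmu".
	 * Returns the first character of the value ('Y'/'N' for bools, a
	 * digit or '-' for integers); the real helpers parse and assert.
	 */
	static char peek_kvm_param(const char *param)
	{
		char path[128], val = '\0';
		int fd;

		snprintf(path, sizeof(path), "/sys/module/kvm/parameters/%s", param);
		fd = open(path, O_RDONLY);
		if (fd >= 0) {
			read(fd, &val, 1);
			close(fd);
		}
		return val;
	}
]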
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13515525 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 98EF84B5B9 for ; Tue, 9 Jan 2024 23:03:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="0EtyW66g" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-5ea6aa02fa4so58960587b3.0 for ; Tue, 09 Jan 2024 15:03:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1704841419; x=1705446219; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=Xd2xi7qOc7aGWuph6eBSD2zpQtVKqQ03Fccj63JD6OI=; b=0EtyW66g7h9wTbLDVFakjezJ1qsmhifwRve6JwP8jY32t43TSefkcmuhs78qygtSn8 30KH71a5bGShkRKzJRFWJSQkRBcL9Zx9Rr48vh2GUlIxq9dKg2jlIY2qRZHsg1ODyNec nvSs3ewHz8T3eFuC3AcBeJgaqVYgFnuNtqcw4ApeWtcOVVS4D/vbu73k3h20amPKLj/o dn55nzyBSFtNb+/iXRibbva0nrEbingm/q1ibXsfvCwL0d6AbyxVypSi4VYt83NXsPVF rghiyEgYXz4vJaV8l6s9ttdjB5DBC0JEP8syCqd6+l83eltUEi42c9hzkYcJTrrYnlWh 48Mw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704841419; x=1705446219; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=Xd2xi7qOc7aGWuph6eBSD2zpQtVKqQ03Fccj63JD6OI=; b=Jbo3z8maLMtWZQF1WVsdg89Zr+BiPLXIrLTBb3M4V+mElNQGq7yoCg/E69fgg+oclA gwSn9tsjFkztArBcEDNyYAbcCjluwumZbJx3gOM7wQtzatIFEuyg7R7pSkFk9Eegn2Vx kTdkie2bFdtn+ERwV0D7CcWZ2FXepwiPu57tC2UHdHe8Q81t1hJVrwSCWTlakWbLEX9M krJ6dpnYKwqkmE0l8X1vx69eYqncZpwFcH/NeM5et8HAeAW+omhrmczAYUa3SDb21Co4 plJfoAXPQK5mPGA0fWoCd3gmjqQxLNt4IcKV3dCotZVveITBf7twrVx8JzXmplc6REeA H2Rw== X-Gm-Message-State: AOJu0Yz1ybfXX+88ENYFQTrpKMgj+lsfOJZiNADS3EuTCEKqmYi7GMl4 EwUj8G5AJDokOOwhCGMBFqBkxj9I3F8j2+wJ5g== X-Google-Smtp-Source: AGHT+IFb7hq34ZiLwcnqvvhqVazGOzeft5UTdfQzhTKUjv6qFO4HcNCyzgsQQTKpk+B3S5Qp43JxcGyWAlg= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:8251:0:b0:dbd:b056:b468 with SMTP id d17-20020a258251000000b00dbdb056b468mr32904ybn.7.1704841418864; Tue, 09 Jan 2024 15:03:38 -0800 (PST) Reply-To: Sean Christopherson Date: Tue, 9 Jan 2024 15:02:43 -0800 In-Reply-To: <20240109230250.424295-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240109230250.424295-1-seanjc@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20240109230250.424295-24-seanjc@google.com> Subject: [PATCH v10 23/29] KVM: selftests: Add helpers to read integer module params From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang , Dapeng Mi , Jim Mattson , Jinrong Liang , Aaron Lewis , Like Xu Add helpers to read integer module params, which is painfully non-trivial because the pain of dealing with strings in C is exacerbated by the kernel 
Don't bother differentiating between int, uint, short, etc. They all fit
in an int, and KVM (thankfully) doesn't have any integer params larger
than an int.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/kvm_util_base.h  |  4 ++
 tools/testing/selftests/kvm/lib/kvm_util.c | 62 +++++++++++++++++--
 2 files changed, 60 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 9e5afc472c14..070f250036fc 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -259,6 +259,10 @@ bool get_kvm_param_bool(const char *param);
 bool get_kvm_intel_param_bool(const char *param);
 bool get_kvm_amd_param_bool(const char *param);
 
+int get_kvm_param_integer(const char *param);
+int get_kvm_intel_param_integer(const char *param);
+int get_kvm_amd_param_integer(const char *param);
+
 unsigned int kvm_check_cap(long cap);
 
 static inline bool kvm_has_cap(long cap)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e066d584c656..9bafe44cb978 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -51,13 +51,13 @@ int open_kvm_dev_path_or_exit(void)
 	return _open_kvm_dev_path_or_exit(O_RDONLY);
 }
 
-static bool get_module_param_bool(const char *module_name, const char *param)
+static ssize_t get_module_param(const char *module_name, const char *param,
+				void *buffer, size_t buffer_size)
 {
 	const int path_size = 128;
 	char path[path_size];
-	char value;
-	ssize_t r;
-	int fd;
+	ssize_t bytes_read;
+	int fd, r;
 
 	r = snprintf(path, path_size, "/sys/module/%s/parameters/%s",
		     module_name, param);
@@ -66,11 +66,46 @@ static bool get_module_param_bool(const char *module_name, const char *param)
 
 	fd = open_path_or_exit(path, O_RDONLY);
 
-	r = read(fd, &value, 1);
-	TEST_ASSERT(r == 1, "read(%s) failed", path);
+	bytes_read = read(fd, buffer, buffer_size);
+	TEST_ASSERT(bytes_read > 0, "read(%s) returned %ld, wanted %ld bytes",
		    path, bytes_read, buffer_size);
 
 	r = close(fd);
 	TEST_ASSERT(!r, "close(%s) failed", path);
+	return bytes_read;
+}
+
+static int get_module_param_integer(const char *module_name, const char *param)
+{
+	/*
+	 * 16 bytes to hold a 64-bit value (1 byte per char), 1 byte for the
+	 * NUL char, and 1 byte because the kernel sucks and inserts a newline
+	 * at the end.
+	 */
+	char value[16 + 1 + 1];
+	ssize_t r;
+
+	memset(value, '\0', sizeof(value));
+
+	r = get_module_param(module_name, param, value, sizeof(value));
+	TEST_ASSERT(value[r - 1] == '\n',
		    "Expected trailing newline, got char '%c'", value[r - 1]);
+
+	/*
+	 * Squash the newline, otherwise atoi_paranoid() will complain about
+	 * trailing non-NUL characters in the string.
+	 */
+	value[r - 1] = '\0';
+	return atoi_paranoid(value);
+}
+
+static bool get_module_param_bool(const char *module_name, const char *param)
+{
+	char value;
+	ssize_t r;
+
+	r = get_module_param(module_name, param, &value, sizeof(value));
+	TEST_ASSERT_EQ(r, 1);
 
 	if (value == 'Y')
 		return true;
@@ -95,6 +130,21 @@ bool get_kvm_amd_param_bool(const char *param)
 	return get_module_param_bool("kvm_amd", param);
 }
 
+int get_kvm_param_integer(const char *param)
+{
+	return get_module_param_integer("kvm", param);
+}
+
+int get_kvm_intel_param_integer(const char *param)
+{
+	return get_module_param_integer("kvm_intel", param);
+}
+
+int get_kvm_amd_param_integer(const char *param)
+{
+	return get_module_param_integer("kvm_amd", param);
+}
+
 /*
  * Capability
  *
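A minimal sketch of the trailing-newline handling described above, using
strtol() as a stand-in for the selftests-internal atoi_paranoid(); the helper
name is hypothetical:

#include <assert.h>
#include <stdlib.h>
#include <sys/types.h>

static int parse_sysfs_int(char *buf, ssize_t len)
{
	char *end;
	long val;

	assert(len > 0 && buf[len - 1] == '\n');	/* the kernel appends '\n' */
	buf[len - 1] = '\0';				/* squash the newline */

	val = strtol(buf, &end, 10);
	assert(*end == '\0');				/* reject trailing garbage */
	return (int)val;
}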
From patchwork Tue Jan 9 23:02:44 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515526
Message-ID: <20240109230250.424295-25-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:44 -0800
Subject: [PATCH v10 24/29] KVM: selftests: Query module param to detect FEP
 in MSR filtering test
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Add a helper to detect KVM support for forced emulation by querying the
module param, and use the helper to detect support for the MSR filtering
test instead of throwing a noodle/NOP at KVM to see if it sticks.

Cc: Aaron Lewis
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h |  5 ++++
 .../kvm/x86_64/userspace_msr_exit_test.c     | 27 +++++++------------
 2 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index ee082ae58f40..d211cea188be 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1222,6 +1222,11 @@ static inline bool kvm_is_pmu_enabled(void)
 	return get_kvm_param_bool("enable_pmu");
 }
 
+static inline bool kvm_is_forced_emulation_enabled(void)
+{
+	return !!get_kvm_param_integer("force_emulation_prefix");
+}
+
 uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
				    int *level);
 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);

diff --git a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
index 3533dc2fbfee..9e12dbc47a72 100644
--- a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
@@ -14,8 +14,7 @@
 /* Forced emulation prefix, used to invoke the emulator unconditionally. */
 #define KVM_FEP "ud2; .byte 'k', 'v', 'm';"
-#define KVM_FEP_LENGTH 5
-static int fep_available = 1;
+static bool fep_available;
 
 #define MSR_NON_EXISTENT 0x474f4f00
@@ -260,13 +259,6 @@ static void guest_code_filter_allow(void)
 	GUEST_ASSERT(data == 2);
 	GUEST_ASSERT(guest_exception_count == 0);
 
-	/*
-	 * Test to see if the instruction emulator is available (ie: the module
-	 * parameter 'kvm.force_emulation_prefix=1' is set). This instruction
-	 * will #UD if it isn't available.
-	 */
-	__asm__ __volatile__(KVM_FEP "nop");
-
 	if (fep_available) {
 		/* Let userspace know we aren't done. */
 		GUEST_SYNC(0);
@@ -388,12 +380,6 @@ static void guest_fep_gp_handler(struct ex_regs *regs)
				 &em_wrmsr_start, &em_wrmsr_end);
 }
 
-static void guest_ud_handler(struct ex_regs *regs)
-{
-	fep_available = 0;
-	regs->rip += KVM_FEP_LENGTH;
-}
-
 static void check_for_guest_assert(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
@@ -531,9 +517,11 @@ static void test_msr_filter_allow(void)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
+	uint64_t cmd;
 	int rc;
 
 	vm = vm_create_with_one_vcpu(&vcpu, guest_code_filter_allow);
+	sync_global_to_guest(vm, fep_available);
 
 	rc = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
 	TEST_ASSERT(rc, "KVM_CAP_X86_USER_SPACE_MSR is available");
@@ -561,11 +549,11 @@ static void test_msr_filter_allow(void)
 	run_guest_then_process_wrmsr(vcpu, MSR_NON_EXISTENT);
 	run_guest_then_process_rdmsr(vcpu, MSR_NON_EXISTENT);
 
-	vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
 	vcpu_run(vcpu);
-	vm_install_exception_handler(vm, UD_VECTOR, NULL);
+	cmd = process_ucall(vcpu);
 
-	if (process_ucall(vcpu) != UCALL_DONE) {
+	if (fep_available) {
+		TEST_ASSERT_EQ(cmd, UCALL_SYNC);
 		vm_install_exception_handler(vm, GP_VECTOR, guest_fep_gp_handler);
 
 		/* Process emulated rdmsr and wrmsr instructions. */
@@ -583,6 +571,7 @@ static void test_msr_filter_allow(void)
 		/* Confirm the guest completed without issues. */
 		run_guest_then_process_ucall_done(vcpu);
 	} else {
+		TEST_ASSERT_EQ(cmd, UCALL_DONE);
 		printf("To run the instruction emulated tests set the module parameter 'kvm.force_emulation_prefix=1'\n");
 	}
@@ -804,6 +793,8 @@ static void test_user_exit_msr_flags(void)
 int main(int argc, char *argv[])
 {
+	fep_available = kvm_is_forced_emulation_enabled();
+
 	test_msr_filter_allow();
 
 	test_msr_filter_deny();
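For context, the guest-side half of forced emulation looks roughly like the
sketch below: prefixing an instruction with the "ud2; 'k', 'v', 'm'" magic
makes KVM emulate it instead of letting hardware execute it. Guest code only;
it assumes kvm.force_emulation_prefix=1 was verified on the host first, and
the helper name is hypothetical:

#define KVM_FEP "ud2; .byte 'k', 'v', 'm';"

/* Executes a NOP through KVM's instruction emulator; #UDs in the guest if
 * the module param is not set.
 */
static inline void force_emulated_nop(void)
{
	__asm__ __volatile__(KVM_FEP "nop");
}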
From patchwork Tue Jan 9 23:02:45 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515527
Message-ID: <20240109230250.424295-26-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:45 -0800
Subject: [PATCH v10 25/29] KVM: selftests: Move KVM_FEP macro into common
 library header
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Move the KVM_FEP definition, a.k.a. the KVM force emulation prefix, into
processor.h so that it can be used for other tests besides the MSR filter
test.

Signed-off-by: Sean Christopherson
---
 tools/testing/selftests/kvm/include/x86_64/processor.h       | 3 +++
 tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index d211cea188be..6be365ac2a85 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -23,6 +23,9 @@
 extern bool host_cpu_is_intel;
 extern bool host_cpu_is_amd;
 
+/* Forced emulation prefix, used to invoke the emulator unconditionally. */
+#define KVM_FEP "ud2; .byte 'k', 'v', 'm';"
+
 #define NMI_VECTOR 0x02
 
 #define X86_EFLAGS_FIXED (1u << 1)

diff --git a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
index 9e12dbc47a72..ab3a8c4f0b86 100644
--- a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
@@ -12,8 +12,6 @@
 #include "kvm_util.h"
 #include "vmx.h"
 
-/* Forced emulation prefix, used to invoke the emulator unconditionally. */
-#define KVM_FEP "ud2; .byte 'k', 'v', 'm';"
 
 static bool fep_available;
 
 #define MSR_NON_EXISTENT 0x474f4f00
From patchwork Tue Jan 9 23:02:46 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515528
Message-ID: <20240109230250.424295-27-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:46 -0800
Subject: [PATCH v10 26/29] KVM: selftests: Test PMC virtualization with
 forced emulation
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Extend the PMC counters test to use forced emulation to verify that KVM
emulates counter events for instructions retired and branches retired.
Force emulation for only a subset of the measured code to test that KVM
does the right thing when mixing perf events with emulated events.

Reviewed-by: Dapeng Mi
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 44 +++++++++++++------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 9e9dc4084c0d..cb808ac827ba 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -21,6 +21,7 @@
 
 static uint8_t kvm_pmu_version;
 static bool kvm_has_perf_caps;
+static bool is_forced_emulation_enabled;
 
 static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
						  void *guest_code,
@@ -34,6 +35,7 @@ static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
 	vcpu_init_descriptor_tables(*vcpu);
 
 	sync_global_to_guest(vm, kvm_pmu_version);
+	sync_global_to_guest(vm, is_forced_emulation_enabled);
 
 	/*
	 * Set PERF_CAPABILITIES before PMU version as KVM disallows enabling
@@ -138,37 +140,50 @@ static void guest_assert_event_count(uint8_t idx,
 * If CLFLUSH{,OPT} is supported, flush the cacheline containing (at least) the
 * start of the loop to force LLC references and misses, i.e. to allow testing
 * that those events actually count.
+ *
+ * If forced emulation is enabled (and specified), force emulation on a subset
+ * of the measured code to verify that KVM correctly emulates instructions and
+ * branches retired events in conjunction with hardware also counting said
+ * events.
 */
-#define GUEST_MEASURE_EVENT(_msr, _value, clflush)				\
+#define GUEST_MEASURE_EVENT(_msr, _value, clflush, FEP)				\
do {										\
	__asm__ __volatile__("wrmsr\n\t"					\
			     clflush "\n\t"					\
			     "mfence\n\t"					\
			     "1: mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t"	\
-			     "loop .\n\t"					\
-			     "mov %%edi, %%ecx\n\t"				\
-			     "xor %%eax, %%eax\n\t"				\
-			     "xor %%edx, %%edx\n\t"				\
+			     FEP "loop .\n\t"					\
+			     FEP "mov %%edi, %%ecx\n\t"				\
+			     FEP "xor %%eax, %%eax\n\t"				\
+			     FEP "xor %%edx, %%edx\n\t"				\
			     "wrmsr\n\t"					\
			     :: "a"((uint32_t)_value), "d"(_value >> 32),	\
				"c"(_msr), "D"(_msr)				\
	);									\
} while (0)

+#define GUEST_TEST_EVENT(_idx, _event, _pmc, _pmc_msr, _ctrl_msr, _value, FEP)	\
+do {										\
+	wrmsr(pmc_msr, 0);							\
+										\
+	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))				\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt 1f", FEP);	\
+	else if (this_cpu_has(X86_FEATURE_CLFLUSH))				\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflush 1f", FEP);	\
+	else									\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "nop", FEP);		\
+										\
+	guest_assert_event_count(_idx, _event, _pmc, _pmc_msr);			\
+} while (0)
+
 static void __guest_test_arch_event(uint8_t idx, struct kvm_x86_pmu_feature event,
				    uint32_t pmc, uint32_t pmc_msr,
				    uint32_t ctrl_msr, uint64_t ctrl_msr_value)
 {
-	wrmsr(pmc_msr, 0);
+	GUEST_TEST_EVENT(idx, event, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
 
-	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))
-		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "clflushopt 1f");
-	else if (this_cpu_has(X86_FEATURE_CLFLUSH))
-		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "clflush 1f");
-	else
-		GUEST_MEASURE_EVENT(ctrl_msr, ctrl_msr_value, "nop");
-
-	guest_assert_event_count(idx, event, pmc, pmc_msr);
+	if (is_forced_emulation_enabled)
+		GUEST_TEST_EVENT(idx, event, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, KVM_FEP);
 }
 
 #define X86_PMU_FEATURE_NULL						\
@@ -553,6 +568,7 @@ int main(int argc, char *argv[])
 
 	kvm_pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
 	kvm_has_perf_caps = kvm_cpu_has(X86_FEATURE_PDCM);
+	is_forced_emulation_enabled = kvm_is_forced_emulation_enabled();
 
 	test_intel_counters();
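To make the macro plumbing concrete, a hand-written illustration of the two
GUEST_TEST_EVENT invocations (guest code only, not compiler output; ECX is
primed so the "loop" terminates):

static void fep_expansion_example(void)
{
	/* FEP == "": the measured "loop" executes natively. */
	__asm__ __volatile__("mov $10, %%ecx\n\t"
			     "loop .\n\t" ::: "ecx");

	/*
	 * FEP == KVM_FEP: string pasting prepends the magic prefix, so KVM's
	 * emulator handles the prefixed "loop"; the branch target '.' is the
	 * loop instruction itself, past the prefix bytes.
	 */
	__asm__ __volatile__("mov $10, %%ecx\n\t"
			     KVM_FEP "loop .\n\t" ::: "ecx");
}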
From patchwork Tue Jan 9 23:02:47 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515529
Message-ID: <20240109230250.424295-28-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:47 -0800
Subject: [PATCH v10 27/29] KVM: selftests: Add a forced emulation variation
 of KVM_ASM_SAFE()
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Add KVM_ASM_SAFE_FEP() to allow forcing emulation on an instruction that
might fault. Note, KVM skips RIP past the FEP prefix before injecting an
exception, i.e. the fixup needs to be on the instruction itself. Do not
check for FEP support; that is firmly the responsibility of whatever code
wants to use KVM_ASM_SAFE_FEP().

Sadly, chaining variadic arguments that contain commas doesn't work, thus
the unfortunate amount of copy+paste.
Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h | 30 +++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 6be365ac2a85..fe891424ff55 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1154,16 +1154,19 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 * r9  = exception vector (non-zero)
 * r10 = error code
 */
-#define KVM_ASM_SAFE(insn)					\
+#define __KVM_ASM_SAFE(insn, fep)				\
	"mov $" __stringify(KVM_EXCEPTION_MAGIC) ", %%r9\n\t"	\
	"lea 1f(%%rip), %%r10\n\t"				\
	"lea 2f(%%rip), %%r11\n\t"				\
-	"1: " insn "\n\t"					\
+	fep "1: " insn "\n\t"					\
	"xor %%r9, %%r9\n\t"					\
	"2:\n\t"						\
	"mov %%r9b, %[vector]\n\t"				\
	"mov %%r10, %[error_code]\n\t"
 
+#define KVM_ASM_SAFE(insn) __KVM_ASM_SAFE(insn, "")
+#define KVM_ASM_SAFE_FEP(insn) __KVM_ASM_SAFE(insn, KVM_FEP)
+
 #define KVM_ASM_SAFE_OUTPUTS(v, ec) [vector] "=qm"(v), [error_code] "=rm"(ec)
 #define KVM_ASM_SAFE_CLOBBERS "r9", "r10", "r11"
 
@@ -1190,6 +1193,29 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
	vector;							\
})
 
+#define kvm_asm_safe_fep(insn, inputs...)			\
+({								\
+	uint64_t ign_error_code;				\
+	uint8_t vector;						\
+								\
+	asm volatile(KVM_ASM_SAFE_FEP(insn)			\
+		     : KVM_ASM_SAFE_OUTPUTS(vector, ign_error_code) \
+		     : inputs					\
+		     : KVM_ASM_SAFE_CLOBBERS);			\
+	vector;							\
+})
+
+#define kvm_asm_safe_ec_fep(insn, error_code, inputs...)	\
+({								\
+	uint8_t vector;						\
+								\
+	asm volatile(KVM_ASM_SAFE_FEP(insn)			\
+		     : KVM_ASM_SAFE_OUTPUTS(vector, error_code)	\
+		     : inputs					\
+		     : KVM_ASM_SAFE_CLOBBERS);			\
+	vector;							\
+})
+
 static inline uint8_t rdmsr_safe(uint32_t msr, uint64_t *val)
 {
	uint64_t error_code;
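A hedged usage sketch of the new macro: force-emulate a WRMSR that should
take a #GP. The MSR index is arbitrary (assumed to be nonexistent);
kvm_asm_safe_fep(), GUEST_ASSERT(), and GP_VECTOR come from the selftests
headers:

static void guest_wrmsr_emulated_gp(void)
{
	/* WRMSR takes the value in EDX:EAX and the MSR index in ECX. */
	uint8_t vector = kvm_asm_safe_fep("wrmsr",
					  "a"(0u), "d"(0u), "c"(0xdeadbeefu));

	GUEST_ASSERT(vector == GP_VECTOR);
}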
From patchwork Tue Jan 9 23:02:48 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515530
Message-ID: <20240109230250.424295-29-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:48 -0800
Subject: [PATCH v10 28/29] KVM: selftests: Add helpers for safe and
 safe+forced RDMSR, RDPMC, and XGETBV
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Add helpers for safe and safe-with-forced-emulation versions of RDMSR,
RDPMC, and XGETBV. Use macro shenanigans to eliminate the rather large
amount of boilerplate needed to get values in and out of registers.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/include/x86_64/processor.h | 40 +++++++++++++------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index fe891424ff55..abac816f6594 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1216,21 +1216,35 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
	vector;							\
})
 
-static inline uint8_t rdmsr_safe(uint32_t msr, uint64_t *val)
-{
-	uint64_t error_code;
-	uint8_t vector;
-	uint32_t a, d;
-
-	asm volatile(KVM_ASM_SAFE("rdmsr")
-		     : "=a"(a), "=d"(d), KVM_ASM_SAFE_OUTPUTS(vector, error_code)
-		     : "c"(msr)
-		     : KVM_ASM_SAFE_CLOBBERS);
-
-	*val = (uint64_t)a | ((uint64_t)d << 32);
-	return vector;
+#define BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP)			\
+static inline uint8_t insn##_safe##_fep(uint32_t idx, uint64_t *val)	\
+{									\
+	uint64_t error_code;						\
+	uint8_t vector;							\
+	uint32_t a, d;							\
+									\
+	asm volatile(KVM_ASM_SAFE##_FEP(#insn)				\
+		     : "=a"(a), "=d"(d),				\
+		       KVM_ASM_SAFE_OUTPUTS(vector, error_code)		\
+		     : "c"(idx)						\
+		     : KVM_ASM_SAFE_CLOBBERS);				\
+									\
+	*val = (uint64_t)a | ((uint64_t)d << 32);			\
+	return vector;							\
 }
 
+/*
+ * Generate {insn}_safe() and {insn}_safe_fep() helpers for instructions that
+ * use ECX as an input index, and EDX:EAX as a 64-bit output.
+ */
+#define BUILD_READ_U64_SAFE_HELPERS(insn)		\
+	BUILD_READ_U64_SAFE_HELPER(insn, , )		\
+	BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP)
+
+BUILD_READ_U64_SAFE_HELPERS(rdmsr)
+BUILD_READ_U64_SAFE_HELPERS(rdpmc)
+BUILD_READ_U64_SAFE_HELPERS(xgetbv)
+
 static inline uint8_t wrmsr_safe(uint32_t msr, uint64_t val)
 {
	return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
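For reference, hand-expanding one of the generators yields the following pair
of helpers (declarations shown for illustration only; the real definitions
are produced by BUILD_READ_U64_SAFE_HELPER above):

/* From BUILD_READ_U64_SAFE_HELPERS(rdpmc): */
static inline uint8_t rdpmc_safe(uint32_t idx, uint64_t *val);     /* wraps KVM_ASM_SAFE("rdpmc") */
static inline uint8_t rdpmc_safe_fep(uint32_t idx, uint64_t *val); /* wraps KVM_ASM_SAFE_FEP("rdpmc") */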
From patchwork Tue Jan 9 23:02:49 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13515531
Message-ID: <20240109230250.424295-30-seanjc@google.com>
In-Reply-To: <20240109230250.424295-1-seanjc@google.com>
References: <20240109230250.424295-1-seanjc@google.com>
Date: Tue, 9 Jan 2024 15:02:49 -0800
Subject: [PATCH v10 29/29] KVM: selftests: Extend PMU counters test to
 validate RDPMC after WRMSR
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Kan Liang, Dapeng Mi,
 Jim Mattson, Jinrong Liang, Aaron Lewis, Like Xu

Extend the read/write PMU counters subtest to verify that RDPMC also reads
back the written value. Opportunistically verify that attempting to use the
"fast" mode of RDPMC fails, as the "fast" flag is only supported by
non-architectural PMUs, which KVM doesn't virtualize.

Signed-off-by: Sean Christopherson
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 41 +++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index cb808ac827ba..ae5f6042f1e8 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -325,9 +325,30 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector,	\
	       "Expected " #insn "(0x%x) to yield 0x%lx, got 0x%lx",	\
	       msr, expected_val, val);
 
+static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
+			     uint64_t expected_val)
+{
+	uint8_t vector;
+	uint64_t val;
+
+	vector = rdpmc_safe(rdpmc_idx, &val);
+	GUEST_ASSERT_PMC_MSR_ACCESS(RDPMC, rdpmc_idx, !expect_success, vector);
+	if (expect_success)
+		GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
+
+	if (!is_forced_emulation_enabled)
+		return;
+
+	vector = rdpmc_safe_fep(rdpmc_idx, &val);
+	GUEST_ASSERT_PMC_MSR_ACCESS(RDPMC, rdpmc_idx, !expect_success, vector);
+	if (expect_success)
+		GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
+}
+
 static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters,
				 uint8_t nr_counters, uint32_t or_mask)
 {
+	const bool pmu_has_fast_mode = !guest_get_pmu_version();
	uint8_t i;
 
	for (i = 0; i < nr_possible_counters; i++) {
@@ -352,6 +373,7 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
		const uint64_t expected_val = expect_success ? test_val : 0;
		const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
				       msr != MSR_P6_PERFCTR1;
+		uint32_t rdpmc_idx;
		uint8_t vector;
		uint64_t val;
@@ -365,6 +387,25 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
		if (!expect_gp)
			GUEST_ASSERT_PMC_VALUE(RDMSR, msr, val, expected_val);
 
+		/*
+		 * Redo the read tests with RDPMC, which has different indexing
+		 * semantics and additional capabilities.
+		 */
+		rdpmc_idx = i;
+		if (base_msr == MSR_CORE_PERF_FIXED_CTR0)
+			rdpmc_idx |= INTEL_RDPMC_FIXED;
+
+		guest_test_rdpmc(rdpmc_idx, expect_success, expected_val);
+
+		/*
+		 * KVM doesn't support non-architectural PMUs, i.e. it should
+		 * be impossible to have fast mode RDPMC. Verify that attempting
+		 * to use fast RDPMC always #GPs.
+		 */
+		GUEST_ASSERT(!expect_success || !pmu_has_fast_mode);
+		rdpmc_idx |= INTEL_RDPMC_FAST;
+		guest_test_rdpmc(rdpmc_idx, false, -1ull);
+
		vector = wrmsr_safe(msr, 0);
		GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
	}
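For readers cross-referencing the SDM: RDPMC encodes the counter type in the
upper bits of ECX, and the flags used above are assumed to mirror that layout
(they are defined elsewhere in this series, not in this patch):

/* Assumed values, per the SDM's description of RDPMC's ECX layout. */
#define INTEL_RDPMC_FIXED	BIT_ULL(30)	/* select fixed-function counters */
#define INTEL_RDPMC_FAST	BIT_ULL(31)	/* "fast" read, low 32 bits only */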