From patchwork Fri Nov 10 02:28:48 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452048
Message-ID: <20231110022857.1273836-2-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:48 -0800
Subject: [PATCH 01/10] KVM: x86/pmu: Zero out PMU metadata on AMD if PMU is
 disabled
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Move the purging of common PMU metadata from intel_pmu_refresh() to
kvm_pmu_refresh(), and invoke the vendor refresh() hook if and only if
the VM is supposed to have a vPMU.

KVM already denies access to the PMU based on kvm->arch.enable_pmu, as
get_gp_pmc_amd() returns NULL for all PMCs in that case, i.e. KVM
already violates AMD's architecture by not virtualizing a PMU (kernels
have long since learned to not panic when the PMU is unavailable).  But
configuring the PMU as if it were enabled causes unwanted side effects,
e.g. calls to kvm_pmu_trigger_event() waste an absurd number of cycles
due to the all_valid_pmc_idx bitmap being non-zero.

Fixes: b1d66dad65dc ("KVM: x86/svm: Add module param to control PMU virtualization")
Reported-by: Konstantin Khorenko
Closes: https://lore.kernel.org/all/20231109180646.2963718-2-khorenko@virtuozzo.com
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c           | 20 ++++++++++++++++++--
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++--------------
 2 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 1b74a29ed250..b52bab7dc422 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -739,6 +739,8 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
+        struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
         if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
                 return;
 
@@ -748,8 +750,22 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
          */
         kvm_pmu_reset(vcpu);
 
-        bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
-        static_call(kvm_x86_pmu_refresh)(vcpu);
+        pmu->version = 0;
+        pmu->nr_arch_gp_counters = 0;
+        pmu->nr_arch_fixed_counters = 0;
+        pmu->counter_bitmask[KVM_PMC_GP] = 0;
+        pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+        pmu->reserved_bits = 0xffffffff00200000ull;
+        pmu->raw_event_mask = X86_RAW_EVENT_MASK;
+        pmu->global_ctrl_mask = ~0ull;
+        pmu->global_status_mask = ~0ull;
+        pmu->fixed_ctr_ctrl_mask = ~0ull;
+        pmu->pebs_enable_mask = ~0ull;
+        pmu->pebs_data_cfg_mask = ~0ull;
+        bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+
+        if (vcpu->kvm->arch.enable_pmu)
+                static_call(kvm_x86_pmu_refresh)(vcpu);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c3a841d8df27..0d2fd9fdcf4b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -463,19 +463,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         u64 counter_mask;
         int i;
 
-        pmu->nr_arch_gp_counters = 0;
-        pmu->nr_arch_fixed_counters = 0;
-        pmu->counter_bitmask[KVM_PMC_GP] = 0;
-        pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
-        pmu->version = 0;
-        pmu->reserved_bits = 0xffffffff00200000ull;
-        pmu->raw_event_mask = X86_RAW_EVENT_MASK;
-        pmu->global_ctrl_mask = ~0ull;
-        pmu->global_status_mask = ~0ull;
-        pmu->fixed_ctr_ctrl_mask = ~0ull;
-        pmu->pebs_enable_mask = ~0ull;
-        pmu->pebs_data_cfg_mask = ~0ull;
-
         memset(&lbr_desc->records, 0, sizeof(lbr_desc->records));
 
         /*
@@ -487,8 +474,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                 return;
 
         entry = kvm_find_cpuid_entry(vcpu, 0xa);
-        if (!entry || !vcpu->kvm->arch.enable_pmu)
+        if (!entry)
                 return;
+
         eax.full = entry->eax;
         edx.full = entry->edx;
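A self-contained sketch of why the stale metadata is costly (an illustrative
program, not KVM code; the bitmap values are made up):

#include <stdio.h>

/* Walk each set bit, as kvm_pmu_trigger_event() does for all_valid_pmc_idx. */
static int count_candidate_pmcs(unsigned long long valid_pmc_bitmap)
{
        int visited = 0;

        while (valid_pmc_bitmap) {
                valid_pmc_bitmap &= valid_pmc_bitmap - 1; /* clear lowest set bit */
                visited++;
        }
        return visited;
}

int main(void)
{
        /* PMU "configured" despite enable_pmu=false: 6 GP counters advertised. */
        printf("stale metadata: %d PMCs touched per emulated insn\n",
               count_candidate_pmcs(0x3f));
        /* Metadata zeroed by kvm_pmu_refresh(): nothing to scan. */
        printf("zeroed metadata: %d PMCs touched per emulated insn\n",
               count_candidate_pmcs(0x0));
        return 0;
}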
From patchwork Fri Nov 10 02:28:49 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452049
Message-ID: <20231110022857.1273836-3-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:49 -0800
Subject: [PATCH 02/10] KVM: x86/pmu: Add common define to capture fixed
 counters offset
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Add a common define to "officially" solidify KVM's split of counters,
i.e. to commit to using bits 31:0 to track general purpose counters and
bits 63:32 to track fixed counters (which only Intel supports).  KVM
already bleeds this behavior all over common PMU code, and adding a
KVM-defined macro allows clarifying that the value is a _base_, as
opposed to the _flag_ that is used to access fixed PMCs via RDPMC
(which perf confusingly calls INTEL_PMC_FIXED_RDPMC_BASE).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c           |  8 ++++----
 arch/x86/kvm/pmu.h           |  4 +++-
 arch/x86/kvm/vmx/pmu_intel.c | 12 ++++++------
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b52bab7dc422..714fa6dd912e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -67,7 +67,7 @@ static const struct x86_cpu_id vmx_pebs_pdist_cpu[] = {
  * all perf counters (both gp and fixed). The mapping relationship
  * between pmc and perf counters is as the following:
  * * Intel: [0 .. KVM_INTEL_PMC_MAX_GENERIC-1] <=> gp counters
- *          [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2] <=> fixed
+ *          [KVM_FIXED_PMC_BASE_IDX .. KVM_FIXED_PMC_BASE_IDX + 2] <=> fixed
  * * AMD:   [0 .. AMD64_NUM_COUNTERS-1] and, for families 15H
  *          and later, [0 .. AMD64_NUM_COUNTERS_CORE-1] <=> gp counters
  */
@@ -411,7 +411,7 @@ static bool is_gp_event_allowed(struct kvm_x86_pmu_event_filter *f,
 static bool is_fixed_event_allowed(struct kvm_x86_pmu_event_filter *filter,
                                    int idx)
 {
-        int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
+        int fixed_idx = idx - KVM_FIXED_PMC_BASE_IDX;
 
         if (filter->action == KVM_PMU_EVENT_DENY &&
             test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
@@ -465,7 +465,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
         if (pmc_is_fixed(pmc)) {
                 fixed_ctr_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-                                                  pmc->idx - INTEL_PMC_IDX_FIXED);
+                                                  pmc->idx - KVM_FIXED_PMC_BASE_IDX);
                 if (fixed_ctr_ctrl & 0x1)
                         eventsel |= ARCH_PERFMON_EVENTSEL_OS;
                 if (fixed_ctr_ctrl & 0x2)
@@ -831,7 +831,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
                 select_user = config & ARCH_PERFMON_EVENTSEL_USR;
         } else {
                 config = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
-                                          pmc->idx - INTEL_PMC_IDX_FIXED);
+                                          pmc->idx - KVM_FIXED_PMC_BASE_IDX);
                 select_os = config & 0x1;
                 select_user = config & 0x2;
         }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 87ecf22f5b25..7ffa4f1dedb0 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -18,6 +18,8 @@
 #define VMWARE_BACKDOOR_PMC_REAL_TIME     0x10001
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME 0x10002
 
+#define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
+
 struct kvm_pmu_ops {
         struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
@@ -130,7 +132,7 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
 
         if (pmc_is_fixed(pmc))
                 return fixed_ctrl_field(pmu->fixed_ctr_ctrl,
-                                        pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3;
+                                        pmc->idx - KVM_FIXED_PMC_BASE_IDX) & 0x3;
 
         return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;
 }
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 0d2fd9fdcf4b..61252bb733c4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -42,18 +42,18 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 
                 pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
 
-                __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
+                __set_bit(KVM_FIXED_PMC_BASE_IDX + i, pmu->pmc_in_use);
                 kvm_pmu_request_counter_reprogram(pmc);
         }
 }
 
 static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-        if (pmc_idx < INTEL_PMC_IDX_FIXED) {
+        if (pmc_idx < KVM_FIXED_PMC_BASE_IDX) {
                 return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
                                   MSR_P6_EVNTSEL0);
         } else {
-                u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
+                u32 idx = pmc_idx - KVM_FIXED_PMC_BASE_IDX;
 
                 return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
         }
@@ -508,7 +508,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
         for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
                 pmu->fixed_ctr_ctrl_mask &= ~(0xbull << (i * 4));
         counter_mask = ~(((1ull << pmu->nr_arch_gp_counters) - 1) |
-                (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
+                (((1ull << pmu->nr_arch_fixed_counters) - 1) << KVM_FIXED_PMC_BASE_IDX));
         pmu->global_ctrl_mask = counter_mask;
 
         /*
@@ -552,7 +552,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                         pmu->reserved_bits &= ~ICL_EVENTSEL_ADAPTIVE;
                         for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
                                 pmu->fixed_ctr_ctrl_mask &=
-                                        ~(1ULL << (INTEL_PMC_IDX_FIXED + i * 4));
+                                        ~(1ULL << (KVM_FIXED_PMC_BASE_IDX + i * 4));
                         }
                         pmu->pebs_data_cfg_mask = ~0xff00000full;
                 } else {
@@ -578,7 +578,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
         for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
                 pmu->fixed_counters[i].type = KVM_PMC_FIXED;
                 pmu->fixed_counters[i].vcpu = vcpu;
-                pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
+                pmu->fixed_counters[i].idx = i + KVM_FIXED_PMC_BASE_IDX;
                 pmu->fixed_counters[i].current_config = 0;
                 pmu->fixed_counters[i].eventsel = intel_get_fixed_pmc_eventsel(i);
         }
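As a point of reference, the base-vs-flag distinction in miniature (a
standalone sketch with illustrative constants; only the bit-30 RDPMC encoding
and the base-32 internal index come from the patch itself):

#include <stdio.h>

#define KVM_FIXED_PMC_BASE_IDX  32          /* base: fixed PMC i tracked at bit 32 + i */
#define RDPMC_FIXED_FLAG        (1u << 30)  /* flag: RDPMC selects fixed PMC i via ECX = flag | i */

int main(void)
{
        unsigned int fixed_pmc = 1;

        printf("internal bitmap index: %u\n", KVM_FIXED_PMC_BASE_IDX + fixed_pmc);
        printf("RDPMC ECX encoding:    0x%x\n", RDPMC_FIXED_FLAG | fixed_pmc);
        return 0;
}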
From patchwork Fri Nov 10 02:28:50 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452050
Message-ID: <20231110022857.1273836-4-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:50 -0800
Subject: [PATCH 03/10] KVM: x86/pmu: Move pmc_idx => pmc translation helper
 to common code
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Add a common helper for *internal* PMC lookups, and delete the ops hook
and Intel's implementation.  Keep AMD's implementation, but rename it to
amd_pmu_get_pmc() to make it somewhat more obvious that it's suited for
both KVM-internal and guest-initiated lookups.

Because KVM tracks all counters in a single bitmap, getting a counter
when iterating over a bitmap, e.g. of all valid PMCs, requires a small
amount of math that, while simple, isn't super obvious and doesn't use
the same semantics as PMC lookups from RDPMC!  Although AMD doesn't
support fixed counters, the common PMU code still behaves as if there
were a split, the high half of which just happens to always be empty.

Opportunistically add a comment to explain both what is going on, and
why KVM uses a single bitmap, e.g. the boilerplate for iterating over
separate bitmaps could be done via macros, so it's not (just) about
deduplicating code.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 -
 arch/x86/kvm/pmu.c                     |  8 +++----
 arch/x86/kvm/pmu.h                     | 29 +++++++++++++++++++++++++-
 arch/x86/kvm/svm/pmu.c                 |  7 +++----
 arch/x86/kvm/vmx/pmu_intel.c           | 15 +-------------
 5 files changed, 36 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index d7eebee4450c..e5e7f036587f 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -12,7 +12,6 @@ BUILD_BUG_ON(1)
  * a NULL definition, for example if "static_call_cond()" will be used
  * at the call sites.
  */
-KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)
 KVM_X86_PMU_OP(is_valid_rdpmc_ecx)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 714fa6dd912e..6ee05ad35f55 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -505,7 +505,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
         int bit;
 
         for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
-                struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
+                struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
                 if (unlikely(!pmc)) {
                         clear_bit(bit, pmu->reprogram_pmi);
@@ -715,7 +715,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
         bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
         for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
                 if (!pmc)
                         continue;
 
@@ -791,7 +791,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
                       pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
         for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
 
                 if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
                         pmc_stop_counter(pmc);
@@ -846,7 +846,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
         int i;
 
         for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+                pmc = kvm_pmc_idx_to_pmc(pmu, i);
 
                 if (!pmc || !pmc_event_is_allowed(pmc))
                         continue;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7ffa4f1dedb0..2235772a495b 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -4,6 +4,8 @@
 
 #include
 
+#include
+
 #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
 #define pmu_to_vcpu(pmu)  (container_of((pmu), struct kvm_vcpu, arch.pmu))
 #define pmc_to_pmu(pmc)   (&(pmc)->vcpu->arch.pmu)
@@ -21,7 +23,6 @@
 #define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
 
 struct kvm_pmu_ops {
-        struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
                                             unsigned int idx, u64 *mask);
         struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, u32 msr);
@@ -56,6 +57,32 @@ static inline bool kvm_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
         return pmu->version > 1;
 }
 
+/*
+ * KVM tracks all counters in 64-bit bitmaps, with general purpose counters
+ * mapped to bits 31:0 and fixed counters mapped to 63:32, e.g. fixed counter 0
+ * is tracked internally via index 32.  On Intel (AMD doesn't support fixed
+ * counters), this mirrors how fixed counters are mapped to PERF_GLOBAL_CTRL
+ * and similar MSRs, i.e. tracking fixed counters at base index 32 reduces the
+ * amount of boilerplate needed to iterate over PMCs *and* simplifies common
+ * enable/disable/reset operations.
+ *
+ * WARNING! This helper is only for lookups that are initiated by KVM, it is
+ * NOT safe for guest lookups, e.g. will do the wrong thing if passed a raw
+ * ECX value from RDPMC (fixed counters are accessed by setting bit 30 in ECX
+ * for RDPMC, not by adding 32 to the fixed counter index).
+ */
+static inline struct kvm_pmc *kvm_pmc_idx_to_pmc(struct kvm_pmu *pmu, int idx)
+{
+        if (idx < pmu->nr_arch_gp_counters)
+                return &pmu->gp_counters[idx];
+
+        idx -= KVM_FIXED_PMC_BASE_IDX;
+        if (idx >= 0 && idx < pmu->nr_arch_fixed_counters)
+                return &pmu->fixed_counters[idx];
+
+        return NULL;
+}
+
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 1fafc46f61c9..b6c1d1c3f204 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -25,7 +25,7 @@ enum pmu_type {
         PMU_TYPE_EVNTSEL,
 };
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+static struct kvm_pmc *amd_pmu_get_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
         unsigned int num_counters = pmu->nr_arch_gp_counters;
 
@@ -70,7 +70,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
                 return NULL;
         }
 
-        return amd_pmc_idx_to_pmc(pmu, idx);
+        return amd_pmu_get_pmc(pmu, idx);
 }
 
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
@@ -84,7 +84,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
         unsigned int idx, u64 *mask)
 {
-        return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx);
+        return amd_pmu_get_pmc(vcpu_to_pmu(vcpu), idx);
 }
 
 static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
@@ -226,7 +226,6 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
-        .pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
         .rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
         .msr_idx_to_pmc = amd_msr_idx_to_pmc,
         .is_valid_rdpmc_ecx = amd_is_valid_rdpmc_ecx,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 61252bb733c4..4254411be467 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -47,18 +47,6 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
         }
 }
 
-static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-        if (pmc_idx < KVM_FIXED_PMC_BASE_IDX) {
-                return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
-                                  MSR_P6_EVNTSEL0);
-        } else {
-                u32 idx = pmc_idx - KVM_FIXED_PMC_BASE_IDX;
-
-                return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
-        }
-}
-
 static u32 intel_rdpmc_get_masked_idx(struct kvm_pmu *pmu, u32 idx)
 {
         /*
@@ -710,7 +698,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 
         for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
                          X86_PMC_IDX_MAX) {
-                pmc = intel_pmc_idx_to_pmc(pmu, bit);
+                pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
                 if (!pmc || !pmc_speculative_in_use(pmc) ||
                     !pmc_is_globally_enabled(pmc) || !pmc->perf_event)
@@ -727,7 +715,6 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
-        .pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
         .rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
         .msr_idx_to_pmc = intel_msr_idx_to_pmc,
         .is_valid_rdpmc_ecx = intel_is_valid_rdpmc_ecx,
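A standalone sketch of the translation the new helper performs (simplified
stand-in types and made-up counter counts, not KVM's actual structures):

#include <stdio.h>
#include <stddef.h>

#define KVM_FIXED_PMC_BASE_IDX 32

struct pmc { int idx; };

struct pmu {
        int nr_gp;                      /* e.g. 8 GP counters -> indices 0..7 */
        int nr_fixed;                   /* e.g. 3 fixed counters -> indices 32..34 */
        struct pmc gp[32];
        struct pmc fixed[32];
};

/* Mirrors kvm_pmc_idx_to_pmc(): GP counters at 0..31, fixed at 32 and up. */
static struct pmc *pmc_idx_to_pmc(struct pmu *pmu, int idx)
{
        if (idx < pmu->nr_gp)
                return &pmu->gp[idx];

        idx -= KVM_FIXED_PMC_BASE_IDX;
        if (idx >= 0 && idx < pmu->nr_fixed)
                return &pmu->fixed[idx];

        return NULL;    /* e.g. indices 8..31 on a PMU with 8 GP counters */
}

int main(void)
{
        struct pmu pmu = { .nr_gp = 8, .nr_fixed = 3 };

        printf("idx 33 -> %s\n", pmc_idx_to_pmc(&pmu, 33) ? "fixed[1]" : "NULL");
        printf("idx 12 -> %s\n", pmc_idx_to_pmc(&pmu, 12) ? "valid" : "NULL (hole)");
        return 0;
}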
From patchwork Fri Nov 10 02:28:51 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452051
Message-ID: <20231110022857.1273836-5-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:51 -0800
Subject: [PATCH 04/10] KVM: x86/pmu: Snapshot and clear reprogramming bitmap
 before reprogramming
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Refactor the handling of the reprogramming bitmap to snapshot and clear
to-be-processed bits before doing the reprogramming, and then explicitly
set bits for PMCs that need to be reprogrammed (again).

This will allow adding a macro to iterate over all valid PMCs without
having to add special handling for the reprogramming bitmap, which (a)
can have bits set for non-existent PMCs and (b) needs to clear such bits
to avoid wasting cycles in perpetuity.

Note, the existing behavior of clearing bits after reprogramming does
NOT have a race with kvm_vm_ioctl_set_pmu_event_filter().  Setting a new
PMU filter synchronizes SRCU _before_ setting the bitmap, i.e.
guarantees that the vCPU isn't in the middle of reprogramming with a
stale filter prior to setting the bitmap.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              | 52 ++++++++++++++++++---------------
 2 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d8bc9ba88cfc..22ba24d0fd4f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -535,6 +535,7 @@ struct kvm_pmc {
 #define KVM_PMC_MAX_FIXED 3
 #define MSR_ARCH_PERFMON_FIXED_CTR_MAX (MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC 6
+
 struct kvm_pmu {
         u8 version;
         unsigned nr_arch_gp_counters;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 6ee05ad35f55..ee921b24d9e4 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -444,7 +444,7 @@ static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
                check_pmu_event_filter(pmc);
 }
 
-static void reprogram_counter(struct kvm_pmc *pmc)
+static int reprogram_counter(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
         u64 eventsel = pmc->eventsel;
@@ -455,7 +455,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
         emulate_overflow = pmc_pause_counter(pmc);
 
         if (!pmc_event_is_allowed(pmc))
-                goto reprogram_complete;
+                return 0;
 
         if (emulate_overflow)
                 __kvm_perf_overflow(pmc, false);
@@ -476,43 +476,49 @@ static void reprogram_counter(struct kvm_pmc *pmc)
         }
 
         if (pmc->current_config == new_config && pmc_resume_counter(pmc))
-                goto reprogram_complete;
+                return 0;
 
         pmc_release_perf_event(pmc);
 
         pmc->current_config = new_config;
 
-        /*
-         * If reprogramming fails, e.g. due to contention, leave the counter's
-         * reprogram bit set, i.e. opportunistically try again on the next PMU
-         * refresh.  Don't make a new request as doing so can stall the guest
-         * if reprogramming repeatedly fails.
-         */
-        if (pmc_reprogram_counter(pmc, PERF_TYPE_RAW,
-                                  (eventsel & pmu->raw_event_mask),
-                                  !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
-                                  !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
-                                  eventsel & ARCH_PERFMON_EVENTSEL_INT))
-                return;
-
-reprogram_complete:
-        clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
+        return pmc_reprogram_counter(pmc, PERF_TYPE_RAW,
+                                     (eventsel & pmu->raw_event_mask),
+                                     !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
+                                     !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
+                                     eventsel & ARCH_PERFMON_EVENTSEL_INT);
 }
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
+        DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
         int bit;
 
-        for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
+        bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);
+
+        /*
+         * The reprogramming bitmap can be written asynchronously by something
+         * other than the task that holds vcpu->mutex, take care to clear only
+         * the bits that will actually be processed.
+         */
+        BUILD_BUG_ON(sizeof(bitmap) != sizeof(atomic64_t));
+        atomic64_andnot(*(s64 *)bitmap, &pmu->__reprogram_pmi);
+
+        for_each_set_bit(bit, bitmap, X86_PMC_IDX_MAX) {
                 struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
-                if (unlikely(!pmc)) {
-                        clear_bit(bit, pmu->reprogram_pmi);
+                if (unlikely(!pmc))
                         continue;
-                }
 
-                reprogram_counter(pmc);
+                /*
+                 * If reprogramming fails, e.g. due to contention, re-set the
+                 * reprogram bit, i.e. opportunistically try again on the
+                 * next PMU refresh.  Don't make a new request as doing so can
+                 * stall the guest if reprogramming repeatedly fails.
+                 */
+                if (reprogram_counter(pmc))
+                        set_bit(pmc->idx, pmu->reprogram_pmi);
         }
 
         /*
From patchwork Fri Nov 10 02:28:52 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452052
Message-ID: <20231110022857.1273836-6-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:52 -0800
Subject: [PATCH 05/10] KVM: x86/pmu: Add macros to iterate over all PMCs
 given a bitmap
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Add and use kvm_for_each_pmc() to dedup a variety of open coded for-loops
that iterate over valid PMCs given a bitmap (and because seeing checkpatch
whine about bad macro style is always amusing).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c           | 26 +++++++-------------------
 arch/x86/kvm/pmu.h           |  6 ++++++
 arch/x86/kvm/vmx/pmu_intel.c |  7 ++-----
 3 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index ee921b24d9e4..0e2175170038 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -493,6 +493,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
         DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+        struct kvm_pmc *pmc;
         int bit;
 
         bitmap_copy(bitmap, pmu->reprogram_pmi, X86_PMC_IDX_MAX);
@@ -505,12 +506,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
         BUILD_BUG_ON(sizeof(bitmap) != sizeof(atomic64_t));
         atomic64_andnot(*(s64 *)bitmap, &pmu->__reprogram_pmi);
 
-        for_each_set_bit(bit, bitmap, X86_PMC_IDX_MAX) {
-                struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
-
-                if (unlikely(!pmc))
-                        continue;
-
+        kvm_for_each_pmc(pmu, pmc, bit, bitmap) {
                 /*
                  * If reprogramming fails, e.g. due to contention, re-set the
                  * reprogram bit, i.e. opportunistically try again on the
@@ -720,11 +716,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 
         bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
-        for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-                if (!pmc)
-                        continue;
-
+        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
                 pmc_stop_counter(pmc);
                 pmc->counter = 0;
                 pmc->emulated_counter = 0;
@@ -796,10 +788,8 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
         bitmap_andnot(bitmask, pmu->all_valid_pmc_idx,
                       pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
-        for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-
-                if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
+        kvm_for_each_pmc(pmu, pmc, i, bitmask) {
+                if (pmc->perf_event && !pmc_speculative_in_use(pmc))
                         pmc_stop_counter(pmc);
         }
 
@@ -851,10 +841,8 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
         struct kvm_pmc *pmc;
         int i;
 
-        for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, i);
-
-                if (!pmc || !pmc_event_is_allowed(pmc))
+        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
+                if (!pmc_event_is_allowed(pmc))
                         continue;
 
                 /* Ignore checks for edge detect, pin control, invert and CMASK bits */
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 2235772a495b..cb62a4e44849 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -83,6 +83,12 @@ static inline struct kvm_pmc *kvm_pmc_idx_to_pmc(struct kvm_pmu *pmu, int idx)
         return NULL;
 }
 
+#define kvm_for_each_pmc(pmu, pmc, i, bitmap)                   \
+        for_each_set_bit(i, bitmap, X86_PMC_IDX_MAX)            \
+                if (!(pmc = kvm_pmc_idx_to_pmc(pmu, i)))        \
+                        continue;                               \
+                else                                            \
+
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
         struct kvm_pmu *pmu = pmc_to_pmu(pmc);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 4254411be467..ee3e122d3617 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -696,11 +696,8 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
         struct kvm_pmc *pmc = NULL;
         int bit, hw_idx;
 
-        for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
-                         X86_PMC_IDX_MAX) {
-                pmc = kvm_pmc_idx_to_pmc(pmu, bit);
-
-                if (!pmc || !pmc_speculative_in_use(pmc) ||
+        kvm_for_each_pmc(pmu, pmc, bit, (unsigned long *)&pmu->global_ctrl) {
+                if (!pmc_speculative_in_use(pmc) ||
                     !pmc_is_globally_enabled(pmc) || !pmc->perf_event)
                         continue;
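The macro's dangling-else trick, shown in isolation (a generic
filtered-iterator sketch, not the KVM macro itself; lookup() and its
zero-means-invalid convention are invented for the example):

#include <stdio.h>

static int lookup(int i)
{
        return (i % 2) ? i * 10 : 0;    /* 0 plays the role of a NULL pmc */
}

/*
 * The dangling "else" binds the caller's loop body to the filter: when the
 * lookup fails, "continue" skips the iteration; otherwise the body runs as
 * the else branch, so the caller needs no extra braces.
 */
#define for_each_valid(val, i, n)               \
        for (i = 0; i < (n); i++)               \
                if (!((val) = lookup(i)))       \
                        continue;               \
                else

int main(void)
{
        int v, i;

        for_each_valid(v, i, 6)
                printf("i=%d val=%d\n", i, v);
        return 0;
}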
From patchwork Fri Nov 10 02:28:53 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452053
Message-ID: <20231110022857.1273836-7-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:53 -0800
Subject: [PATCH 06/10] KVM: x86/pmu: Process only enabled PMCs when emulating
 events in software
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Mask off disabled counters based on PERF_GLOBAL_CTRL *before* iterating
over PMCs to emulate (branch) instruction required events in software.
In the common case where the guest isn't utilizing the PMU, pre-checking
for enabled counters turns a relatively expensive search into a few AND
uops and a Jcc.

Sadly, PMUs without PERF_GLOBAL_CTRL, e.g. most existing AMD CPUs, are
out of luck as there is no way to check that a PMC isn't being used
without checking the PMC's event selector.

Cc: Konstantin Khorenko
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0e2175170038..488d21024a92 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -837,11 +837,20 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 {
+        DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
         struct kvm_pmc *pmc;
         int i;
 
-        kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
+        BUILD_BUG_ON(sizeof(pmu->global_ctrl) * BITS_PER_BYTE != X86_PMC_IDX_MAX);
+
+        if (!kvm_pmu_has_perf_global_ctrl(pmu))
+                bitmap_copy(bitmap, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+        else if (!bitmap_and(bitmap, pmu->all_valid_pmc_idx,
+                             (unsigned long *)&pmu->global_ctrl, X86_PMC_IDX_MAX))
+                return;
+
+        kvm_for_each_pmc(pmu, pmc, i, bitmap) {
                 if (!pmc_event_is_allowed(pmc))
                         continue;
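The early-out in miniature (plain C with made-up bitmap values):

#include <stdio.h>

int main(void)
{
        unsigned long long all_valid_pmc_idx = 0x70000003full; /* 6 GP + 3 fixed */
        unsigned long long global_ctrl = 0;     /* guest disabled every counter */
        unsigned long long candidates = all_valid_pmc_idx & global_ctrl;

        if (!candidates) {
                puts("nothing enabled, skip the PMC walk entirely");
                return 0;
        }
        /* ...otherwise walk only the enabled candidates... */
        return 0;
}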
From patchwork Fri Nov 10 02:28:54 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452054
Message-ID: <20231110022857.1273836-8-seanjc@google.com>
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Date: Thu, 9 Nov 2023 18:28:54 -0800
Subject: [PATCH 07/10] KVM: x86/pmu: Snapshot event selectors that KVM
 emulates in software
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Konstantin Khorenko,
 Jim Mattson

Snapshot the event selectors for the events that KVM emulates in
software, which is currently instructions retired and branch
instructions retired.  The event selectors are tied to the underlying
CPU, i.e. are constant for a given platform even though perf doesn't
manage the mappings as such.

Getting the event selectors from perf isn't exactly cheap, especially
if mitigations are enabled, as at least one indirect call is involved.

Snapshot the values in KVM instead of optimizing perf as working with
the raw event selectors will be required if KVM ever wants to emulate
events that aren't part of perf's uABI, i.e. that don't have an
"enum perf_hw_id" entry.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c        | 17 ++++++++---------
 arch/x86/kvm/pmu.h        | 13 ++++++++++++-
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/x86.c        |  6 +++---
 4 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 488d21024a92..45cb8b2a024b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -29,6 +29,9 @@
 struct x86_pmu_capability __read_mostly kvm_pmu_cap;
 EXPORT_SYMBOL_GPL(kvm_pmu_cap);
 
+struct kvm_pmu_emulated_event_selectors __read_mostly kvm_pmu_eventsel;
+EXPORT_SYMBOL_GPL(kvm_pmu_eventsel);
+
 /* Precise Distribution of Instructions Retired (PDIR) */
 static const struct x86_cpu_id vmx_pebs_pdir_cpu[] = {
         X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, NULL),
@@ -809,13 +812,6 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
         kvm_pmu_request_counter_reprogram(pmc);
 }
 
-static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
-                                             unsigned int perf_hw_id)
-{
-        return !((pmc->eventsel ^ perf_get_hw_event_config(perf_hw_id)) &
-                 AMD64_RAW_EVENT_MASK_NB);
-}
-
 static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 {
         bool select_os, select_user;
@@ -835,7 +831,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
         return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
 }
 
-void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 {
         DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -855,7 +851,10 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
                         continue;
 
                 /* Ignore checks for edge detect, pin control, invert and CMASK bits */
-                if (eventsel_match_perf_hw_id(pmc, perf_hw_id) && cpl_is_matched(pmc))
+                if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
+                        continue;
+
+                if (cpl_is_matched(pmc))
                         kvm_pmu_incr_counter(pmc);
         }
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cb62a4e44849..9dc5f549c98c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -22,6 +22,11 @@
 
 #define KVM_FIXED_PMC_BASE_IDX INTEL_PMC_IDX_FIXED
 
+struct kvm_pmu_emulated_event_selectors {
+        u64 INSTRUCTIONS_RETIRED;
+        u64 BRANCH_INSTRUCTIONS_RETIRED;
+};
+
 struct kvm_pmu_ops {
         struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
                                             unsigned int idx, u64 *mask);
@@ -171,6 +176,7 @@ static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc)
 }
 
 extern struct x86_pmu_capability kvm_pmu_cap;
+extern struct kvm_pmu_emulated_event_selectors kvm_pmu_eventsel;
 
 static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 {
@@ -212,6 +218,11 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
                                           pmu_ops->MAX_NR_GP_COUNTERS);
         kvm_pmu_cap.num_counters_fixed = min(kvm_pmu_cap.num_counters_fixed,
                                              KVM_PMC_MAX_FIXED);
+
+        kvm_pmu_eventsel.INSTRUCTIONS_RETIRED =
+                perf_get_hw_event_config(PERF_COUNT_HW_INSTRUCTIONS);
+        kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED =
+                perf_get_hw_event_config(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
 }
 
 static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
@@ -259,7 +270,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
-void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id);
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c5ec0ef51ff7..cf985085467b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3564,7 +3564,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
                 return 1;
         }
 
-        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED);
 
         if (CC(evmptrld_status == EVMPTRLD_VMFAIL))
                 return nested_vmx_failInvalid(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index efbf52a9dc83..9d9b5f9e4b28 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8839,7 +8839,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
         if (unlikely(!r))
                 return 0;
 
-        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED);
 
         /*
          * rflags is the old, "raw" value of the flags.  The new value has
@@ -9152,9 +9152,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
                  */
                 if (!ctxt->have_exception ||
                     exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
-                        kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+                        kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED);
                         if (ctxt->is_branch)
-                                kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+                                kvm_pmu_trigger_event(vcpu, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED);
                         kvm_rip_write(vcpu, ctxt->eip);
                         if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
                                 r = kvm_vcpu_do_singlestep(vcpu);
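The change boils down to caching a constant at init time rather than
re-deriving it on every emulated instruction; a generic sketch (function and
field names invented for illustration; 0xC0/0xC4 are Intel's architectural
selectors for the two events):

#include <stdio.h>

/* Stand-in for perf_get_hw_event_config(): imagine an indirect call here. */
static unsigned long long query_event_config(int hw_id)
{
        return hw_id == 0 ? 0xc0ULL : 0xc4ULL; /* insns / branches retired */
}

/* Snapshot once at init... */
static struct {
        unsigned long long insns_retired;
        unsigned long long branches_retired;
} eventsel;

static void init_eventsel(void)
{
        eventsel.insns_retired = query_event_config(0);
        eventsel.branches_retired = query_event_config(1);
}

int main(void)
{
        init_eventsel();
        /* ...then hot paths read the cached selector, no indirect call. */
        printf("insns retired selector: 0x%llx\n", eventsel.insns_retired);
        return 0;
}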

From patchwork Fri Nov 10 02:28:55 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452055
X-Mailing-List: kvm@vger.kernel.org
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:55 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-9-seanjc@google.com>
Subject: [PATCH 08/10] KVM: x86/pmu: Expand the comment about what bits are
 checked when emulating events
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Konstantin Khorenko , Jim Mattson

Expand the comment about what bits are and aren't checked when emulating
PMC events in software.  As pointed out by Jim, AMD's mask includes bits
35:32, which on Intel overlap with the IN_TX and IN_TXCP bits (32 and 33)
as well as reserved bits (34 and 35).

Checking the IN_TX* bits is actually correct, as it's safe to assert that
the vCPU can't be in an HLE/RTM transaction if KVM is emulating an
instruction, i.e. KVM *shouldn't* count if either of those bits is set.

For the reserved bits, KVM has equal odds of being right if Intel adds
new behavior, i.e. ignoring them is just as likely to be correct as
checking them.

Opportunistically explain *why* the other flags aren't checked.

Suggested-by: Jim Mattson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 45cb8b2a024b..ba561849fd3e 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -850,7 +850,20 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		if (!pmc_event_is_allowed(pmc))
 			continue;

-		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
+		/*
+		 * Ignore checks for edge detect (all events currently emulated
+		 * by KVM are always rising edges), pin control (unsupported
+		 * by modern CPUs), and counter mask and its invert flag (KVM
+		 * doesn't emulate multiple events in a single clock cycle).
+		 *
+		 * Note, the uppermost nibble of AMD's mask overlaps Intel's
+		 * IN_TX (bit 32) and IN_TXCP (bit 33), as well as two reserved
+		 * bits (bits 35:34).  Checking the "in HLE/RTM transaction"
+		 * flags is correct as the vCPU can't be in a transaction if
+		 * KVM is emulating an instruction.  Checking the reserved bits
+		 * might be wrong if they are defined in the future, but so
+		 * could ignoring them, so do the simple thing for now.
+		 */
 		if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
 			continue;

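
Aside: a concrete reading of the new comment, as a hypothetical userspace
check (bit positions are taken from the comment itself; the macro names
are made up): because bits 35:32 are inside the compared mask, a guest
selector with IN_TX or IN_TXCP set can never match an emulated event,
which is exactly the desired "don't count while in a transaction"
behavior when KVM is emulating an instruction.

    #include <assert.h>
    #include <stdint.h>

    #define RAW_EVENT_MASK_NB 0xf0000ffffULL /* assumed; covers bits 35:32 */
    #define IN_TX             (1ULL << 32)   /* Intel: count only inside HLE/RTM */
    #define IN_TXCP           (1ULL << 33)   /* Intel: checkpointed in-TX counts */

    int main(void)
    {
        uint64_t emulated = 0xc0;        /* emulated event, upper bits clear */
        uint64_t guest = 0xc0 | IN_TX;   /* guest asked for in-TX counts only */

        /* Bit 32 differs and is inside the mask, so the match fails. */
        assert(((guest ^ emulated) & RAW_EVENT_MASK_NB) != 0);
        return 0;
    }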

From patchwork Fri Nov 10 02:28:56 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452056
X-Mailing-List: kvm@vger.kernel.org
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:56 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-10-seanjc@google.com>
Subject: [PATCH 09/10] KVM: x86/pmu: Check eventsel first when emulating
 (branch) insns retired
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Konstantin Khorenko , Jim Mattson

When triggering events, i.e. emulating PMC events in software, check for
a matching event selector before checking whether the event is allowed.
The "is allowed" check *might* be cheap, but it could also be very
costly, e.g. if userspace has defined a large PMU event filter.  The
event selector check, on the other hand, is all but guaranteed to be
<10 uops, e.g. looks something like:

   0xffffffff8105e615 <+5>:	movabs $0xf0000ffff,%rax
   0xffffffff8105e61f <+15>:	xor    %rdi,%rsi
   0xffffffff8105e622 <+18>:	test   %rax,%rsi
   0xffffffff8105e625 <+21>:	sete   %al

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index ba561849fd3e..a5ea729b16f2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -847,9 +847,6 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		return;

 	kvm_for_each_pmc(pmu, pmc, i, bitmap) {
-		if (!pmc_event_is_allowed(pmc))
-			continue;
-
 		/*
 		 * Ignore checks for edge detect (all events currently emulated
 		 * by KVM are always rising edges), pin control (unsupported
@@ -864,11 +861,11 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		 * might be wrong if they are defined in the future, but so
 		 * could ignoring them, so do the simple thing for now.
 		 */
-		if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
+		if (((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB) ||
+		    !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
 			continue;

-		if (cpl_is_matched(pmc))
-			kvm_pmu_incr_counter(pmc);
+		kvm_pmu_incr_counter(pmc);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
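
Aside: the reordering is the classic cheap-predicate-first pattern.  A
sketch of the resulting shape (not kernel code; the filter stand-in is
hypothetical and always permissive here):

    #include <stdbool.h>
    #include <stdint.h>

    #define RAW_EVENT_MASK_NB 0xf0000ffffULL /* assumed, as above */

    /*
     * Stand-in for pmc_event_is_allowed(): a real PMU event filter may
     * require searching a large, userspace-defined list.
     */
    static bool event_is_allowed_slow(uint64_t eventsel)
    {
        return true; /* placeholder */
    }

    static bool should_count(uint64_t pmc_eventsel, uint64_t eventsel)
    {
        /* A few uops; rejects most PMCs before ever touching the filter. */
        if ((pmc_eventsel ^ eventsel) & RAW_EVENT_MASK_NB)
            return false;

        return event_is_allowed_slow(pmc_eventsel);
    }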

From patchwork Fri Nov 10 02:28:57 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13452057
X-Mailing-List: kvm@vger.kernel.org
Reply-To: Sean Christopherson
Date: Thu, 9 Nov 2023 18:28:57 -0800
In-Reply-To: <20231110022857.1273836-1-seanjc@google.com>
References: <20231110022857.1273836-1-seanjc@google.com>
Message-ID: <20231110022857.1273836-11-seanjc@google.com>
Subject: [PATCH 10/10] KVM: x86/pmu: Avoid CPL lookup if PMC enabling for
 USER and KERNEL is the same
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Konstantin Khorenko , Jim Mattson

Don't bother querying the CPL if a PMC is (not) counting for both USER
and KERNEL, i.e. if the end result is guaranteed to be the same
regardless of the CPL.  Querying the CPL on Intel requires a VMREAD,
i.e. isn't free, whereas a single CMP+Jcc is cheap.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a5ea729b16f2..231604d6dc86 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -828,6 +828,13 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 		select_user = config & 0x2;
 	}

+	/*
+	 * Skip the CPL lookup, which isn't free on Intel, if the result will
+	 * be the same regardless of the CPL.
+	 */
+	if (select_os == select_user)
+		return select_os;
+
 	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ?
 			select_os : select_user;
 }
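
Aside: the shape of the early-out, as a standalone sketch (not kernel
code; the CPL query stand-in is hypothetical): when a counter is enabled
for both or neither of USER and KERNEL, the answer is independent of the
CPL, so the VMREAD-backed lookup can be skipped entirely.

    #include <stdbool.h>

    static bool cpl_is_kernel(void)
    {
        return true; /* placeholder for static_call(kvm_x86_get_cpl) == 0 */
    }

    static bool cpl_matches(bool select_os, bool select_user)
    {
        /* Same answer regardless of CPL: skip the costly lookup. */
        if (select_os == select_user)
            return select_os;

        return cpl_is_kernel() ? select_os : select_user;
    }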