From patchwork Mon Sep 5 12:39:41 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12966084
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] KVM: x86/svm/pmu: Limit the maximum number of supported GP counters
Date: Mon, 5 Sep 2022 20:39:41 +0800
Message-Id: <20220905123946.95223-2-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org
From: Like Xu

The AMD PerfMonV2 specification allows for a maximum of 16 GP counters,
more than the current KVM code can support without additional effort.
Introduce a local macro (named like INTEL_PMC_MAX_GENERIC) to take back
control of this virt capability, which also makes it easier to statically
partition all available counters between the host and guests.

Signed-off-by: Like Xu
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/pmu.h     | 2 ++
 arch/x86/kvm/svm/pmu.c | 7 ++++---
 arch/x86/kvm/x86.c     | 2 ++
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 847e7112a5d3..e3a3813b6a38 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -18,6 +18,8 @@
 #define VMWARE_BACKDOOR_PMC_REAL_TIME		0x10001
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME	0x10002
 
+#define KVM_AMD_PMC_MAX_GENERIC	AMD64_NUM_COUNTERS_CORE
+
 struct kvm_event_hw_type_mapping {
 	u8 eventsel;
 	u8 unit_mask;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 2ec420b85d6a..f99f2c869664 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -192,9 +192,10 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	BUILD_BUG_ON(AMD64_NUM_COUNTERS_CORE > INTEL_PMC_MAX_GENERIC);
+	BUILD_BUG_ON(AMD64_NUM_COUNTERS_CORE > KVM_AMD_PMC_MAX_GENERIC);
+	BUILD_BUG_ON(KVM_AMD_PMC_MAX_GENERIC > INTEL_PMC_MAX_GENERIC);
 
-	for (i = 0; i < AMD64_NUM_COUNTERS_CORE ; i++) {
+	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC ; i++) {
 		pmu->gp_counters[i].type = KVM_PMC_GP;
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
@@ -207,7 +208,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS_CORE; i++) {
+	for (i = 0; i < KVM_AMD_PMC_MAX_GENERIC; i++) {
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 43a6a7efc6ec..b9738efd8425 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1444,12 +1444,14 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
 
+	/* This part of MSRs should match KVM_AMD_PMC_MAX_GENERIC. */
 	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
 	MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
 	MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
 	MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
 	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
 	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
+
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
 };
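As background for the BUILD_BUG_ON() pairing above: the guard turns a
mismatch between the vendor counter limit and the generic gp_counters[]
bound into a build failure instead of a silent out-of-bounds write. Below
is a minimal standalone sketch of the same technique, with illustrative
stand-in values rather than the kernel's real definitions:

#include <stdio.h>

/* Illustrative stand-ins for the kernel macros (assumed values). */
#define AMD64_NUM_COUNTERS_CORE 6	/* legacy AMD core PMCs */
#define KVM_AMD_PMC_MAX_GENERIC AMD64_NUM_COUNTERS_CORE
#define INTEL_PMC_MAX_GENERIC   8	/* bound of the generic counter array */

/* Kernel-style compile-time assertion: a negative array size
 * makes compilation fail whenever the condition is true. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

int main(void)
{
	/* If the AMD limit ever grows past the generic array bound,
	 * the build breaks here instead of corrupting memory at run time. */
	BUILD_BUG_ON(AMD64_NUM_COUNTERS_CORE > KVM_AMD_PMC_MAX_GENERIC);
	BUILD_BUG_ON(KVM_AMD_PMC_MAX_GENERIC > INTEL_PMC_MAX_GENERIC);

	printf("GP counter partition: %d of %d generic slots used\n",
	       KVM_AMD_PMC_MAX_GENERIC, INTEL_PMC_MAX_GENERIC);
	return 0;
}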
Subject: [PATCH 2/4] KVM: x86/pmu: Make part of the Intel v2 PMU MSRs handling x86 generic
Date: Mon, 5 Sep 2022 20:39:42 +0800
Message-Id: <20220905123946.95223-3-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

AMD PerfMonV2 defines three registers that mirror part of the Intel v2
PMU registers, namely the GLOBAL_CTRL, GLOBAL_STATUS and GLOBAL_OVF_CTRL
MSRs. For better code reuse, extract this part of the handling so that it
is generic for x86. The new non-prefixed pmc_is_enabled() works correctly
because the legacy AMD vPMU version is indexed as 1. Note that the
vendor-specific *_is_valid_msr callbacks continue to be used to avoid
cross-vendor MSR access.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 -
 arch/x86/kvm/pmu.c                     | 55 +++++++++++++++++++++---
 arch/x86/kvm/pmu.h                     | 30 ++++++++++++-
 arch/x86/kvm/svm/pmu.c                 |  9 ----
 arch/x86/kvm/vmx/pmu_intel.c           | 58 +-------------------------
 5 files changed, 80 insertions(+), 73 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index c17e3e96fc1d..6c98f4bb4228 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -13,7 +13,6 @@ BUILD_BUG_ON(1)
  * at the call sites.
  */
 KVM_X86_PMU_OP(hw_event_available)
-KVM_X86_PMU_OP(pmc_is_enabled)
 KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
 KVM_X86_PMU_OP(msr_idx_to_pmc)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3c42df3a55ff..7002e1b74108 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -83,11 +83,6 @@ void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
 #undef __KVM_X86_PMU_OP
 }
 
-static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	return static_call(kvm_x86_pmu_pmc_is_enabled)(pmc);
-}
-
 static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 {
 	struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work);
@@ -455,11 +450,61 @@ static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr)
 
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u32 msr = msr_info->index;
+
+	switch (msr) {
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+		msr_info->data = pmu->global_status;
+		return 0;
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+		msr_info->data = pmu->global_ctrl;
+		return 0;
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		msr_info->data = 0;
+		return 0;
+	default:
+		break;
+	}
+
 	return static_call(kvm_x86_pmu_get_msr)(vcpu, msr_info);
 }
 
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	u32 msr = msr_info->index;
+	u64 data = msr_info->data;
+	u64 diff;
+
+	switch (msr) {
+	case MSR_CORE_PERF_GLOBAL_STATUS:
+		if (msr_info->host_initiated) {
+			pmu->global_status = data;
+			return 0;
+		}
+		break; /* RO MSR */
+	case MSR_CORE_PERF_GLOBAL_CTRL:
+		if (pmu->global_ctrl == data)
+			return 0;
+		if (kvm_valid_perf_global_ctrl(pmu, data)) {
+			diff = pmu->global_ctrl ^ data;
+			pmu->global_ctrl = data;
+			reprogram_counters(pmu, diff);
+			return 0;
+		}
+		break;
+	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+		if (!(data & pmu->global_ovf_ctrl_mask)) {
+			if (!msr_info->host_initiated)
+				pmu->global_status &= ~data;
+			return 0;
+		}
+		break;
+	default:
+		break;
+	}
+
 	kvm_pmu_mark_pmc_in_use(vcpu, msr_info->index);
 	return static_call(kvm_x86_pmu_set_msr)(vcpu, msr_info);
 }
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index e3a3813b6a38..3f9823b503fb 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -28,7 +28,6 @@ struct kvm_event_hw_type_mapping {
 
 struct kvm_pmu_ops {
 	bool (*hw_event_available)(struct kvm_pmc *pmc);
-	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
 		unsigned int idx, u64 *mask);
@@ -191,6 +190,35 @@ static inline void kvm_pmu_request_counter_reprogam(struct kvm_pmc *pmc)
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
 
+/*
+ * Check if a PMC is enabled by comparing it against global_ctrl bits.
+ *
+ * If the current version of vPMU doesn't have global_ctrl MSR,
+ * all vPMCs are enabled (return TRUE).
+ */
+static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	if (pmu->version < 2)
+		return true;
+
+	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
+}
+
+static inline void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
+{
+	int bit;
+
+	if (!diff)
+		return;
+
+	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
+		__set_bit(bit, pmu->reprogram_pmi);
+
+	kvm_make_request(KVM_REQ_PMU, pmu_to_vcpu(pmu));
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index f99f2c869664..3a20972e9f1a 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -76,14 +76,6 @@ static bool amd_hw_event_available(struct kvm_pmc *pmc)
 	return true;
 }
 
-/* check if a PMC is enabled by comparing it against global_ctrl bits. Because
- * AMD CPU doesn't have global_ctrl MSR, all PMCs are enabled (return TRUE).
- */
-static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	return true;
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -218,7 +210,6 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.hw_event_available = amd_hw_event_available,
-	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1d3d0bd3e0e7..cfc6de706bf4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,18 +68,6 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }
 
-static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
-{
-	int bit;
-	struct kvm_pmc *pmc;
-
-	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
-		pmc = intel_pmc_idx_to_pmc(pmu, bit);
-		if (pmc)
-			kvm_pmu_request_counter_reprogam(pmc);
-	}
-}
-
 static bool intel_hw_event_available(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -102,17 +90,6 @@ static bool intel_hw_event_available(struct kvm_pmc *pmc)
 	return true;
 }
 
-/* check if a PMC is enabled by comparing it with globl_ctrl bits. */
-static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
-{
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (!intel_pmu_has_perf_global_ctrl(pmu))
-		return true;
-
-	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
-}
-
 static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -347,15 +324,6 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
 		msr_info->data = pmu->fixed_ctr_ctrl;
 		return 0;
-	case MSR_CORE_PERF_GLOBAL_STATUS:
-		msr_info->data = pmu->global_status;
-		return 0;
-	case MSR_CORE_PERF_GLOBAL_CTRL:
-		msr_info->data = pmu->global_ctrl;
-		return 0;
-	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-		msr_info->data = 0;
-		return 0;
 	case MSR_IA32_PEBS_ENABLE:
 		msr_info->data = pmu->pebs_enable;
 		return 0;
@@ -404,29 +372,6 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 0;
 		}
 		break;
-	case MSR_CORE_PERF_GLOBAL_STATUS:
-		if (msr_info->host_initiated) {
-			pmu->global_status = data;
-			return 0;
-		}
-		break; /* RO MSR */
-	case MSR_CORE_PERF_GLOBAL_CTRL:
-		if (pmu->global_ctrl == data)
-			return 0;
-		if (kvm_valid_perf_global_ctrl(pmu, data)) {
-			diff = pmu->global_ctrl ^ data;
-			pmu->global_ctrl = data;
-			reprogram_counters(pmu, diff);
-			return 0;
-		}
-		break;
-	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-		if (!(data & pmu->global_ovf_ctrl_mask)) {
-			if (!msr_info->host_initiated)
-				pmu->global_status &= ~data;
-			return 0;
-		}
-		break;
 	case MSR_IA32_PEBS_ENABLE:
 		if (pmu->pebs_enable == data)
 			return 0;
@@ -783,7 +728,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 
 		if (!pmc || !pmc_speculative_in_use(pmc) ||
-		    !intel_pmc_is_enabled(pmc) || !pmc->perf_event)
+		    !pmc_is_enabled(pmc) || !pmc->perf_event)
 			continue;
 
 		hw_idx = pmc->perf_event->hw.idx;
@@ -795,7 +740,6 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.hw_event_available = intel_hw_event_available,
-	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
 	.msr_idx_to_pmc = intel_msr_idx_to_pmc,
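To make the refactored GLOBAL_CTRL write path easier to follow, here is a
self-contained sketch (illustrative values, not kernel code) of the
XOR-diff trick used above: XOR-ing the old and new control values yields
exactly the set of counters whose enable bit changed, so untouched
counters are never reprogrammed.

#include <stdint.h>
#include <stdio.h>

#define X86_PMC_IDX_MAX 64

/* Stand-in for reprogram_counters(): act only on changed bits. */
static void reprogram_counters(uint64_t diff)
{
	for (int bit = 0; bit < X86_PMC_IDX_MAX; bit++)
		if (diff & (1ull << bit))
			printf("reprogram PMC %d\n", bit);
}

int main(void)
{
	uint64_t global_ctrl = 0x3;	/* PMC0 and PMC1 currently enabled */
	uint64_t data = 0x6;		/* guest write: enable PMC1 and PMC2 */
	uint64_t diff = global_ctrl ^ data;	/* changed bits: PMC0, PMC2 */

	global_ctrl = data;
	reprogram_counters(diff);	/* PMC1 stays untouched */
	return 0;
}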
From patchwork Mon Sep 5 12:39:43 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12966081
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] KVM: x86/svm/pmu: Add AMD PerfMonV2 support
Date: Mon, 5 Sep 2022 20:39:43 +0800
Message-Id: <20220905123946.95223-4-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

If AMD Performance Monitoring Version 2 (PerfMonV2) is detected by the
guest, it can use a new scheme to manage the Core PMCs via the new global
control and status registers. In addition to benefiting from PerfMonV2
in the same way as the host (higher precision), the guest can also reduce
the number of VM-exits by lowering the total number of MSR accesses.

In terms of implementation details, amd_is_valid_msr() is resurrected
since the three newly added MSRs could not be mapped to a single vPMC.
Emulating PerfMonV2 on hosts that lack it has also been ruled out for
reasons of precision.
Co-developed-by: Sandipan Das
Signed-off-by: Sandipan Das
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c     |  6 +++++
 arch/x86/kvm/svm/pmu.c | 50 +++++++++++++++++++++++++++++++++---------
 arch/x86/kvm/x86.c     | 11 ++++++++++
 3 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 7002e1b74108..56b4f898a246 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -455,12 +455,15 @@ int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr) {
 	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
 		msr_info->data = pmu->global_status;
 		return 0;
 	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
 		msr_info->data = pmu->global_ctrl;
 		return 0;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		msr_info->data = 0;
 		return 0;
 	default:
@@ -479,12 +482,14 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr) {
 	case MSR_CORE_PERF_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
 		if (msr_info->host_initiated) {
 			pmu->global_status = data;
 			return 0;
 		}
 		break; /* RO MSR */
 	case MSR_CORE_PERF_GLOBAL_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
 		if (pmu->global_ctrl == data)
 			return 0;
 		if (kvm_valid_perf_global_ctrl(pmu, data)) {
@@ -495,6 +500,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		}
 		break;
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (!(data & pmu->global_ovf_ctrl_mask)) {
 			if (!msr_info->host_initiated)
 				pmu->global_status &= ~data;
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 3a20972e9f1a..4c7d408e3caa 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -92,12 +92,6 @@ static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
 }
 
-static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
-{
-	/* All MSRs refer to exactly one PMC, so msr_idx_to_pmc is enough. */
-	return false;
-}
-
 static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -109,6 +103,29 @@ static struct kvm_pmc *amd_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
 	return pmc;
 }
 
+static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	switch (msr) {
+	case MSR_K7_EVNTSEL0 ... MSR_K7_PERFCTR3:
+		return pmu->version > 0;
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+		return guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE);
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
+		return pmu->version > 1;
+	default:
+		if (msr > MSR_F15H_PERF_CTR5 &&
+		    msr < MSR_F15H_PERF_CTL0 + 2 * KVM_AMD_PMC_MAX_GENERIC)
+			return pmu->version > 1;
+		break;
+	}
+
+	return amd_msr_idx_to_pmc(vcpu, msr);
+}
+
 static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -162,20 +179,31 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_cpuid_entry2 *entry;
+	union cpuid_0x80000022_ebx ebx;
 
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
+	pmu->version = 1;
+	entry = kvm_find_cpuid_entry_index(vcpu, 0x80000022, 0);
+	if (kvm_pmu_cap.version > 1 && entry && (entry->eax & BIT(0))) {
+		pmu->version = 2;
+		ebx.full = entry->ebx;
+		pmu->nr_arch_gp_counters = min3((unsigned int)ebx.split.num_core_pmc,
+						(unsigned int)kvm_pmu_cap.num_counters_gp,
+						(unsigned int)KVM_AMD_PMC_MAX_GENERIC);
+		pmu->global_ctrl_mask = ~((1ull << pmu->nr_arch_gp_counters) - 1);
+		pmu->global_ovf_ctrl_mask = pmu->global_ctrl_mask;
+	} else if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS_CORE;
-	else
+	} else {
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+	}
 
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
 	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
-	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->nr_arch_fixed_counters = 0;
-	pmu->global_status = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
 }
 
@@ -206,6 +234,8 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 		pmc_stop_counter(pmc);
 		pmc->counter = pmc->prev_counter = pmc->eventsel = 0;
 	}
+
+	pmu->global_ctrl = pmu->global_status = 0;
 }
 
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9738efd8425..96bb01c5eab8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1424,6 +1424,11 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_CORE_PERF_FIXED_CTR_CTRL,
 	MSR_CORE_PERF_GLOBAL_STATUS, MSR_CORE_PERF_GLOBAL_CTRL,
 	MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+
+	MSR_AMD64_PERF_CNTR_GLOBAL_CTL,
+	MSR_AMD64_PERF_CNTR_GLOBAL_STATUS,
+	MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR,
+
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
 	MSR_ARCH_PERFMON_PERFCTR0 + 2, MSR_ARCH_PERFMON_PERFCTR0 + 3,
 	MSR_ARCH_PERFMON_PERFCTR0 + 4, MSR_ARCH_PERFMON_PERFCTR0 + 5,
@@ -3856,6 +3861,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (kvm_pmu_is_valid_msr(vcpu, msr))
 			return kvm_pmu_set_msr(vcpu, msr_info);
 		/*
@@ -3959,6 +3967,9 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_CTL:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
+	case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
 		if (kvm_pmu_is_valid_msr(vcpu, msr_info->index))
 			return kvm_pmu_get_msr(vcpu, msr_info);
 		/*
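From the guest side, the detection path added above can be exercised with
an ordinary CPUID query. A small userspace probe, assuming the EAX/EBX
field layout described in the commit message (EAX bit 0 = PerfMonV2,
EBX bits 3:0 = number of core PMCs):

#include <stdio.h>
#include <cpuid.h>	/* GCC/Clang helper for the CPUID instruction */

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0x80000022 (ExtPerfMonAndDbg), subleaf 0; the helper
	 * returns 0 if the leaf is beyond the supported extended range. */
	if (!__get_cpuid_count(0x80000022, 0, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x80000022 not available");
		return 1;
	}

	printf("PerfMonV2:  %s\n", (eax & 1) ? "yes" : "no");
	printf("NumCorePmc: %u\n", ebx & 0xf);
	return 0;
}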
From patchwork Mon Sep 5 12:39:44 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12966086
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Sandipan Das, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] KVM: x86/cpuid: Add AMD CPUID ExtPerfMonAndDbg leaf 0x80000022
Date: Mon, 5 Sep 2022 20:39:44 +0800
Message-Id: <20220905123946.95223-5-likexu@tencent.com>
In-Reply-To: <20220905123946.95223-1-likexu@tencent.com>
References: <20220905123946.95223-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Sandipan Das

CPUID leaf 0x80000022, i.e. ExtPerfMonAndDbg, advertises some new
performance monitoring features for AMD processors. Bit 0 of EAX
indicates support for Performance Monitoring Version 2 (PerfMonV2)
features. If found to be set during PMU initialization, the EBX bits of
the same CPUID function can be used to determine the number of available
PMCs for different PMU types. Expose the relevant bits via
KVM_GET_SUPPORTED_CPUID so that guests can make use of the PerfMonV2
features.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Sandipan Das
---
 arch/x86/include/asm/perf_event.h |  8 ++++++++
 arch/x86/kvm/cpuid.c              | 21 ++++++++++++++++++++-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index f6fc8dd51ef4..c848f504e467 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -214,6 +214,14 @@ union cpuid_0x80000022_ebx {
 	unsigned int full;
 };
 
+union cpuid_0x80000022_eax {
+	struct {
+		/* Performance Monitoring Version 2 Supported */
+		unsigned int perfmon_v2:1;
+	} split;
+	unsigned int full;
+};
+
 struct x86_pmu_capability {
 	int version;
 	int num_counters_gp;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 75dcf7a72605..08a29ab096d2 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -1094,7 +1094,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->edx = 0;
 		break;
 	case 0x80000000:
-		entry->eax = min(entry->eax, 0x80000021);
+		entry->eax = min(entry->eax, 0x80000022);
 		/*
 		 * Serializing LFENCE is reported in a multitude of ways, and
 		 * NullSegClearsBase is not reported in CPUID on Zen2; help
@@ -1203,6 +1203,25 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		if (!static_cpu_has_bug(X86_BUG_NULL_SEG))
 			entry->eax |= BIT(6);
 		break;
+	/* AMD Extended Performance Monitoring and Debug */
+	case 0x80000022: {
+		union cpuid_0x80000022_eax eax;
+		union cpuid_0x80000022_ebx ebx;
+
+		entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+		if (!enable_pmu)
+			break;
+
+		if (kvm_pmu_cap.version > 1) {
+			/* AMD PerfMon is only supported up to V2 in the KVM. */
+			eax.split.perfmon_v2 = 1;
+			ebx.split.num_core_pmc = min(kvm_pmu_cap.num_counters_gp,
+						     KVM_AMD_PMC_MAX_GENERIC);
+		}
+		entry->eax = eax.full;
+		entry->ebx = ebx.full;
+		break;
+	}
 	/*Add support for Centaur's CPUID instruction*/
 	case 0xC0000000:
 		/*Just support up to 0xC0000004 now*/
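As a usage note, the unions above let __do_cpuid_func() compose the
register values field by field. Below is a standalone sketch of the same
composition, using local, abbreviated copies of the unions and made-up
limits; note the explicit zero-initialization, which a caller must
guarantee before partially setting bitfields:

#include <stdio.h>

/* Local, abbreviated copies of the unions added by this patch. */
union cpuid_0x80000022_eax {
	struct {
		unsigned int perfmon_v2:1;	/* PerfMonV2 supported */
	} split;
	unsigned int full;
};

union cpuid_0x80000022_ebx {
	struct {
		unsigned int num_core_pmc:4;	/* number of core PMCs */
	} split;
	unsigned int full;
};

int main(void)
{
	union cpuid_0x80000022_eax eax = { .full = 0 };
	union cpuid_0x80000022_ebx ebx = { .full = 0 };
	unsigned int host_counters = 6, kvm_limit = 6;	/* made-up limits */

	/* Advertise PerfMonV2 and clamp the PMC count, as the
	 * 0x80000022 case in __do_cpuid_func() does. */
	eax.split.perfmon_v2 = 1;
	ebx.split.num_core_pmc = host_counters < kvm_limit ?
				 host_counters : kvm_limit;

	printf("EAX=0x%08x EBX=0x%08x\n", eax.full, ebx.full);
	return 0;
}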