From patchwork Wed May 12 08:44:42 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12253079
From: Like Xu
To: Paolo Bonzini, peterz@infradead.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, weijiang.yang@intel.com, eranian@google.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Like Xu, Andi Kleen, Alexander Shishkin
Subject: [PATCH v3 1/5] KVM: x86/pmu: Add pebs_vmx support for ATOM_TREMONT
Date: Wed, 12 May 2021 16:44:42 +0800
Message-Id: <20210512084446.342526-2-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512084446.342526-1-like.xu@linux.intel.com>
References: <20210512084446.342526-1-like.xu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

The ATOM_TREMONT platform also supports the EPT-Friendly PEBS capability,
so guest PEBS can be safely enabled on it as well.

Per the Intel SDM, the PDIR counter on non-Ice Lake platforms is always GP
counter 1.

Cc: Peter Zijlstra
Cc: Andi Kleen
Cc: Alexander Shishkin
Signed-off-by: Like Xu
---
 arch/x86/events/intel/core.c | 1 +
 arch/x86/kvm/pmu.c           | 5 ++---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e7bbd9aab175..4404987bbc57 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5826,6 +5826,7 @@ __init int intel_pmu_init(void)
         case INTEL_FAM6_ATOM_TREMONT_D:
         case INTEL_FAM6_ATOM_TREMONT:
         case INTEL_FAM6_ATOM_TREMONT_L:
+                x86_pmu.pebs_vmx = 1;
                 x86_pmu.late_ack = true;
                 memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
                        sizeof(hw_cache_event_ids));
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 4798bf991b60..8c700a7930c4 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -151,9 +151,8 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
          * the accuracy of the PEBS profiling result, because the "event IP"
          * in the PEBS record is calibrated on the guest side.
          */
-        attr.precise_ip = 1;
-        if (x86_match_cpu(vmx_icl_pebs_cpu) && pmc->idx == 32)
-                attr.precise_ip = 3;
+        attr.precise_ip = x86_match_cpu(vmx_icl_pebs_cpu) ?
+                ((pmc->idx == 32) ? 3 : 1) : ((pmc->idx == 1) ? 3 : 1);
         }
 
         event = perf_event_create_kernel_counter(&attr, -1, current,
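For reference, a stand-alone sketch of the precise_ip selection rule that the
kvm/pmu.c hunk above encodes (the helper below is illustrative and not part of
the patch): on the Ice Lake-like VMX PEBS parts matched by vmx_icl_pebs_cpu,
the PDIR-capable counter is fixed counter 0 (pmc->idx == 32), while on earlier
platforms it is GP counter 1, and only that one counter is asked for the
maximum precision level.

        #include <stdbool.h>

        #define INTEL_PMC_IDX_FIXED     32      /* index of the first fixed counter */

        /* Illustrative only: pick the perf precise_ip level for a guest counter. */
        static int guest_precise_ip(bool icl_pebs_cpu, int pmc_idx)
        {
                if (icl_pebs_cpu)
                        /* PDIR lives on fixed counter 0 (idx 32) on these parts */
                        return pmc_idx == INTEL_PMC_IDX_FIXED ? 3 : 1;
                /* elsewhere the PDIR counter is GP counter 1 */
                return pmc_idx == 1 ? 3 : 1;
        }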
From patchwork Wed May 12 08:44:43 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12253083
From: Like Xu
To: Paolo Bonzini, peterz@infradead.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, weijiang.yang@intel.com, eranian@google.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Luwei Kang
Subject: [PATCH v3 2/5] KVM: x86/pmu: Add the base address parameter for get_fixed_pmc()
Date: Wed, 12 May 2021 16:44:43 +0800
Message-Id: <20210512084446.342526-3-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512084446.342526-1-like.xu@linux.intel.com>
References: <20210512084446.342526-1-like.xu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Luwei Kang

Introduce a new base MSR address pass-in parameter for get_fixed_pmc() so
that the caller can define its own base for fixed counters (such as
MSR_RELOAD_FIXED_CTRx, which is used by the PEBS-via-PT feature). Refactor
the existing callers to pass the current value, MSR_CORE_PERF_FIXED_CTR0.

Signed-off-by: Luwei Kang
---
checkpatch.pl yells "ERROR: do not use assignment in if condition" here;
should I fix the original coding style?

 arch/x86/kvm/pmu.h           |  5 ++---
 arch/x86/kvm/vmx/pmu_intel.c | 17 +++++++++++------
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 832cf56e6924..6720881b8370 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -126,10 +126,9 @@ static inline struct kvm_pmc *get_gp_pmc(struct kvm_pmu *pmu, u32 msr,
 }
 
 /* returns fixed PMC with the specified MSR */
-static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu, u32 msr)
+static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu,
+                                            u32 msr, u32 base)
 {
-        int base = MSR_CORE_PERF_FIXED_CTR0;
-
         if (msr >= base && msr < base + pmu->nr_arch_fixed_counters) {
                 u32 index = array_index_nospec(msr - base,
                                                pmu->nr_arch_fixed_counters);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index a706d3597720..c10cb3008bf1 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -44,7 +44,8 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
                 u8 old_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, i);
                 struct kvm_pmc *pmc;
 
-                pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
+                pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i,
+                                    MSR_CORE_PERF_FIXED_CTR0);
 
                 if (old_ctrl == new_ctrl)
                         continue;
@@ -114,7 +115,8 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
         else {
                 u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
 
-                return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
+                return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0,
+                                     MSR_CORE_PERF_FIXED_CTR0);
         }
 }
 
@@ -222,7 +224,8 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
         default:
                 ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
                         get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
-                        get_fixed_pmc(pmu, msr) || get_fw_gp_pmc(pmu, msr) ||
+                        get_fixed_pmc(pmu, msr, MSR_CORE_PERF_FIXED_CTR0) ||
+                        get_fw_gp_pmc(pmu, msr) ||
                         intel_pmu_is_valid_lbr_msr(vcpu, msr);
                 break;
         }
@@ -235,7 +238,7 @@ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
         struct kvm_pmc *pmc;
 
-        pmc = get_fixed_pmc(pmu, msr);
+        pmc = get_fixed_pmc(pmu, msr, MSR_CORE_PERF_FIXED_CTR0);
         pmc = pmc ? pmc : get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0);
         pmc = pmc ? pmc : get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0);
 
@@ -382,7 +385,8 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
                 msr_info->data = val & pmu->counter_bitmask[KVM_PMC_GP];
                 return 0;
-        } else if ((pmc = get_fixed_pmc(pmu, msr))) {
+        } else if ((pmc = get_fixed_pmc(pmu, msr,
+                                        MSR_CORE_PERF_FIXED_CTR0))) {
                 u64 val = pmc_read_counter(pmc);
 
                 msr_info->data = val & pmu->counter_bitmask[KVM_PMC_FIXED];
@@ -470,7 +474,8 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                         perf_event_period(pmc->perf_event,
                                           get_sample_period(pmc, data));
                 return 0;
-        } else if ((pmc = get_fixed_pmc(pmu, msr))) {
+        } else if ((pmc = get_fixed_pmc(pmu, msr,
+                                        MSR_CORE_PERF_FIXED_CTR0))) {
                 pmc->counter += data - pmc_read_counter(pmc);
                 if (pmc->perf_event)
                         perf_event_period(pmc->perf_event,
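To make the new calling convention concrete, here is a stand-alone model
(illustrative, not code taken from the patch) of the address math the reworked
helper performs: whichever base is passed in, the fixed-counter index is simply
msr - base, bounds-checked against the number of architectural fixed counters.
The MSR numbers below follow the definitions in the kernel's msr-index.h.

        #include <stdio.h>

        #define MSR_CORE_PERF_FIXED_CTR0        0x309   /* fixed counter value MSRs */
        #define MSR_RELOAD_FIXED_CTR0           0x1309  /* PEBS-via-PT reload MSRs */

        /* Illustrative model of get_fixed_pmc()'s range check and index math. */
        static int fixed_pmc_index(unsigned int msr, unsigned int base,
                                   unsigned int nr_fixed)
        {
                if (msr >= base && msr < base + nr_fixed)
                        return msr - base;
                return -1;      /* not a fixed-counter MSR for this base */
        }

        int main(void)
        {
                /* MSR_CORE_PERF_FIXED_CTR1 maps to fixed counter 1 ... */
                printf("%d\n", fixed_pmc_index(MSR_CORE_PERF_FIXED_CTR0 + 1,
                                               MSR_CORE_PERF_FIXED_CTR0, 3));
                /* ... and so does its reload MSR when the reload base is passed. */
                printf("%d\n", fixed_pmc_index(MSR_RELOAD_FIXED_CTR0 + 1,
                                               MSR_RELOAD_FIXED_CTR0, 3));
                return 0;
        }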
From patchwork Wed May 12 08:44:44 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12253085
From: Like Xu
To: Paolo Bonzini, peterz@infradead.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, weijiang.yang@intel.com, eranian@google.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Like Xu, Luwei Kang
Subject: [PATCH v3 3/5] KVM: x86/pmu: Add counter reload MSR emulation for all counters
Date: Wed, 12 May 2021 16:44:44 +0800
Message-Id: <20210512084446.342526-4-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512084446.342526-1-like.xu@linux.intel.com>
References: <20210512084446.342526-1-like.xu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

The Intel PEBS-via-PT feature introduces a new output mechanism that
directs PEBS records to the PT buffer. After each PEBS record is
generated, the hardware automatically reloads the counter value from a
new set of "reload value" MSRs (MSR_RELOAD_FIXED_CTRx and MSR_RELOAD_PMCx)
instead of from the counter reload values stored in the DS management
area. If the guest's perf_capabilities advertises this capability, PEBS
records are directed to the PT buffer when the relevant bits in
pebs_enable are set.

Co-developed-by: Luwei Kang
Signed-off-by: Luwei Kang
Signed-off-by: Like Xu
---
 arch/x86/events/perf_event.h     |  5 -----
 arch/x86/include/asm/kvm_host.h  |  1 +
 arch/x86/include/asm/msr-index.h |  6 ++++++
 arch/x86/kvm/pmu.h               |  8 ++++++++
 arch/x86/kvm/vmx/pmu_intel.c     | 18 ++++++++++++++++++
 5 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 685a1a4e9438..4171f1328732 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -115,11 +115,6 @@ struct amd_nb {
 };
 
 #define PEBS_COUNTER_MASK       ((1ULL << MAX_PEBS_EVENTS) - 1)
-#define PEBS_PMI_AFTER_EACH_RECORD BIT_ULL(60)
-#define PEBS_OUTPUT_OFFSET      61
-#define PEBS_OUTPUT_MASK        (3ull << PEBS_OUTPUT_OFFSET)
-#define PEBS_OUTPUT_PT          (1ull << PEBS_OUTPUT_OFFSET)
-#define PEBS_VIA_PT_MASK        (PEBS_OUTPUT_PT | PEBS_PMI_AFTER_EACH_RECORD)
 
 /*
  * Flags PEBS can handle without an PMI.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 15bff609fd57..29d2d8027014 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -443,6 +443,7 @@ struct kvm_pmc {
         u8 idx;
         u64 counter;
         u64 eventsel;
+        u64 reload_counter;
         struct perf_event *perf_event;
         struct kvm_vcpu *vcpu;
         /*
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1ab3f280f3a9..364c40ecd963 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -187,12 +187,18 @@
 #define MSR_IA32_PERF_CAPABILITIES      0x00000345
 #define PERF_CAP_METRICS_IDX            15
 #define PERF_CAP_PT_IDX                 16
+#define PEBS_PMI_AFTER_EACH_RECORD BIT_ULL(60)
+#define PEBS_OUTPUT_OFFSET      61
+#define PEBS_OUTPUT_MASK        (3ull << PEBS_OUTPUT_OFFSET)
+#define PEBS_OUTPUT_PT          (1ull << PEBS_OUTPUT_OFFSET)
+#define PEBS_VIA_PT_MASK        (PEBS_OUTPUT_PT | PEBS_PMI_AFTER_EACH_RECORD)
 
 #define MSR_PEBS_LD_LAT_THRESHOLD       0x000003f6
 #define PERF_CAP_PEBS_TRAP              BIT_ULL(6)
 #define PERF_CAP_ARCH_REG               BIT_ULL(7)
 #define PERF_CAP_PEBS_FORMAT            0xf00
 #define PERF_CAP_PEBS_BASELINE          BIT_ULL(14)
+#define PERF_CAP_PEBS_OUTPUT_PT         BIT_ULL(16)
 #define PERF_CAP_PEBS_MASK      (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
                                  PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE)
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 6720881b8370..f9895a7a59bc 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -115,6 +115,10 @@ static inline bool kvm_valid_perf_global_ctrl(struct kvm_pmu *pmu,
 static inline struct kvm_pmc *get_gp_pmc(struct kvm_pmu *pmu, u32 msr,
                                          u32 base)
 {
+        if ((msr == MSR_RELOAD_PMC0 || msr == MSR_RELOAD_FIXED_CTR0) &&
+            !(pmu_to_vcpu(pmu)->arch.perf_capabilities & PERF_CAP_PEBS_OUTPUT_PT))
+                return NULL;
+
         if (msr >= base && msr < base + pmu->nr_arch_gp_counters) {
                 u32 index = array_index_nospec(msr - base,
                                                pmu->nr_arch_gp_counters);
@@ -129,6 +133,10 @@ static inline struct kvm_pmc *get_gp_pmc(struct kvm_pmu *pmu, u32 msr,
 static inline struct kvm_pmc *get_fixed_pmc(struct kvm_pmu *pmu,
                                             u32 msr, u32 base)
 {
+        if ((msr == MSR_RELOAD_PMC0 || msr == MSR_RELOAD_FIXED_CTR0) &&
+            !(pmu_to_vcpu(pmu)->arch.perf_capabilities & PERF_CAP_PEBS_OUTPUT_PT))
+                return NULL;
+
         if (msr >= base && msr < base + pmu->nr_arch_fixed_counters) {
                 u32 index = array_index_nospec(msr - base,
                                                pmu->nr_arch_fixed_counters);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c10cb3008bf1..e5c12c958cdb 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -226,6 +226,8 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
                         get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
                         get_fixed_pmc(pmu, msr, MSR_CORE_PERF_FIXED_CTR0) ||
                         get_fw_gp_pmc(pmu, msr) ||
+                        get_gp_pmc(pmu, msr, MSR_RELOAD_PMC0) ||
+                        get_fixed_pmc(pmu, msr, MSR_RELOAD_FIXED_CTR0) ||
                         intel_pmu_is_valid_lbr_msr(vcpu, msr);
                 break;
         }
@@ -241,6 +243,8 @@ static struct kvm_pmc *intel_msr_idx_to_pmc(struct kvm_vcpu *vcpu, u32 msr)
         pmc = get_fixed_pmc(pmu, msr, MSR_CORE_PERF_FIXED_CTR0);
         pmc = pmc ? pmc : get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0);
         pmc = pmc ? pmc : get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0);
+        pmc = pmc ? pmc : get_gp_pmc(pmu, msr, MSR_RELOAD_PMC0);
+        pmc = pmc ? pmc : get_fixed_pmc(pmu, msr, MSR_RELOAD_FIXED_CTR0);
 
         return pmc;
 }
@@ -394,6 +398,10 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
         } else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
                 msr_info->data = pmc->eventsel;
                 return 0;
+        } else if ((pmc = get_gp_pmc(pmu, msr, MSR_RELOAD_PMC0)) ||
+                   (pmc = get_fixed_pmc(pmu, msr, MSR_RELOAD_FIXED_CTR0))) {
+                msr_info->data = pmc->reload_counter;
+                return 0;
         } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, true))
                 return 0;
         }
@@ -488,6 +496,12 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                         reprogram_gp_counter(pmc, data);
                         return 0;
                 }
+        } else if ((pmc = get_gp_pmc(pmu, msr, MSR_RELOAD_PMC0)) ||
+                   (pmc = get_fixed_pmc(pmu, msr, MSR_RELOAD_FIXED_CTR0))) {
+                if (!(data & ~pmc_bitmask(pmc))) {
+                        pmc->reload_counter = data;
+                        return 0;
+                }
         } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false))
                 return 0;
         }
@@ -595,6 +609,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
                         pmu->pebs_enable_mask =
                                 ~((1ull << pmu->nr_arch_gp_counters) - 1);
                 }
+                if (vcpu->arch.perf_capabilities & PERF_CAP_PEBS_OUTPUT_PT)
+                        pmu->pebs_enable_mask &= ~PEBS_VIA_PT_MASK;
         } else {
                 vcpu->arch.ia32_misc_enable_msr |= MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
                 vcpu->arch.perf_capabilities &= ~PERF_CAP_PEBS_MASK;
@@ -612,6 +628,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
                 pmu->gp_counters[i].vcpu = vcpu;
                 pmu->gp_counters[i].idx = i;
                 pmu->gp_counters[i].current_config = 0;
+                pmu->gp_counters[i].reload_counter = 0;
         }
 
         for (i = 0; i < INTEL_PMC_MAX_FIXED; i++) {
@@ -619,6 +636,7 @@
                 pmu->fixed_counters[i].vcpu = vcpu;
                 pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
                 pmu->fixed_counters[i].current_config = 0;
+                pmu->fixed_counters[i].reload_counter = 0;
         }
 
         vcpu->arch.perf_capabilities = vmx_get_perf_capabilities();
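As a quick recap of the bit layout this patch relies on, here is a
stand-alone check (illustrative, not a function from the patch) of when a
guest's PEBS records may be routed to the PT buffer: PERF_CAPABILITIES must
advertise PEBS output to PT, and the guest must select PT output in
IA32_PEBS_ENABLE. The bit definitions mirror the msr-index.h hunk above.

        #include <stdbool.h>
        #include <stdint.h>

        #define PERF_CAP_PEBS_OUTPUT_PT         (1ULL << 16)
        #define PEBS_PMI_AFTER_EACH_RECORD      (1ULL << 60)
        #define PEBS_OUTPUT_OFFSET              61
        #define PEBS_OUTPUT_PT                  (1ULL << PEBS_OUTPUT_OFFSET)

        /* Illustrative: does this guest configuration send PEBS records to PT? */
        static bool pebs_routed_to_pt(uint64_t perf_capabilities, uint64_t pebs_enable)
        {
                return (perf_capabilities & PERF_CAP_PEBS_OUTPUT_PT) &&
                       (pebs_enable & PEBS_OUTPUT_PT);
        }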
From patchwork Wed May 12 08:44:45 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12253087
From: Like Xu
To: Paolo Bonzini, peterz@infradead.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, weijiang.yang@intel.com, eranian@google.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v3 4/5] KVM: x86/pmu: Add counter reload registers to the MSR-load list
Date: Wed, 12 May 2021 16:44:45 +0800
Message-Id: <20210512084446.342526-5-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512084446.342526-1-like.xu@linux.intel.com>
References: <20210512084446.342526-1-like.xu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

The guest's counter reload registers need to be loaded into the real
hardware before VM-entry. Following the existing guest PT implementation,
add those counter reload registers to the MSR-load list when the
corresponding PEBS counters are enabled, so that the optimization from
clear_atomic_switch_msr() can be reused. To support this,
MAX_NR_LOADSTORE_MSRS has to grow from 8 to 16, because when all counters
are enabled, up to 7 or 8 counter reload registers may need to be added to
the MSR-load list.

Cc: Peter Zijlstra
Signed-off-by: Like Xu
---
 arch/x86/events/intel/core.c | 27 +++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h       |  2 +-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4404987bbc57..bd6d9e2a64d9 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3903,6 +3903,8 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
         u64 intel_ctrl = hybrid(cpuc->pmu, intel_ctrl);
         u64 pebs_mask = (x86_pmu.flags & PMU_FL_PEBS_ALL) ?
                 cpuc->pebs_enabled : (cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+        u64 guest_pebs_enable, base, idx, host_reload_ctr;
+        unsigned long bit;
 
         *nr = 0;
         arr[(*nr)++] = (struct perf_guest_switch_msr){
@@ -3964,7 +3966,32 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
                 arr[0].guest |= arr[*nr].guest;
         }
 
+        guest_pebs_enable = arr[*nr].guest;
         ++(*nr);
+
+        if (!x86_pmu.intel_cap.pebs_output_pt_available ||
+            !(guest_pebs_enable & PEBS_OUTPUT_PT))
+                return arr;
+
+        for_each_set_bit(bit, (unsigned long *)&guest_pebs_enable,
+                         X86_PMC_IDX_MAX) {
+                base = (bit < INTEL_PMC_IDX_FIXED) ?
+                        MSR_RELOAD_PMC0 : MSR_RELOAD_FIXED_CTR0;
+                idx = (bit < INTEL_PMC_IDX_FIXED) ?
+                        bit : (bit - INTEL_PMC_IDX_FIXED);
+
+                /* It's good when the pebs counters are not cross-mapped. */
+                rdmsrl(base, host_reload_ctr);
+
+                arr[(*nr)++] = (struct perf_guest_switch_msr){
+                        .msr = base,
+                        .host = host_reload_ctr,
+                        .guest = (bit < INTEL_PMC_IDX_FIXED) ?
+                                pmu->gp_counters[bit].reload_counter :
+                                pmu->fixed_counters[bit].reload_counter,
+                };
+        }
+
         return arr;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 3afdcebb0a11..25aa1cc3cc6a 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -28,7 +28,7 @@ extern const u32 vmx_msr_index[];
 #define MAX_NR_USER_RETURN_MSRS 4
 #endif
 
-#define MAX_NR_LOADSTORE_MSRS   8
+#define MAX_NR_LOADSTORE_MSRS   16
 
 struct vmx_msrs {
         unsigned int            nr;
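To restate the mapping the new loop builds, here is a stand-alone sketch (not
code from the patch) of how one enabled-counter bit in the guest's PEBS_ENABLE
value is split into a reload-MSR base and a counter index: bits below
INTEL_PMC_IDX_FIXED select MSR_RELOAD_PMCx for GP counters, and the remaining
bits select MSR_RELOAD_FIXED_CTRx for fixed counters. One perf_guest_switch_msr
entry is then filled with the host value read back via rdmsrl() and the guest's
reload_counter value. The MSR numbers follow the kernel's msr-index.h.

        #include <stdint.h>
        #include <stdio.h>

        #define INTEL_PMC_IDX_FIXED     32
        #define MSR_RELOAD_PMC0         0x14c1
        #define MSR_RELOAD_FIXED_CTR0   0x1309

        struct reload_loc {
                uint32_t base;          /* MSR_RELOAD_PMC0 or MSR_RELOAD_FIXED_CTR0 */
                unsigned int idx;       /* counter index relative to that base */
        };

        /* Illustrative model of the base/idx computation in the loop above. */
        static struct reload_loc reload_loc_for_bit(unsigned int bit)
        {
                struct reload_loc loc;

                loc.base = bit < INTEL_PMC_IDX_FIXED ?
                        MSR_RELOAD_PMC0 : MSR_RELOAD_FIXED_CTR0;
                loc.idx = bit < INTEL_PMC_IDX_FIXED ?
                        bit : bit - INTEL_PMC_IDX_FIXED;
                return loc;
        }

        int main(void)
        {
                struct reload_loc gp = reload_loc_for_bit(2);   /* GP counter 2 */
                struct reload_loc fx = reload_loc_for_bit(33);  /* fixed counter 1 */

                printf("gp: base 0x%x idx %u\n", gp.base, gp.idx);
                printf("fixed: base 0x%x idx %u\n", fx.base, fx.idx);
                return 0;
        }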
From patchwork Wed May 12 08:44:46 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12253089
From: Like Xu
To: Paolo Bonzini, peterz@infradead.org
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, weijiang.yang@intel.com, eranian@google.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Like Xu, Alexander Shishkin, Andi Kleen
Subject: [PATCH v3 5/5] KVM: x86/pmu: Expose PEBS-via-PT in the KVM supported capabilities
Date: Wed, 12 May 2021 16:44:46 +0800
Message-Id: <20210512084446.342526-6-like.xu@linux.intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210512084446.342526-1-like.xu@linux.intel.com>
References: <20210512084446.342526-1-like.xu@linux.intel.com>
X-Mailing-List: kvm@vger.kernel.org

Hypervisor userspace can enable or disable PEBS-via-PT via the MSR-based
feature MSR_IA32_PERF_CAPABILITIES [bit 16]. If the guest also has basic
PT support, it can direct its PEBS records to the PT buffer.

Cc: Alexander Shishkin
Cc: Andi Kleen
Signed-off-by: Like Xu
---
 arch/x86/kvm/vmx/capabilities.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index fd8c9822db9e..e04b50174dd5 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -398,8 +398,11 @@ static inline u64 vmx_get_perf_capabilities(void)
 
         perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
 
-        if (vmx_pebs_supported())
+        if (vmx_pebs_supported()) {
                 perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
+                if (vmx_pt_mode_is_host_guest())
+                        perf_cap |= host_perf_cap & PERF_CAP_PEBS_OUTPUT_PT;
+        }
 
         return perf_cap;
 }
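On the consumer side, a minimal sketch (not part of the patch) of what a VMM
would do with this: after reading the MSR-based feature value for
MSR_IA32_PERF_CAPABILITIES from KVM, it tests bit 16 before deciding whether
to expose PEBS-via-PT to the guest.

        #include <stdbool.h>
        #include <stdint.h>

        #define MSR_IA32_PERF_CAPABILITIES      0x345
        #define PERF_CAP_PEBS_OUTPUT_PT         (1ULL << 16)

        /*
         * Illustrative userspace-side check: 'perf_capabilities' is the value
         * the VMM read for MSR_IA32_PERF_CAPABILITIES via KVM's MSR-based
         * feature interface.
         */
        static bool kvm_reports_pebs_via_pt(uint64_t perf_capabilities)
        {
                return perf_capabilities & PERF_CAP_PEBS_OUTPUT_PT;
        }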