From patchwork Wed Sep 27 03:31:12 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399890
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Peter Zijlstra, Arnaldo Carvalho de Melo, Kan Liang, Like Xu, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Yang Weijiang, Dapeng Mi
Subject: [Patch v4 01/13] KVM: x86/pmu: Add Intel CPUID-hinted TopDown slots event
Date: Wed, 27 Sep 2023 11:31:12 +0800
Message-Id: <20230927033124.1226509-2-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>

Add support for the architectural TopDown slots event, which is hinted
by CPUID.0AH.EBX. The TopDown slots event counts the total number of
available slots for an unhalted logical processor. Software can use this
event as the denominator for the top-level metrics of the TopDown
Microarchitecture Analysis (TMA) method.

Although the PERF_METRICS MSR required for TopDown events is not yet
available in the guest, relying only on the data provided by the slots
event is already sufficient for PMU users to perceive differences in CPU
pipeline machine width across microarchitectures.

The standalone slots event, like the instructions event, can be counted
with a GP counter or with fixed counter 3 (if present); its availability
is likewise enumerated by CPUID.0AH.EBX. On Linux, perf users can encode
"-e cpu/event=0xa4,umask=0x01/" or "-e cpu/slots/" to count slots
events.

This patch only enables the slots event on GP counters; enabling it on
fixed counter 3 is added by subsequent patches.
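For illustration only (not part of this patch), a minimal user-space
counter for this event could be opened via the raw encoding named above.
This is a sketch that assumes a host PMU which enumerates the slots
event:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct perf_event_attr attr;
    long long count;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_RAW;
    attr.config = 0x01a4;       /* umask=0x01, event=0xa4: TopDown slots */

    fd = syscall(__NR_perf_event_open, &attr, 0 /* this task */,
                 -1 /* any cpu */, -1 /* no group */, 0);
    if (fd < 0)
        return 1;

    /* ... run the workload to be measured ... */

    read(fd, &count, sizeof(count));
    printf("slots: %lld\n", count);
    close(fd);
    return 0;
}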
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index f2efa0bf7ae8..7322f0c18565 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -34,6 +34,7 @@ enum intel_pmu_architectural_events {
     INTEL_ARCH_LLC_MISSES,
     INTEL_ARCH_BRANCHES_RETIRED,
     INTEL_ARCH_BRANCHES_MISPREDICTED,
+    INTEL_ARCH_TOPDOWN_SLOTS,

     NR_REAL_INTEL_ARCH_EVENTS,

@@ -58,6 +59,7 @@ static struct {
     [INTEL_ARCH_LLC_MISSES]             = { 0x2e, 0x41 },
     [INTEL_ARCH_BRANCHES_RETIRED]       = { 0xc4, 0x00 },
     [INTEL_ARCH_BRANCHES_MISPREDICTED]  = { 0xc5, 0x00 },
+    [INTEL_ARCH_TOPDOWN_SLOTS]          = { 0xa4, 0x01 },
     [PSEUDO_ARCH_REFERENCE_CYCLES]      = { 0x00, 0x03 },
 };

From patchwork Wed Sep 27 03:31:13 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399869
From: Dapeng Mi
Subject: [Patch v4 02/13] KVM: x86/pmu: Support PMU fixed counter 3
Date: Wed, 27 Sep 2023 11:31:13 +0800
Message-Id: <20230927033124.1226509-3-dapeng1.mi@linux.intel.com>

The TopDown slots event can be enabled on a GP counter or on fixed
counter 3, and it does not differ from the other fixed counters in its
use of counting and sampling modes (apart from the hardware logic that
accumulates the event). Following commit 6017608936c1 ("perf/x86/intel:
Add Icelake support"), KVM or any other perf in-kernel user needs to
reprogram fixed counter 3 via the kernel-defined TopDown slots event in
order to use the real fixed counter 3 on the host.
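To make concrete what the guest gains here, this is a sketch of how a
guest kernel could drive fixed counter 3 once it is exposed. It is not
code from this series; rdmsr()/wrmsr() are assumed MSR helpers, and the
field layout follows the SDM (IA32_FIXED_CTR_CTRL carries 4 control bits
per fixed counter, and fixed counter N is enabled at bit 32+N of
IA32_PERF_GLOBAL_CTRL):

#define MSR_CORE_PERF_FIXED_CTR3        0x30c
#define MSR_CORE_PERF_FIXED_CTR_CTRL    0x38d
#define MSR_CORE_PERF_GLOBAL_CTRL       0x38f

static void enable_fixed_ctr3_sketch(void)
{
    u64 ctrl = rdmsr(MSR_CORE_PERF_FIXED_CTR_CTRL);

    wrmsr(MSR_CORE_PERF_FIXED_CTR3, 0);     /* count from zero */
    ctrl |= 0x3ULL << 12;                   /* ctr3 field is bits [15:12]: count in ring 0 and ring 3 */
    wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl);
    wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
          rdmsr(MSR_CORE_PERF_GLOBAL_CTRL) | (1ULL << 35)); /* fixed ctr3 = bit 35 */
}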
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    | 10 ++++++++++
 arch/x86/kvm/x86.c              |  4 ++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 17715cb8731d..90ecd3f7a9c3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -509,7 +509,7 @@ struct kvm_pmc {
 #define KVM_INTEL_PMC_MAX_GENERIC   8
 #define MSR_ARCH_PERFMON_PERFCTR_MAX    (MSR_ARCH_PERFMON_PERFCTR0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
 #define MSR_ARCH_PERFMON_EVENTSEL_MAX   (MSR_ARCH_PERFMON_EVENTSEL0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
-#define KVM_PMC_MAX_FIXED   3
+#define KVM_PMC_MAX_FIXED   4
 #define MSR_ARCH_PERFMON_FIXED_CTR_MAX  (MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
 #define KVM_AMD_PMC_MAX_GENERIC 6
 struct kvm_pmu {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 7322f0c18565..044d61aa63dc 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -45,6 +45,14 @@ enum intel_pmu_architectural_events {
      * core crystal clock or the bus clock (yeah, "architectural").
      */
     PSEUDO_ARCH_REFERENCE_CYCLES = NR_REAL_INTEL_ARCH_EVENTS,
+    /*
+     * Pseudo-architectural event used to implement IA32_FIXED_CTR3, a.k.a.
+     * TopDown slots. The topdown slots event counts the total number of
+     * available slots for an unhalted logical processor. The topdown slots
+     * event together with the PERF_METRICS MSR provides support for the
+     * topdown micro-architecture analysis method.
+     */
+    PSEUDO_ARCH_TOPDOWN_SLOTS,

     NR_INTEL_ARCH_EVENTS,
 };

@@ -61,6 +69,7 @@ static struct {
     [INTEL_ARCH_BRANCHES_MISPREDICTED]  = { 0xc5, 0x00 },
     [INTEL_ARCH_TOPDOWN_SLOTS]          = { 0xa4, 0x01 },
     [PSEUDO_ARCH_REFERENCE_CYCLES]      = { 0x00, 0x03 },
+    [PSEUDO_ARCH_TOPDOWN_SLOTS]         = { 0x00, 0x04 },
 };

 /* mapping between fixed pmc index and intel_arch_events array */
@@ -68,6 +77,7 @@ static int fixed_pmc_events[] = {
     [0] = INTEL_ARCH_INSTRUCTIONS_RETIRED,
     [1] = INTEL_ARCH_CPU_CYCLES,
     [2] = PSEUDO_ARCH_REFERENCE_CYCLES,
+    [3] = PSEUDO_ARCH_TOPDOWN_SLOTS,
 };

 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9f18b06bbda6..906af36850fb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1468,7 +1468,7 @@ static const u32 msrs_to_save_base[] = {
 static const u32 msrs_to_save_pmu[] = {
     MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-    MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
+    MSR_ARCH_PERFMON_FIXED_CTR2, MSR_ARCH_PERFMON_FIXED_CTR3,
     MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
     MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
     MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
@@ -7196,7 +7196,7 @@ static void kvm_init_msr_lists(void)
 {
     unsigned i;

-    BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 3,
+    BUILD_BUG_ON_MSG(KVM_PMC_MAX_FIXED != 4,
              "Please update the fixed PMCs in msrs_to_save_pmu[]");

     num_msrs_to_save = 0;

From patchwork Wed Sep 27 03:31:14 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399889
From: Dapeng Mi
Subject: [Patch v4 03/13] perf/core: Add function perf_event_group_leader_check()
Date: Wed, 27 Sep 2023 11:31:14 +0800
Message-Id: <20230927033124.1226509-4-dapeng1.mi@linux.intel.com>

Extract the group-leader checking code from sys_perf_event_open() into a
new function, perf_event_group_leader_check(). A subsequent change
extends perf_event_create_kernel_counter() so that group events can also
be created from kernel space, and that path needs to perform the same
checks on the group leader event that sys_perf_event_open() does.
Extracting the checks into a separate function avoids duplicating the
code.
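For context, this is the user-space view of the grouping that these
checks police (illustration only, not part of this patch): the sibling
passes the leader's fd as group_fd, and the extracted checks enforce
that both events share the same CPU, context, and clock.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

int open_group_sketch(void)
{
    struct perf_event_attr cycles = {
        .size = sizeof(cycles),
        .type = PERF_TYPE_HARDWARE,
        .config = PERF_COUNT_HW_CPU_CYCLES,
        .disabled = 1,              /* leader gates the whole group */
    };
    struct perf_event_attr insns = {
        .size = sizeof(insns),
        .type = PERF_TYPE_HARDWARE,
        .config = PERF_COUNT_HW_INSTRUCTIONS,
    };
    int leader, sibling;

    leader = syscall(__NR_perf_event_open, &cycles, 0, -1, -1, 0);
    if (leader < 0)
        return -1;
    /* group_fd = leader: both events schedule on/off the PMU together */
    sibling = syscall(__NR_perf_event_open, &insns, 0, -1, leader, 0);
    return sibling < 0 ? -1 : leader;
}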
Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 143 +++++++++++++++++++++++--------------------
 1 file changed, 78 insertions(+), 65 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4c72a41f11af..d485dac2b55f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12313,6 +12313,81 @@ perf_check_permission(struct perf_event_attr *attr, struct task_struct *task)
     return is_capable || ptrace_may_access(task, ptrace_mode);
 }

+static int perf_event_group_leader_check(struct perf_event *group_leader,
+                                         struct perf_event *event,
+                                         struct perf_event_attr *attr,
+                                         struct perf_event_context *ctx,
+                                         struct pmu **pmu,
+                                         int *move_group)
+{
+    if (!group_leader)
+        return 0;
+
+    /*
+     * Do not allow a recursive hierarchy (this new sibling
+     * becoming part of another group-sibling):
+     */
+    if (group_leader->group_leader != group_leader)
+        return -EINVAL;
+
+    /* All events in a group should have the same clock */
+    if (group_leader->clock != event->clock)
+        return -EINVAL;
+
+    /*
+     * Make sure we're both events for the same CPU;
+     * grouping events for different CPUs is broken; since
+     * you can never concurrently schedule them anyhow.
+     */
+    if (group_leader->cpu != event->cpu)
+        return -EINVAL;
+
+    /*
+     * Make sure we're both on the same context; either task or cpu.
+     */
+    if (group_leader->ctx != ctx)
+        return -EINVAL;
+
+    /*
+     * Only a group leader can be exclusive or pinned
+     */
+    if (attr->exclusive || attr->pinned)
+        return -EINVAL;
+
+    if (is_software_event(event) &&
+        !in_software_context(group_leader)) {
+        /*
+         * If the event is a sw event, but the group_leader
+         * is on hw context.
+         *
+         * Allow the addition of software events to hw
+         * groups, this is safe because software events
+         * never fail to schedule.
+         *
+         * Note the comment that goes with struct
+         * perf_event_pmu_context.
+         */
+        *pmu = group_leader->pmu_ctx->pmu;
+    } else if (!is_software_event(event)) {
+        if (is_software_event(group_leader) &&
+            (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
+            /*
+             * In case the group is a pure software group, and we
+             * try to add a hardware event, move the whole group to
+             * the hardware context.
+             */
+            *move_group = 1;
+        }
+
+        /* Don't allow group of multiple hw events from different pmus */
+        if (!in_software_context(group_leader) &&
+            group_leader->pmu_ctx->pmu != *pmu)
+            return -EINVAL;
+    }
+
+    return 0;
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12507,71 +12582,9 @@ SYSCALL_DEFINE5(perf_event_open,
         }
     }

-    if (group_leader) {
-        err = -EINVAL;
-
-        /*
-         * Do not allow a recursive hierarchy (this new sibling
-         * becoming part of another group-sibling):
-         */
-        if (group_leader->group_leader != group_leader)
-            goto err_locked;
-
-        /* All events in a group should have the same clock */
-        if (group_leader->clock != event->clock)
-            goto err_locked;
-
-        /*
-         * Make sure we're both events for the same CPU;
-         * grouping events for different CPUs is broken; since
-         * you can never concurrently schedule them anyhow.
-         */
-        if (group_leader->cpu != event->cpu)
-            goto err_locked;
-
-        /*
-         * Make sure we're both on the same context; either task or cpu.
-         */
-        if (group_leader->ctx != ctx)
-            goto err_locked;
-
-        /*
-         * Only a group leader can be exclusive or pinned
-         */
-        if (attr.exclusive || attr.pinned)
-            goto err_locked;
-
-        if (is_software_event(event) &&
-            !in_software_context(group_leader)) {
-            /*
-             * If the event is a sw event, but the group_leader
-             * is on hw context.
-             *
-             * Allow the addition of software events to hw
-             * groups, this is safe because software events
-             * never fail to schedule.
-             *
-             * Note the comment that goes with struct
-             * perf_event_pmu_context.
-             */
-            pmu = group_leader->pmu_ctx->pmu;
-        } else if (!is_software_event(event)) {
-            if (is_software_event(group_leader) &&
-                (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) {
-                /*
-                 * In case the group is a pure software group, and we
-                 * try to add a hardware event, move the whole group to
-                 * the hardware context.
-                 */
-                move_group = 1;
-            }
-
-            /* Don't allow group of multiple hw events from different pmus */
-            if (!in_software_context(group_leader) &&
-                group_leader->pmu_ctx->pmu != pmu)
-                goto err_locked;
-        }
-    }
+    err = perf_event_group_leader_check(group_leader, event, &attr, ctx, &pmu, &move_group);
+    if (err)
+        goto err_locked;

     /*
      * Now that we're certain of the pmu; find the pmu_ctx.
From patchwork Wed Sep 27 03:31:15 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399894
From: Dapeng Mi
Subject: [Patch v4 04/13] perf/core: Add function perf_event_move_group()
Date: Wed, 27 Sep 2023 11:31:15 +0800
Message-Id: <20230927033124.1226509-5-dapeng1.mi@linux.intel.com>

Extract the group-moving code from sys_perf_event_open() into a new
function, perf_event_move_group(). A subsequent change extends
perf_event_create_kernel_counter() so that group events can also be
created from kernel space, and that path needs to perform the same group
move for the group leader event that sys_perf_event_open() does.
Extracting the moving code into a separate function avoids duplicating
it.
Signed-off-by: Dapeng Mi
---
 kernel/events/core.c | 82 ++++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 37 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index d485dac2b55f..953e3d3a1664 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12388,6 +12388,48 @@ static int perf_event_group_leader_check(struct perf_event *group_leader,
     return 0;
 }

+static void perf_event_move_group(struct perf_event *group_leader,
+                                  struct perf_event_pmu_context *pmu_ctx,
+                                  struct perf_event_context *ctx)
+{
+    struct perf_event *sibling;
+
+    perf_remove_from_context(group_leader, 0);
+    put_pmu_ctx(group_leader->pmu_ctx);
+
+    for_each_sibling_event(sibling, group_leader) {
+        perf_remove_from_context(sibling, 0);
+        put_pmu_ctx(sibling->pmu_ctx);
+    }
+
+    /*
+     * Install the group siblings before the group leader.
+     *
+     * Because a group leader will try and install the entire group
+     * (through the sibling list, which is still in-tact), we can
+     * end up with siblings installed in the wrong context.
+     *
+     * By installing siblings first we NO-OP because they're not
+     * reachable through the group lists.
+     */
+    for_each_sibling_event(sibling, group_leader) {
+        sibling->pmu_ctx = pmu_ctx;
+        get_pmu_ctx(pmu_ctx);
+        perf_event__state_init(sibling);
+        perf_install_in_context(ctx, sibling, sibling->cpu);
+    }
+
+    /*
+     * Removing from the context ends up with disabled
+     * event. What we want here is event in the initial
+     * startup state, ready to be add into new context.
+     */
+    group_leader->pmu_ctx = pmu_ctx;
+    get_pmu_ctx(pmu_ctx);
+    perf_event__state_init(group_leader);
+    perf_install_in_context(ctx, group_leader, group_leader->cpu);
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -12403,7 +12445,7 @@ SYSCALL_DEFINE5(perf_event_open,
 {
     struct perf_event *group_leader = NULL, *output_event = NULL;
     struct perf_event_pmu_context *pmu_ctx;
-    struct perf_event *event, *sibling;
+    struct perf_event *event;
     struct perf_event_attr attr;
     struct perf_event_context *ctx;
     struct file *event_file = NULL;
@@ -12635,42 +12677,8 @@
      * where we start modifying current state.
      */

-    if (move_group) {
-        perf_remove_from_context(group_leader, 0);
-        put_pmu_ctx(group_leader->pmu_ctx);
-
-        for_each_sibling_event(sibling, group_leader) {
-            perf_remove_from_context(sibling, 0);
-            put_pmu_ctx(sibling->pmu_ctx);
-        }
-
-        /*
-         * Install the group siblings before the group leader.
-         *
-         * Because a group leader will try and install the entire group
-         * (through the sibling list, which is still in-tact), we can
-         * end up with siblings installed in the wrong context.
-         *
-         * By installing siblings first we NO-OP because they're not
-         * reachable through the group lists.
-         */
-        for_each_sibling_event(sibling, group_leader) {
-            sibling->pmu_ctx = pmu_ctx;
-            get_pmu_ctx(pmu_ctx);
-            perf_event__state_init(sibling);
-            perf_install_in_context(ctx, sibling, sibling->cpu);
-        }
-
-        /*
-         * Removing from the context ends up with disabled
-         * event. What we want here is event in the initial
-         * startup state, ready to be add into new context.
-         */
-        group_leader->pmu_ctx = pmu_ctx;
-        get_pmu_ctx(pmu_ctx);
-        perf_event__state_init(group_leader);
-        perf_install_in_context(ctx, group_leader, group_leader->cpu);
-    }
+    if (move_group)
+        perf_event_move_group(group_leader, pmu_ctx, ctx);

     /*
      * Precalculate sample_data sizes; do while holding ctx::mutex such

From patchwork Wed Sep 27 03:31:16 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399895
From: Dapeng Mi
Subject: [Patch v4 05/13] perf/core: Add *group_leader for perf_event_create_kernel_counter()
Date: Wed, 27 Sep 2023 11:31:16 +0800
Message-Id: <20230927033124.1226509-6-dapeng1.mi@linux.intel.com>

Add a new argument, *group_leader, to perf_event_create_kernel_counter()
so that group events can be created from kernel space just as they can
from user space.

The current perf logic requires that a perf event group be created to
handle topdown metrics profiling. To support the topdown metrics feature
in KVM, kernel space also needs the capability to create group events.
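To make the intended use concrete, here is a sketch of a hypothetical
in-kernel caller of the extended interface (slots_attr, metrics_attr and
the absent overflow callbacks are placeholders, not symbols added by
this series):

    struct perf_event *leader, *sibling;

    /* group leader: created with no group (group_leader == NULL) */
    leader = perf_event_create_kernel_counter(&slots_attr, -1, current,
                                              NULL, NULL, NULL);
    if (IS_ERR(leader))
        return PTR_ERR(leader);

    /* sibling: grouped under the leader, mirroring group_fd in user space */
    sibling = perf_event_create_kernel_counter(&metrics_attr, -1, current,
                                               leader, NULL, NULL);
    if (IS_ERR(sibling)) {
        perf_event_release_kernel(leader);
        return PTR_ERR(sibling);
    }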
Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/arm64/kvm/pmu-emul.c                 |  2 +-
 arch/riscv/kvm/vcpu_pmu.c                 |  2 +-
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c |  4 ++--
 arch/x86/kvm/pmu.c                        |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c              |  4 ++--
 include/linux/perf_event.h                |  1 +
 kernel/events/core.c                      | 17 ++++++++++++++++-
 kernel/events/hw_breakpoint.c             |  4 ++--
 kernel/events/hw_breakpoint_test.c        |  2 +-
 kernel/watchdog_perf.c                    |  2 +-
 10 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6b066e04dc5d..d31dd91dc081 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -631,7 +631,7 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)

     attr.sample_period = compute_period(pmc, kvm_pmu_get_pmc_value(pmc));

-    event = perf_event_create_kernel_counter(&attr, -1, current,
+    event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
                          kvm_pmu_perf_overflow, pmc);

     if (IS_ERR(event)) {
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 86391a5061dd..74075097cdaa 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -247,7 +247,7 @@ static int kvm_pmu_create_perf_event(struct kvm_pmc *pmc, struct perf_event_attr
      */
     attr->sample_period = kvm_pmu_get_sample_period(pmc);

-    event = perf_event_create_kernel_counter(attr, -1, current, NULL, pmc);
+    event = perf_event_create_kernel_counter(attr, -1, current, NULL, NULL, pmc);
     if (IS_ERR(event)) {
         pr_err("kvm pmu event creation failed for eidx %lx: %ld\n", eidx, PTR_ERR(event));
         return PTR_ERR(event);
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 8f559eeae08e..f84b4c8838a6 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -966,12 +966,12 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
     u64 tmp;

     miss_event = perf_event_create_kernel_counter(miss_attr, plr->cpu,
-                              NULL, NULL, NULL);
+                              NULL, NULL, NULL, NULL);
     if (IS_ERR(miss_event))
         goto out;

     hit_event = perf_event_create_kernel_counter(hit_attr, plr->cpu,
-                             NULL, NULL, NULL);
+                             NULL, NULL, NULL, NULL);
     if (IS_ERR(hit_event))
         goto out_miss;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index edb89b51b383..760d293f4a4a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -221,7 +221,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
         attr.precise_ip = pmc_get_pebs_precise_level(pmc);
     }

-    event = perf_event_create_kernel_counter(&attr, -1, current,
+    event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
                          kvm_perf_overflow, pmc);
     if (IS_ERR(event)) {
         pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 044d61aa63dc..9bf80fee34fb 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -302,8 +302,8 @@ int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu)
         return 0;
     }

-    event = perf_event_create_kernel_counter(&attr, -1,
-                         current, NULL, NULL);
+    event = perf_event_create_kernel_counter(&attr, -1, current,
+                         NULL, NULL, NULL);
     if (IS_ERR(event)) {
         pr_debug_ratelimited("%s: failed %ld\n",
                     __func__, PTR_ERR(event));
diff --git
 a/include/linux/perf_event.h b/include/linux/perf_event.h
index e85cd1c0eaf3..04e12a8e6584 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1102,6 +1102,7 @@ extern struct perf_event *
 perf_event_create_kernel_counter(struct perf_event_attr *attr,
                  int cpu,
                  struct task_struct *task,
+                 struct perf_event *group_leader,
                  perf_overflow_handler_t callback,
                  void *context);
 extern void perf_pmu_migrate_context(struct pmu *pmu,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 953e3d3a1664..3cc870d450c5 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12743,12 +12743,14 @@ SYSCALL_DEFINE5(perf_event_open,
  * @attr: attributes of the counter to create
  * @cpu: cpu in which the counter is bound
  * @task: task to profile (NULL for percpu)
+ * @group_leader: the group leader event of the created event
  * @overflow_handler: callback to trigger when we hit the event
  * @context: context data could be used in overflow_handler callback
  */
 struct perf_event *
 perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
                  struct task_struct *task,
+                 struct perf_event *group_leader,
                  perf_overflow_handler_t overflow_handler,
                  void *context)
 {
@@ -12756,6 +12758,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
     struct perf_event_context *ctx;
     struct perf_event *event;
     struct pmu *pmu;
+    int move_group = 0;
     int err;

     /*
@@ -12765,7 +12768,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
     if (attr->aux_output)
         return ERR_PTR(-EINVAL);

-    event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+    if (task && group_leader &&
+        group_leader->attr.inherit != attr->inherit)
+        return ERR_PTR(-EINVAL);
+
+    event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
                  overflow_handler, context, -1);
     if (IS_ERR(event)) {
         err = PTR_ERR(event);
@@ -12795,6 +12802,11 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
         goto err_unlock;
     }

+    err = perf_event_group_leader_check(group_leader, event, attr, ctx,
+                        &pmu, &move_group);
+    if (err)
+        goto err_unlock;
+
     pmu_ctx = find_get_pmu_context(pmu, ctx, event);
     if (IS_ERR(pmu_ctx)) {
         err = PTR_ERR(pmu_ctx);
@@ -12822,6 +12834,9 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
         goto err_pmu_ctx;
     }

+    if (move_group)
+        perf_event_move_group(group_leader, pmu_ctx, ctx);
+
     perf_install_in_context(ctx, event, event->cpu);
     perf_unpin_context(ctx);
     mutex_unlock(&ctx->mutex);
diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index 6c2cb4e4f48d..5e328a684a4d 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -743,7 +743,7 @@ register_user_hw_breakpoint(struct perf_event_attr *attr,
                 void *context,
                 struct task_struct *tsk)
 {
-    return perf_event_create_kernel_counter(attr, -1, tsk, triggered,
+    return perf_event_create_kernel_counter(attr, -1, tsk, NULL, triggered,
                         context);
 }
 EXPORT_SYMBOL_GPL(register_user_hw_breakpoint);
@@ -853,7 +853,7 @@ register_wide_hw_breakpoint(struct perf_event_attr *attr,

     cpus_read_lock();
     for_each_online_cpu(cpu) {
-        bp = perf_event_create_kernel_counter(attr, cpu, NULL,
+        bp = perf_event_create_kernel_counter(attr, cpu, NULL, NULL,
                               triggered, context);
         if (IS_ERR(bp)) {
             err = PTR_ERR(bp);
diff --git a/kernel/events/hw_breakpoint_test.c b/kernel/events/hw_breakpoint_test.c
index 2cfeeecf8de9..694db7645676 100644
--- a/kernel/events/hw_breakpoint_test.c
+++ b/kernel/events/hw_breakpoint_test.c
@@ -39,7 +39,7 @@ static struct perf_event
 *register_test_bp(int cpu, struct task_struct *tsk, int
     attr.bp_addr = (unsigned long)&break_vars[idx];
     attr.bp_len = HW_BREAKPOINT_LEN_1;
     attr.bp_type = HW_BREAKPOINT_RW;
-    return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL);
+    return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL, NULL);
 }

 static void unregister_test_bp(struct perf_event **bp)
diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
index 8ea00c4a24b2..f8a52c4df079 100644
--- a/kernel/watchdog_perf.c
+++ b/kernel/watchdog_perf.c
@@ -120,7 +120,7 @@ static int hardlockup_detector_event_create(void)
     wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);

     /* Try to register using hardware perf events */
-    evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL,
+    evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL, NULL,
                            watchdog_overflow_callback, NULL);
     if (IS_ERR(evt)) {
         pr_debug("Perf event create on CPU %d failed with %ld\n", cpu,
From patchwork Wed Sep 27 03:31:17 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399856
From: Dapeng Mi
Subject: [Patch v4 06/13] perf/x86: Fix typos and inconsistent indents in perf_event header
Date: Wed, 27 Sep 2023 11:31:17 +0800
Message-Id: <20230927033124.1226509-7-dapeng1.mi@linux.intel.com>

There is one typo and there are some inconsistent indents in the
perf_event.h header file. Fix them.

Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/perf_event.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 85a9fd5a3ec3..63e1ce1f4b27 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -386,15 +386,15 @@ static inline bool is_topdown_idx(int idx)
  *
  * With this fake counter assigned, the guest LBR event user (such as KVM),
  * can program the LBR registers on its own, and we don't actually do anything
- * with then in the host context.
+ * with them in the host context.
  */
-#define INTEL_PMC_IDX_FIXED_VLBR (GLOBAL_STATUS_LBRS_FROZEN_BIT)
+#define INTEL_PMC_IDX_FIXED_VLBR    (GLOBAL_STATUS_LBRS_FROZEN_BIT)

 /*
  * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
  * since it would claim bit 58 which is effectively Fixed26.
  */
-#define INTEL_FIXED_VLBR_EVENT 0x1b00
+#define INTEL_FIXED_VLBR_EVENT      0x1b00

 /*
  * Adaptive PEBS v4
d="scan'208";a="864637170" Received: from dmi-pnp-i7.sh.intel.com ([10.239.159.155]) by fmsmga002.fm.intel.com with ESMTP; 26 Sep 2023 20:24:43 -0700 From: Dapeng Mi To: Sean Christopherson , Paolo Bonzini , Peter Zijlstra , Arnaldo Carvalho de Melo , Kan Liang , Like Xu , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Ian Rogers , Adrian Hunter Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang , Zhang Xiong , Lv Zhiyuan , Yang Weijiang , Dapeng Mi , Dapeng Mi Subject: [Patch v4 07/13] perf/x86: Add constraint for guest perf metrics event Date: Wed, 27 Sep 2023 11:31:18 +0800 Message-Id: <20230927033124.1226509-8-dapeng1.mi@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com> References: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org When guest wants to use PERF_METRICS MSR, a virtual metrics event needs to be created in the perf subsystem so that the guest can have exclusive ownership of the PERF_METRICS MSR. We introduce the new vmetrics constraint, so that we can couple this virtual metrics event with slots event as a events group to involves in the host perf system scheduling. Since Guest metric events are always recognized as vCPU process's events on host, they are time-sharing multiplexed with other host metric events, so that we choose bit 48 (INTEL_PMC_IDX_METRIC_BASE) as the index of this virtual metrics event. Co-developed-by: Yang Weijiang Signed-off-by: Yang Weijiang Signed-off-by: Dapeng Mi --- arch/x86/events/intel/core.c | 28 +++++++++++++++++++++------- arch/x86/events/perf_event.h | 1 + arch/x86/include/asm/perf_event.h | 15 +++++++++++++++ 3 files changed, 37 insertions(+), 7 deletions(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index fa355d3658a6..1c349290677c 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -3158,17 +3158,26 @@ intel_bts_constraints(struct perf_event *event) return NULL; } +static struct event_constraint *intel_virt_event_constraints[] __read_mostly = { + &vlbr_constraint, + &vmetrics_constraint, +}; + /* - * Note: matches a fake event, like Fixed2. + * Note: matches a virtual event, like vmetrics. 
Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c      | 28 +++++++++++++++++++++-------
 arch/x86/events/perf_event.h      |  1 +
 arch/x86/include/asm/perf_event.h | 15 +++++++++++++++
 3 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index fa355d3658a6..1c349290677c 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3158,17 +3158,26 @@ intel_bts_constraints(struct perf_event *event)
     return NULL;
 }

+static struct event_constraint *intel_virt_event_constraints[] __read_mostly = {
+    &vlbr_constraint,
+    &vmetrics_constraint,
+};
+
 /*
- * Note: matches a fake event, like Fixed2.
+ * Note: matches a virtual event, like vmetrics.
  */
 static struct event_constraint *
-intel_vlbr_constraints(struct perf_event *event)
+intel_virt_constraints(struct perf_event *event)
 {
-    struct event_constraint *c = &vlbr_constraint;
+    int i;
+    struct event_constraint *c;

-    if (unlikely(constraint_match(c, event->hw.config))) {
-        event->hw.flags |= c->flags;
-        return c;
+    for (i = 0; i < ARRAY_SIZE(intel_virt_event_constraints); i++) {
+        c = intel_virt_event_constraints[i];
+        if (unlikely(constraint_match(c, event->hw.config))) {
+            event->hw.flags |= c->flags;
+            return c;
+        }
     }

     return NULL;
@@ -3368,7 +3377,7 @@ __intel_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 {
     struct event_constraint *c;

-    c = intel_vlbr_constraints(event);
+    c = intel_virt_constraints(event);
     if (c)
         return c;

@@ -5369,6 +5378,11 @@ static struct attribute *spr_tsx_events_attrs[] = {
     NULL,
 };

+struct event_constraint vmetrics_constraint =
+    __EVENT_CONSTRAINT(INTEL_FIXED_VMETRICS_EVENT,
+               (1ULL << INTEL_PMC_IDX_FIXED_VMETRICS),
+               FIXED_EVENT_FLAGS, 1, 0, 0);
+
 static ssize_t freeze_on_smi_show(struct device *cdev,
                   struct device_attribute *attr,
                   char *buf)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index c8ba2be7585d..a0d12989a483 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1482,6 +1482,7 @@ void reserve_lbr_buffers(void);

 extern struct event_constraint bts_constraint;
 extern struct event_constraint vlbr_constraint;
+extern struct event_constraint vmetrics_constraint;

 void intel_pmu_enable_bts(u64 config);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 63e1ce1f4b27..d767807aae91 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -390,6 +390,21 @@ static inline bool is_topdown_idx(int idx)
  */
 #define INTEL_PMC_IDX_FIXED_VLBR    (GLOBAL_STATUS_LBRS_FROZEN_BIT)

+/*
+ * We model guest TopDown metrics event tracing similarly.
+ *
+ * Guest metrics events are recognized as vCPU process events on the host,
+ * and they are time-sharing multiplexed with other host metrics events,
+ * so we choose bit 48 (INTEL_PMC_IDX_METRIC_BASE) as the index of the
+ * virtual metrics event.
+ */
+#define INTEL_PMC_IDX_FIXED_VMETRICS    (INTEL_PMC_IDX_METRIC_BASE)
+
+/*
+ * Pseudo-encoding the guest metrics event as event=0x00,umask=0x11,
+ * since it would claim bit 48 which is effectively Fixed16.
+ */
+#define INTEL_FIXED_VMETRICS_EVENT  0x1100
+
 /*
  * Pseudo-encoding the guest LBR event as event=0x00,umask=0x1b,
  * since it would claim bit 58 which is effectively Fixed26.
From patchwork Wed Sep 27 03:31:19 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399906
From: Dapeng Mi
Subject: [Patch v4 08/13] perf/core: Add new function perf_event_topdown_metrics()
Date: Wed, 27 Sep 2023 11:31:19 +0800
Message-Id: <20230927033124.1226509-9-dapeng1.mi@linux.intel.com>

Add a new function, perf_event_topdown_metrics(). It is quite similar to
perf_event_period(), but it pushes a slots count and raw metrics data
into the perf subsystem instead of a sample period. When the guest
restores the FIXED_CTR3 and PERF_METRICS MSRs during sched-in, KVM needs
to trap the MSR writes and propagate the guest values into the
corresponding perf events, just as perf_event_period() does for sample
periods.

Initially we tried to reuse perf_event_period() to set the slots/metrics
values, but that turned out to be a poor fit. perf_event_period() works
only on sampling events, while the slots event and metrics events in
topdown mode are all non-sampling events, and there are sampling-event
checks and plenty of sampling-period logic throughout the
perf_event_period() call chain. Reusing it would require if-else changes
across the entire chain and even a rename, thoroughly muddling
perf_event_period(). We therefore add a separate function,
perf_event_topdown_metrics(), to set the slots/metrics values, which
keeps both the logic and the code clearer.
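A sketch of the intended KVM-side caller (hypothetical helper, not part
of this patch; the MSR trap plumbing is omitted): on a guest write to
FIXED_CTR3 or PERF_METRICS, both guest values are pushed into the host
slots event, analogous to how perf_event_period() propagates a new
sample period.

static int kvm_set_guest_topdown_sketch(struct perf_event *slots_event,
                                        u64 guest_slots, u64 guest_metrics)
{
    struct td_metrics value = {
        .slots  = guest_slots,
        .metric = guest_metrics,
    };

    return perf_event_topdown_metrics(slots_event, &value);
}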
Signed-off-by: Dapeng Mi
---
 include/linux/perf_event.h | 13 ++++++++
 kernel/events/core.c       | 62 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 04e12a8e6584..10d737aab7fa 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1057,6 +1057,11 @@ perf_cgroup_from_task(struct task_struct *task, struct perf_event_context *ctx)
 }
 #endif /* CONFIG_CGROUP_PERF */

+struct td_metrics {
+    u64 slots;
+    u64 metric;
+};
+
 #ifdef CONFIG_PERF_EVENTS

 extern struct perf_event_context *perf_cpu_task_ctx(void);
@@ -1707,6 +1712,8 @@ extern void perf_event_task_tick(void);
 extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
 extern u64 perf_event_pause(struct perf_event *event, bool reset);
+extern int perf_event_topdown_metrics(struct perf_event *event,
+                      struct td_metrics *value);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1793,6 +1800,12 @@ static inline u64 perf_event_pause(struct perf_event *event, bool reset)
 {
     return 0;
 }
+
+static inline int perf_event_topdown_metrics(struct perf_event *event,
+                         struct td_metrics *value)
+{
+    return 0;
+}
 #endif

 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 3cc870d450c5..500ffcd2c621 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5776,6 +5776,68 @@ int perf_event_period(struct perf_event *event, u64 value)
 }
 EXPORT_SYMBOL_GPL(perf_event_period);

+static void __perf_event_topdown_metrics(struct perf_event *event,
+                     struct perf_cpu_context *cpuctx,
+                     struct perf_event_context *ctx,
+                     void *info)
+{
+    struct td_metrics *td_metrics = (struct td_metrics *)info;
+    bool active;
+
+    active = (event->state == PERF_EVENT_STATE_ACTIVE);
+    if (active) {
+        perf_pmu_disable(event->pmu);
+        /*
+         * We could be throttled; unthrottle now to avoid the tick
+         * trying to unthrottle while we already re-started the event.
+         */
+        if (event->hw.interrupts == MAX_INTERRUPTS) {
+            event->hw.interrupts = 0;
+            perf_log_throttle(event, 1);
+        }
+        event->pmu->stop(event, PERF_EF_UPDATE);
+    }
+
+    event->hw.saved_slots = td_metrics->slots;
+    event->hw.saved_metric = td_metrics->metric;
+
+    if (active) {
+        event->pmu->start(event, PERF_EF_RELOAD);
+        perf_pmu_enable(event->pmu);
+    }
+}
+
+static int _perf_event_topdown_metrics(struct perf_event *event,
+                       struct td_metrics *value)
+{
+    /*
+     * Slots event in topdown metrics scenario
+     * must be non-sampling event.
+     */
+    if (is_sampling_event(event))
+        return -EINVAL;
+
+    if (!value)
+        return -EINVAL;
+
+    event_function_call(event, __perf_event_topdown_metrics, value);
+
+    return 0;
+}
+
+int perf_event_topdown_metrics(struct perf_event *event, struct td_metrics *value)
+{
+    struct perf_event_context *ctx;
+    int ret;
+
+    ctx = perf_event_ctx_lock(event);
+    ret = _perf_event_topdown_metrics(event, value);
+    perf_event_ctx_unlock(event, ctx);
+
+    return ret;
+}
+EXPORT_SYMBOL_GPL(perf_event_topdown_metrics);
+
 static const struct file_operations perf_fops;

 static inline int perf_fget_light(int fd, struct fd *p)
From patchwork Wed Sep 27 03:31:20 2023
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 13399877
From: Dapeng Mi
Subject: [Patch v4 09/13] perf/x86/intel: Handle KVM virtual metrics event in perf system
Date: Wed, 27 Sep 2023 11:31:20 +0800
Message-Id: <20230927033124.1226509-10-dapeng1.mi@linux.intel.com>

KVM creates a virtual metrics event to claim the PERF_METRICS MSR, but
this virtual metrics event cannot be recognized by the perf system,
since it uses a different event code from the known metrics events.
Modify the perf code so that the KVM virtual metrics event is recognized
and processed by the perf system.

The counter of the virtual metrics event does not hold a real count like
other events; instead it is used to store the raw data of the
PERF_METRICS MSR, so that KVM can retrieve the raw PERF_METRICS data
after the virtual metrics event is disabled.
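For illustration, a hypothetical KVM-side reader of that stored raw
value (a sketch, not code from this series). As context, perf treats
each 8-bit PERF_METRICS field as a fraction out of 0xff that is scaled
by the slots count, so handing the raw MSR image back to the guest
preserves those semantics:

static u64 kvm_read_guest_perf_metrics_sketch(struct perf_event *vmetrics_event)
{
    u64 enabled, running;

    /*
     * After this patch, the accumulated "count" of the virtual
     * metrics event is the raw PERF_METRICS image rather than an
     * event count.
     */
    return perf_event_read_value(vmetrics_event, &enabled, &running);
}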

KVM creates a virtual metrics event to claim the PERF_METRICS MSR, but
the perf system can't recognize this virtual metrics event because it
uses a different event code from the known metrics events. Modify the
perf code so that the KVM virtual metrics event can be recognized and
processed by the perf system.

The counter of the virtual metrics event doesn't save a real count
value the way normal events do; instead it is used to store the raw
data of the PERF_METRICS MSR, so that KVM can obtain the raw
PERF_METRICS data after the virtual metrics event is disabled.

Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c | 39 +++++++++++++++++++++++++++---------
 arch/x86/events/perf_event.h |  9 ++++++++-
 2 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 1c349290677c..df56e091eb25 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2546,7 +2546,7 @@ static int icl_set_topdown_event_period(struct perf_event *event)
 		hwc->saved_metric = 0;
 	}
 
-	if ((hwc->saved_slots) && is_slots_event(event)) {
+	if (is_slots_event(event)) {
 		wrmsrl(MSR_CORE_PERF_FIXED_CTR3, hwc->saved_slots);
 		wrmsrl(MSR_PERF_METRICS, hwc->saved_metric);
 	}
@@ -2619,6 +2619,15 @@ static void __icl_update_topdown_event(struct perf_event *event,
 	}
 }
 
+static inline void __icl_update_vmetrics_event(struct perf_event *event, u64 metrics)
+{
+	/*
+	 * For the guest metrics event, the count would be used to save
+	 * the raw data of PERF_METRICS MSR.
+	 */
+	local64_set(&event->count, metrics);
+}
+
 static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
 				      u64 metrics, int metric_end)
 {
@@ -2638,6 +2647,17 @@ static void update_saved_topdown_regs(struct perf_event *event, u64 slots,
 	}
 }
 
+static inline void _intel_update_topdown_event(struct perf_event *event,
+					       u64 slots, u64 metrics,
+					       u64 last_slots, u64 last_metrics)
+{
+	if (is_vmetrics_event(event))
+		__icl_update_vmetrics_event(event, metrics);
+	else
+		__icl_update_topdown_event(event, slots, metrics,
+					   last_slots, last_metrics);
+}
+
 /*
  * Update all active Topdown events.
  *
@@ -2665,9 +2685,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
 		if (!is_topdown_idx(idx))
 			continue;
 		other = cpuc->events[idx];
-		__icl_update_topdown_event(other, slots, metrics,
-					   event ? event->hw.saved_slots : 0,
-					   event ? event->hw.saved_metric : 0);
+		_intel_update_topdown_event(other, slots, metrics,
+					    event ? event->hw.saved_slots : 0,
+					    event ? event->hw.saved_metric : 0);
 	}
 
 	/*
@@ -2675,9 +2695,9 @@ static u64 intel_update_topdown_event(struct perf_event *event, int metric_end)
 	 * in active_mask e.g. x86_pmu_stop()
	 */
 	if (event && !test_bit(event->hw.idx, cpuc->active_mask)) {
-		__icl_update_topdown_event(event, slots, metrics,
-					   event->hw.saved_slots,
-					   event->hw.saved_metric);
+		_intel_update_topdown_event(event, slots, metrics,
+					    event->hw.saved_slots,
+					    event->hw.saved_metric);
 
 		/*
 		 * In x86_pmu_stop(), the event is cleared in active_mask first,
@@ -3858,8 +3878,9 @@ static int core_pmu_hw_config(struct perf_event *event)
 
 static bool is_available_metric_event(struct perf_event *event)
 {
-	return is_metric_event(event) &&
-	       event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX;
+	return (is_metric_event(event) &&
+		event->attr.config <= INTEL_TD_METRIC_AVAILABLE_MAX) ||
+		is_vmetrics_event(event);
 }
 
 static inline bool is_mem_loads_event(struct perf_event *event)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index a0d12989a483..7238b7f871ce 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -105,9 +105,16 @@ static inline bool is_slots_event(struct perf_event *event)
 	return (event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_TD_SLOTS;
 }
 
+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+		INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static inline bool is_topdown_event(struct perf_event *event)
 {
-	return is_metric_event(event) || is_slots_event(event);
+	return is_metric_event(event) || is_slots_event(event) ||
+	       is_vmetrics_event(event);
 }
 
 struct amd_nb {
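For reference, the raw PERF_METRICS value that the vmetrics event
stores in event->count can be decoded in user space roughly as follows.
This is an illustrative sketch, not part of the patch: it assumes the
four level-1 metrics layout (one 8-bit fraction of 0xff per metric)
described in the SDM's "Performance Metrics" section, and the sample
input value is invented.

#include <stdint.h>
#include <stdio.h>

/* Extract the idx-th 8-bit metric field and scale it to a percentage. */
static double td_metric_pct(uint64_t metrics, unsigned int idx)
{
	return (double)((metrics >> (idx * 8)) & 0xff) * 100.0 / 0xff;
}

int main(void)
{
	uint64_t raw = 0x2010a040;	/* made-up raw PERF_METRICS value */

	printf("retiring:        %.1f%%\n", td_metric_pct(raw, 0));
	printf("bad speculation: %.1f%%\n", td_metric_pct(raw, 1));
	printf("frontend bound:  %.1f%%\n", td_metric_pct(raw, 2));
	printf("backend bound:   %.1f%%\n", td_metric_pct(raw, 3));
	return 0;
}

Because KVM returns the raw MSR contents instead of a synthesized
count, the guest performs exactly this decoding itself and loses no
precision.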
From patchwork Wed Sep 27 03:31:21 2023
From: Dapeng Mi
Subject: [Patch v4 10/13] KVM: x86/pmu: Extend pmc_reprogram_counter() to create group events
Date: Wed, 27 Sep 2023 11:31:21 +0800
Message-Id: <20230927033124.1226509-11-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>
References: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>

The current perf code supports the topdown perf metrics feature by
creating an event group in which a slots event acts as the group leader
for multiple metric events. To support topdown metrics in KVM while
minimizing the changes to the perf system, follow this mature mechanism
and create an analogous event group in KVM. The group contains a slots
event, which claims fixed counter 3 and acts as the group leader as the
perf system requires, and a virtual metrics event, which claims the
PERF_METRICS MSR. The perf system schedules this event group as a
whole.

Unfortunately, pmc_reprogram_counter() can only create a single event
per counter, so extend the function so that it is capable of creating
an event group.

Co-developed-by: Like Xu
Signed-off-by: Like Xu
Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h | 11 +++++-
 arch/x86/kvm/pmu.c              | 64 ++++++++++++++++++++++++++-------
 arch/x86/kvm/pmu.h              | 22 ++++++++----
 arch/x86/kvm/svm/pmu.c          |  2 ++
 arch/x86/kvm/vmx/pmu_intel.c    |  4 +++
 5 files changed, 83 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 90ecd3f7a9c3..bf1626b2b553 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -490,12 +490,12 @@ enum pmc_type {
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
+	u8 max_nr_events;
 	bool is_paused;
 	bool intr;
 	u64 counter;
 	u64 prev_counter;
 	u64 eventsel;
-	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
 	/*
 	 * only for creating or reusing perf_event,
 	 * ctrl value for fixed counters.
 	 */
 	u64 current_config;
+	/*
+	 * Non-leader events may need some extra information;
+	 * this field can be used to store it.
+	 */
+	u64 extra_config;
+	union {
+		struct perf_event *perf_event;
+		DECLARE_FLEX_ARRAY(struct perf_event *, perf_events);
+	};
 };
 
 /* More counters may conflict with other existing Architectural MSRs */
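A note on the union above: existing code keeps dereferencing
pmc->perf_event while group-aware code indexes pmc->perf_events[], and
both names alias the same storage, so pmc->perf_event is always
pmc->perf_events[0], the group leader. A standalone sketch of the
aliasing (illustrative only; a fixed-size two-element array stands in
for DECLARE_FLEX_ARRAY here):

#include <assert.h>

struct perf_event;	/* opaque for this sketch */

struct pmc_sketch {
	union {
		struct perf_event *perf_event;
		struct perf_event *perf_events[2];	/* leader + vmetrics */
	};
};

int main(void)
{
	struct pmc_sketch pmc = { .perf_event = (struct perf_event *)0x1 };

	/* The single-event and group views alias each other. */
	assert(pmc.perf_event == pmc.perf_events[0]);
	return 0;
}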
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 760d293f4a4a..b02a56c77647 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -187,7 +187,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool intr)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-	struct perf_event *event;
+	struct perf_event *event, *group_leader;
 	struct perf_event_attr attr = {
 		.type = type,
 		.size = sizeof(attr),
@@ -199,6 +199,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		.config = config,
 	};
 	bool pebs = test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);
+	unsigned int i, j;
 
 	attr.sample_period = get_sample_period(pmc, pmc->counter);
 
@@ -221,36 +222,73 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		attr.precise_ip = pmc_get_pebs_precise_level(pmc);
 	}
 
-	event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
-						 kvm_perf_overflow, pmc);
-	if (IS_ERR(event)) {
-		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
-				     PTR_ERR(event), pmc->idx);
-		return PTR_ERR(event);
+	/*
+	 * To create grouped events, the first created perf_event doesn't
+	 * know it will be the group_leader and may move to an unexpected
+	 * enabling path, thus delay all enablement until after creation,
+	 * not affecting non-grouped events to save one perf interface call.
+	 */
+	if (pmc->max_nr_events > 1)
+		attr.disabled = 1;
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		group_leader = i ? pmc->perf_event : NULL;
+		event = perf_event_create_kernel_counter(&attr, -1,
+							 current, group_leader,
+							 kvm_perf_overflow, pmc);
+		if (IS_ERR(event)) {
+			pr_err_ratelimited("kvm_pmu: event %u of pmc %u creation failed %ld\n",
+					   i, pmc->idx, PTR_ERR(event));
+
+			for (j = 0; j < i; j++) {
+				perf_event_release_kernel(pmc->perf_events[j]);
+				pmc->perf_events[j] = NULL;
+				pmc_to_pmu(pmc)->event_count--;
+			}
+
+			return PTR_ERR(event);
+		}
+
+		pmc->perf_events[i] = event;
+		pmc_to_pmu(pmc)->event_count++;
 	}
 
-	pmc->perf_event = event;
-	pmc_to_pmu(pmc)->event_count++;
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;
+
+	if (!attr.disabled)
+		return 0;
+
+	for (i = 0; i < pmc->max_nr_events && pmc->perf_events[i]; i++)
+		perf_event_enable(pmc->perf_events[i]);
+
 	return 0;
 }
 
 static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
+	unsigned int i;
 
 	if (!pmc->perf_event || pmc->is_paused)
 		return;
 
-	/* update counter, reset event value to avoid redundant accumulation */
+	/*
+	 * Update the counter and reset the event value to avoid redundant
+	 * accumulation. Disable the group leader event first and
+	 * then disable the non-leader events.
+	 */
 	counter += perf_event_pause(pmc->perf_event, true);
+	for (i = 1; i < pmc->max_nr_events && pmc->perf_events[i]; i++)
+		perf_event_pause(pmc->perf_events[i], true);
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }
 static bool pmc_resume_counter(struct kvm_pmc *pmc)
 {
+	unsigned int i;
+
 	if (!pmc->perf_event)
 		return false;
 
@@ -264,8 +302,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	    (!!pmc->perf_event->attr.precise_ip))
 		return false;
 
-	/* reuse perf_event to serve as pmc_reprogram_counter() does*/
-	perf_event_enable(pmc->perf_event);
+	for (i = 0; i < pmc->max_nr_events && pmc->perf_events[i]; i++)
+		perf_event_enable(pmc->perf_events[i]);
 	pmc->is_paused = false;
 
 	return true;
@@ -432,7 +470,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (pmc->current_config == new_config && pmc_resume_counter(pmc))
 		goto reprogram_complete;
 
-	pmc_release_perf_event(pmc);
+	pmc_release_perf_event(pmc, false);
 
 	pmc->current_config = new_config;
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7d9ba301c090..3dc0deb83096 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -74,21 +74,31 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	return counter & pmc_bitmask(pmc);
 }
 
-static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
+static inline void pmc_release_perf_event(struct kvm_pmc *pmc, bool reset)
 {
-	if (pmc->perf_event) {
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-		pmc->current_config = 0;
+	unsigned int i;
+
+	if (!pmc->perf_event)
+		return;
+
+	for (i = 0; i < pmc->max_nr_events && pmc->perf_events[i]; i++) {
+		perf_event_release_kernel(pmc->perf_events[i]);
+		pmc->perf_events[i] = NULL;
 		pmc_to_pmu(pmc)->event_count--;
 	}
+
+	if (reset) {
+		pmc->current_config = 0;
+		pmc->extra_config = 0;
+		pmc->max_nr_events = 1;
+	}
 }
 
 static inline void pmc_stop_counter(struct kvm_pmc *pmc)
 {
 	if (pmc->perf_event) {
 		pmc->counter = pmc_read_counter(pmc);
-		pmc_release_perf_event(pmc);
+		pmc_release_perf_event(pmc, true);
 	}
 }
 
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index cef5a3d0abd0..861ff79ac614 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -230,6 +230,8 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}
 }
 
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9bf80fee34fb..b45396e0a46c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -628,6 +628,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
 		pmu->gp_counters[i].current_config = 0;
+		pmu->gp_counters[i].extra_config = 0;
+		pmu->gp_counters[i].max_nr_events = 1;
 	}
 
 	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
@@ -635,6 +637,8 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 		pmu->fixed_counters[i].vcpu = vcpu;
 		pmu->fixed_counters[i].idx = i + INTEL_PMC_IDX_FIXED;
 		pmu->fixed_counters[i].current_config = 0;
+		pmu->fixed_counters[i].extra_config = 0;
+		pmu->fixed_counters[i].max_nr_events = 1;
 	}
 
 	lbr_desc->records.nr = 0;
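To summarize the new flow, here is a condensed sketch of creating a
two-event slots + vmetrics group in the style pmc_reprogram_counter()
now uses. Illustrative only, not buildable as-is: it assumes the
extended perf_event_create_kernel_counter() signature from earlier in
this series (which takes a group_leader argument; mainline's helper
lacks it) and the INTEL_TD_SLOTS / INTEL_FIXED_VMETRICS_EVENT
encodings used elsewhere in the series.

#include <linux/perf_event.h>

static int sketch_create_topdown_group(struct perf_event **evts)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_RAW,
		.size		= sizeof(attr),
		.config		= INTEL_TD_SLOTS,	/* leader: slots */
		.pinned		= 1,
		.disabled	= 1,	/* enable only after both exist */
	};
	struct perf_event *leader, *member;

	leader = perf_event_create_kernel_counter(&attr, -1, current,
						  NULL, NULL, NULL);
	if (IS_ERR(leader))
		return PTR_ERR(leader);

	attr.config = INTEL_FIXED_VMETRICS_EVENT;	/* member: vmetrics */
	attr.pinned = 0;	/* only the group leader may be pinned */
	member = perf_event_create_kernel_counter(&attr, -1, current,
						  leader, NULL, NULL);
	if (IS_ERR(member)) {
		perf_event_release_kernel(leader);
		return PTR_ERR(member);
	}

	evts[0] = leader;
	evts[1] = member;
	/* Both events exist; now it is safe to enable the group. */
	perf_event_enable(leader);
	perf_event_enable(member);
	return 0;
}

Creating both events disabled and enabling them afterwards mirrors the
attr.disabled dance in the patch: the first event must not start
counting before it learns that it is a group leader.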
From patchwork Wed Sep 27 03:31:22 2023
From: Dapeng Mi
Subject: [Patch v4 11/13] KVM: x86/pmu: Support topdown perf metrics feature
Date: Wed, 27 Sep 2023 11:31:22 +0800
Message-Id: <20230927033124.1226509-12-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>
References: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>

This patch adds topdown perf metrics support for KVM. Topdown perf
metrics is a feature on Intel CPUs which supports the TopDown
Microarchitecture Analysis (TMA) method, a structured analysis
methodology used to identify critical performance bottlenecks in
out-of-order processors. The details of topdown metrics support on
Intel processors can be found in the "Performance Metrics" section of
Intel's SDM.

Intel chips use fixed counter 3 and the PERF_METRICS MSR together to
support the topdown metrics feature. Fixed counter 3 counts the
elapsed CPU slots, while PERF_METRICS reports the topdown metrics
percentages.

Generally speaking, if KVM only observes fixed counter 3, it has no
way to know whether the guest is counting/sampling a standalone slots
event or profiling the topdown perf metrics.
Fortunately, topdown metrics profiling always manipulates fixed
counter 3 and the PERF_METRICS MSR together, in a fixed sequence: the
FIXED_CTR3 MSR is written first and then PERF_METRICS follows. So KVM
can assume topdown metrics profiling is running in the guest whenever
it observes a PERF_METRICS write.

In the current perf logic, an event group is required to handle
topdown metrics profiling; the group couples a slots event, which acts
as the group leader, with multiple metric events. To coordinate with
the perf topdown metrics handling logic and reduce the code changes in
KVM, we choose to follow the current mature vPMU PMC emulation
framework. The only difference is that we need to create an event
group for fixed counter 3 and manipulate the FIXED_CTR3 and
PERF_METRICS MSRs together, instead of a single event manipulating
only the FIXED_CTR3 MSR.

When the guest writes the PERF_METRICS MSR for the first time, KVM
creates an event group which couples a slots event and a virtual
metrics event. In this group, the slots event claims the fixed counter
3 HW resource and acts as the group leader, as the perf system
requires, while the virtual metrics event claims the PERF_METRICS MSR.
This event group mirrors the perf metrics event group on the host and
is scheduled by the host perf system.

In this proposal, the count of the slots event is calculated and
emulated on the host and returned to the guest just like other normal
counters, but the metrics event is processed differently. KVM doesn't
calculate the real count of the topdown metrics; it just stores the
raw data of the PERF_METRICS MSR and returns the stored raw data to
the guest directly. Thus, the guest gets the real HW PERF_METRICS data,
which guarantees the calculation accuracy of the topdown metrics.

The whole procedure can be summarized as below; a guest-side sketch of
the sequence follows the list.

1. KVM intercepts the PERF_METRICS MSR write and, if not already done,
   marks fixed counter 3 as entering topdown profiling mode (sets the
   max_nr_events of fixed counter 3 to 2).
2. If the topdown metrics event group doesn't exist yet, create it
   first, then update the saved slots count and metrics data of the
   group events with the guest values. At last, enable the events so
   that the guest values are loaded into the HW FIXED_CTR3 and
   PERF_METRICS MSRs.
3. Modify the kvm_pmu_rdpmc() function to return the PERF_METRICS MSR
   raw data to the guest directly.
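The guest-side programming order that the heuristic above relies on
can be sketched as follows. This is an illustration only, not part of
the patch; wrmsrl() and the MSR constants stand in for the guest
kernel's accessors, and only the ordering matters.

static void guest_start_topdown_sketch(void)
{
	/* 1. Seed the slots count first... */
	wrmsrl(MSR_CORE_PERF_FIXED_CTR3, 0);
	/*
	 * 2. ...then seed the metrics; KVM intercepts this write and
	 * switches fixed counter 3 into topdown profiling mode.
	 */
	wrmsrl(MSR_PERF_METRICS, 0);
	/* 3. Finally enable fixed counter 3 via the fixed/global ctrl MSRs. */
}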

Signed-off-by: Dapeng Mi
---
 arch/x86/include/asm/kvm_host.h |  6 ++++
 arch/x86/kvm/pmu.c              | 62 +++++++++++++++++++++++++++++++--
 arch/x86/kvm/pmu.h              | 28 +++++++++++++++
 arch/x86/kvm/vmx/capabilities.h |  1 +
 arch/x86/kvm/vmx/pmu_intel.c    | 48 +++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h          |  5 +++
 arch/x86/kvm/x86.c              |  1 +
 7 files changed, 149 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bf1626b2b553..19f8fb65e21e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -487,6 +487,12 @@ enum pmc_type {
 	KVM_PMC_FIXED,
 };
 
+enum topdown_events {
+	KVM_TD_SLOTS = 0,
+	KVM_TD_METRICS = 1,
+	KVM_TD_EVENTS_MAX = 2,
+};
+
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b02a56c77647..fad7b2c10bb8 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -182,6 +182,30 @@ static u64 pmc_get_pebs_precise_level(struct kvm_pmc *pmc)
 	return 1;
 }
 
+static void pmc_setup_td_metrics_events_attr(struct kvm_pmc *pmc,
+					     struct perf_event_attr *attr,
+					     unsigned int event_idx)
+{
+	if (!pmc_is_topdown_metrics_used(pmc))
+		return;
+
+	/*
+	 * Set up the slots event attribute. When the slots event is
+	 * created for guest topdown metrics profiling, the sample
+	 * period must be 0.
+	 */
+	if (event_idx == KVM_TD_SLOTS)
+		attr->sample_period = 0;
+
+	/* Set up the vmetrics event attribute. */
+	if (event_idx == KVM_TD_METRICS) {
+		attr->config = INTEL_FIXED_VMETRICS_EVENT;
+		attr->sample_period = 0;
+		/* Only the group leader event can be pinned. */
+		attr->pinned = false;
+	}
+}
+
 static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
@@ -233,6 +257,8 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 
 	for (i = 0; i < pmc->max_nr_events; i++) {
 		group_leader = i ? pmc->perf_event : NULL;
+		pmc_setup_td_metrics_events_attr(pmc, &attr, i);
+
 		event = perf_event_create_kernel_counter(&attr, -1,
 							 current, group_leader,
 							 kvm_perf_overflow, pmc);
@@ -256,6 +282,12 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 	pmc->is_paused = false;
 	pmc->intr = intr || pebs;
 
+	if (pmc_is_topdown_metrics_active(pmc)) {
+		pmc_update_topdown_metrics(pmc);
+		/* KVM needs to inject a PMI for PERF_METRICS overflow. */
+		pmc->intr = true;
+	}
+
 	if (!attr.disabled)
 		return 0;
 
@@ -269,6 +301,7 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 {
 	u64 counter = pmc->counter;
 	unsigned int i;
+	u64 data;
 
 	if (!pmc->perf_event || pmc->is_paused)
 		return;
@@ -279,8 +312,15 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 	 * then disable the non-leader events.
 	 */
 	counter += perf_event_pause(pmc->perf_event, true);
-	for (i = 1; i < pmc->max_nr_events && pmc->perf_events[i]; i++)
-		perf_event_pause(pmc->perf_events[i], true);
+	for (i = 1; i < pmc->max_nr_events && pmc->perf_events[i]; i++) {
+		data = perf_event_pause(pmc->perf_events[i], true);
+		/*
+		 * The count of the vmetrics event actually stores the raw
+		 * data of PERF_METRICS; save it to extra_config.
+		 */
+		if (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS && i == KVM_TD_METRICS)
+			pmc->extra_config = data;
+	}
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
 }
@@ -557,6 +597,21 @@ static int kvm_pmu_rdpmc_vmware(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	return 0;
 }
 
+static inline int kvm_pmu_read_perf_metrics(struct kvm_vcpu *vcpu,
+					    unsigned int idx, u64 *data)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc) {
+		*data = 0;
+		return 1;
+	}
+
+	*data = pmc->extra_config;
+	return 0;
+}
+
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 {
 	bool fast_mode = idx & (1u << 31);
@@ -570,6 +625,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (is_vmware_backdoor_pmc(idx))
 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);
 
+	if (idx & INTEL_PMC_FIXED_RDPMC_METRICS)
+		return kvm_pmu_read_perf_metrics(vcpu, idx, data);
+
 	pmc = static_call(kvm_x86_pmu_rdpmc_ecx_to_pmc)(vcpu, idx, &mask);
 	if (!pmc)
 		return 1;
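A guest-side illustration of the new RDPMC path above. This is a
sketch, not from the series: it assumes the INTEL_PMC_FIXED_RDPMC_METRICS
type bit (bit 29 of the RDPMC index, per the kernel's definition) and
that the guest permits user-space rdpmc (CR4.PCE, or an mmap'ed perf
event enabling it).

#include <stdint.h>

static inline uint64_t rdpmc_raw(uint32_t idx)
{
	uint32_t lo, hi;

	__asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(idx));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	/* Type bit 29 selects the metrics space; counter index 0. */
	uint64_t metrics = rdpmc_raw(1u << 29);

	(void)metrics;	/* decode as in the patch 09/13 note above */
	return 0;
}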
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 3dc0deb83096..43abe793c11c 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -257,6 +257,34 @@ static inline bool pmc_is_globally_enabled(struct kvm_pmc *pmc)
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }
 
+static inline int pmc_is_topdown_metrics_used(struct kvm_pmc *pmc)
+{
+	return (pmc->idx == INTEL_PMC_IDX_FIXED_SLOTS) &&
+	       (pmc->max_nr_events == KVM_TD_EVENTS_MAX);
+}
+
+static inline int pmc_is_topdown_metrics_active(struct kvm_pmc *pmc)
+{
+	return pmc_is_topdown_metrics_used(pmc) &&
+	       pmc->perf_events[KVM_TD_METRICS];
+}
+
+static inline void pmc_update_topdown_metrics(struct kvm_pmc *pmc)
+{
+	struct perf_event *event;
+	int i;
+
+	struct td_metrics td_metrics = {
+		.slots = pmc->counter,
+		.metric = pmc->extra_config,
+	};
+
+	for (i = 0; i < pmc->max_nr_events; i++) {
+		event = pmc->perf_events[i];
+		perf_event_topdown_metrics(event, &td_metrics);
+	}
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 41a4533f9989..d8317552b634 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -22,6 +22,7 @@ extern int __read_mostly pt_mode;
 #define PT_MODE_HOST_GUEST	1
 
 #define PMU_CAP_FW_WRITES	(1ULL << 13)
+#define PMU_CAP_PERF_METRICS	BIT_ULL(15)
 #define PMU_CAP_LBR_FMT		0x3f
 
 struct nested_vmx_msrs {
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b45396e0a46c..04ccb8c6f7e4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -229,6 +229,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 		ret = (perf_capabilities & PERF_CAP_PEBS_BASELINE) &&
 			((perf_capabilities & PERF_CAP_PEBS_FORMAT) > 3);
 		break;
+	case MSR_PERF_METRICS:
+		ret = intel_pmu_metrics_is_enabled(vcpu) && (pmu->version > 1);
+		break;
 	default:
 		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -357,6 +360,43 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static int intel_pmu_handle_perf_metrics_access(struct kvm_vcpu *vcpu,
+						struct msr_data *msr_info, bool read)
+{
+	u32 index = msr_info->index;
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR3);
+
+	if (!pmc || index != MSR_PERF_METRICS)
+		return 1;
+
+	if (read) {
+		msr_info->data = pmc->extra_config;
+	} else {
+		/*
+		 * Save the guest PERF_METRICS data into extra_config; it is
+		 * read back and written to the PERF_METRICS MSR when the
+		 * event group is created later.
+		 */
+		pmc->extra_config = msr_info->data;
+		if (pmc_is_topdown_metrics_active(pmc)) {
+			pmc_update_topdown_metrics(pmc);
+		} else {
+			/*
+			 * If the slots/vmetrics event group has not been
+			 * created yet, set max_nr_events to 2 (slots event
+			 * + vmetrics event), so KVM knows topdown metrics
+			 * profiling is running in the guest and the
+			 * slots/vmetrics event group will be created later.
+			 */
+			pmc->max_nr_events = KVM_TD_EVENTS_MAX;
+		}
+	}
+
+	return 0;
+}
+
 static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -376,6 +416,10 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_PEBS_DATA_CFG:
 		msr_info->data = pmu->pebs_data_cfg;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, true))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -438,6 +482,10 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		pmu->pebs_data_cfg = data;
 		break;
+	case MSR_PERF_METRICS:
+		if (intel_pmu_handle_perf_metrics_access(vcpu, msr_info, false))
+			return 1;
+		break;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index c2130d2c8e24..63b6dcc360c2 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -670,6 +670,11 @@ static inline bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
 	return !!vcpu_to_lbr_records(vcpu)->nr;
 }
 
+static inline bool intel_pmu_metrics_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.perf_capabilities & PMU_CAP_PERF_METRICS;
+}
+
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
 int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
 void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 906af36850fb..1f4c4ebe2efc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1472,6 +1472,7 @@ static const u32 msrs_to_save_pmu[] = {
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
+	MSR_PERF_METRICS,
 
 	/* This part of MSRs should match KVM_INTEL_PMC_MAX_GENERIC. */
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,

From patchwork Wed Sep 27 03:31:23 2023
From: Dapeng Mi
Subject: [Patch v4 12/13] KVM: x86/pmu: Handle PERF_METRICS overflow
Date: Wed, 27 Sep 2023 11:31:23 +0800
Message-Id: <20230927033124.1226509-13-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>
References: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>

When fixed counter 3 overflows, the PMU subsequently also triggers a
PERF_METRICS overflow. Handle the PERF_METRICS overflow case: after
detecting a PERF_METRICS overflow on the host, inject a PMI into the
guest and set the PERF_METRICS overflow bit in the guest's
PERF_GLOBAL_STATUS MSR.
Signed-off-by: Dapeng Mi
---
 arch/x86/events/intel/core.c |  7 ++++++-
 arch/x86/kvm/pmu.c           | 19 +++++++++++++++----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index df56e091eb25..5c3271772d75 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3053,8 +3053,13 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 	 * Intel Perf metrics
 	 */
 	if (__test_and_clear_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT, (unsigned long *)&status)) {
+		struct perf_event *event = cpuc->events[GLOBAL_STATUS_PERF_METRICS_OVF_BIT];
+
 		handled++;
-		static_call(intel_pmu_update_topdown_event)(NULL);
+		if (event && is_vmetrics_event(event))
+			READ_ONCE(event->overflow_handler)(event, &data, regs);
+		else
+			static_call(intel_pmu_update_topdown_event)(NULL);
 	}
 
 	/*
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index fad7b2c10bb8..06c815859f77 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -101,7 +101,7 @@ static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 	kvm_pmu_deliver_pmi(vcpu);
 }
 
-static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
+static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi, bool metrics_of)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	bool skip_pmi = false;
@@ -121,7 +121,11 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 				      (unsigned long *)&pmu->global_status);
 		}
 	} else {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
+		if (metrics_of)
+			__set_bit(GLOBAL_STATUS_PERF_METRICS_OVF_BIT,
+				  (unsigned long *)&pmu->global_status);
+		else
+			__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
 	}
 
 	if (!pmc->intr || skip_pmi)
@@ -141,11 +145,18 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 	kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
 }
 
+static inline bool is_vmetrics_event(struct perf_event *event)
+{
+	return (event->attr.config & INTEL_ARCH_EVENT_MASK) ==
+		INTEL_FIXED_VMETRICS_EVENT;
+}
+
 static void kvm_perf_overflow(struct perf_event *perf_event,
 			      struct perf_sample_data *data,
 			      struct pt_regs *regs)
 {
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	bool metrics_of = is_vmetrics_event(perf_event);
 
 	/*
 	 * Ignore overflow events for counters that are scheduled to be
@@ -155,7 +166,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;
 
-	__kvm_perf_overflow(pmc, true);
+	__kvm_perf_overflow(pmc, true, metrics_of);
 
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
@@ -490,7 +501,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 		goto reprogram_complete;
 
 	if (pmc->counter < pmc->prev_counter)
-		__kvm_perf_overflow(pmc, false);
+		__kvm_perf_overflow(pmc, false, false);
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
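To make the guest-visible behavior concrete, here is a guest-side
sketch of a PMI handler consuming the overflow bit KVM now sets.
Illustrative only: it assumes the kernel's
GLOBAL_STATUS_PERF_METRICS_OVF_BIT (bit 48 of PERF_GLOBAL_STATUS) and
rdmsrl()/wrmsrl() style accessors.

static void guest_pmi_sketch(void)
{
	u64 status;

	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);
	if (status & (1ULL << 48)) {	/* GLOBAL_STATUS_PERF_METRICS_OVF_BIT */
		/* Re-seed slots and metrics, as on a real topdown overflow. */
		wrmsrl(MSR_CORE_PERF_FIXED_CTR3, 0);
		wrmsrl(MSR_PERF_METRICS, 0);
		/* Ack the overflow in the guest's view of GLOBAL_STATUS. */
		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, 1ULL << 48);
	}
}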
From patchwork Wed Sep 27 03:31:24 2023
From: Dapeng Mi
Subject: [Patch v4 13/13] KVM: x86/pmu: Expose Topdown in MSR_IA32_PERF_CAPABILITIES
Date: Wed, 27 Sep 2023 11:31:24 +0800
Message-Id: <20230927033124.1226509-14-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>
References: <20230927033124.1226509-1-dapeng1.mi@linux.intel.com>

Topdown support is enumerated via IA32_PERF_CAPABILITIES[bit 15].
Enable this bit for the guest when the feature is available on the
host.

Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Signed-off-by: Dapeng Mi
---
 arch/x86/kvm/vmx/pmu_intel.c | 3 +++
 arch/x86/kvm/vmx/vmx.c       | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 04ccb8c6f7e4..5783cde00054 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -614,6 +614,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED));
 	pmu->global_ctrl_mask = counter_mask;
 
+	if (intel_pmu_metrics_is_enabled(vcpu))
+		pmu->global_ctrl_mask &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
+
 	/*
 	 * GLOBAL_STATUS and GLOBAL_OVF_CONTROL (a.k.a. GLOBAL_STATUS_RESET)
 	 * share reserved bit definitions.
	 * The kernel just happens to use
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..5686a74c14bb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7850,6 +7850,8 @@ static u64 vmx_get_perf_capabilities(void)
 			perf_cap &= ~PERF_CAP_PEBS_BASELINE;
 	}
 
+	perf_cap |= host_perf_cap & PMU_CAP_PERF_METRICS;
+
 	return perf_cap;
 }
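For completeness, a guest-side sketch of how the newly exposed
capability would be discovered. Illustrative only: it assumes
MSR_IA32_PERF_CAPABILITIES bit 15 matches PMU_CAP_PERF_METRICS above,
and that the PDCM feature (CPUID.1:ECX[15]) gates access to the MSR.

static bool guest_has_perf_metrics_sketch(void)
{
	u64 caps;

	/* PDCM indicates IA32_PERF_CAPABILITIES is implemented. */
	if (!boot_cpu_has(X86_FEATURE_PDCM))
		return false;

	rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
	return caps & BIT_ULL(15);	/* PMU_CAP_PERF_METRICS */
}

With this bit visible, a guest perf stack can decide to program fixed
counter 3 plus PERF_METRICS exactly as it would on bare metal, which is
what the earlier patches in this series emulate.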