From patchwork Fri Nov 12 09:51:33 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616477
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 1/7] KVM: x86/pmu: Make top-down.slots event unavailable in supported leaf
Date: Fri, 12 Nov 2021 17:51:33 +0800
Message-Id: <20211112095139.21775-2-likexu@tencent.com>
In-Reply-To: <20211112095139.21775-1-likexu@tencent.com>
References: <20211112095139.21775-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

When we choose to disable the fourth fixed counter TOPDOWN.SLOTS, we also need to reduce the length of the 0AH.EBX bit vector, which enumerates the architectural performance monitoring events, and to set 0AH.EBX[bit 7] to 1 if the new value of EAX[31:24] is still greater than 7.
Fixes: 2e8cd7a3b8287 ("kvm: x86: limit the maximum number of vPMU fixed counters to 3")
Signed-off-by: Like Xu
---
 arch/x86/kvm/cpuid.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 2d70edb0f323..bbf8cf3f43b0 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -746,6 +746,20 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		eax.split.mask_length = cap.events_mask_len;
 		edx.split.num_counters_fixed =
 			min(cap.num_counters_fixed, MAX_FIXED_COUNTERS);
+
+		/*
+		 * The 8th architectural event (top-down slots) will be supported
+		 * if the 4th fixed counter exists && EAX[31:24] > 7 && EBX[7] = 0.
+		 *
+		 * For now, KVM needs to make this event unavailable.
+		 */
+		if (edx.split.num_counters_fixed < 4) {
+			if (eax.split.mask_length > 7)
+				eax.split.mask_length--;
+			if (eax.split.mask_length > 7)
+				cap.events_mask |= BIT_ULL(7);
+		}
+
 		edx.split.bit_width_fixed = cap.bit_width_fixed;
 		if (cap.version)
 			edx.split.anythread_deprecated = 1;

From patchwork Fri Nov 12 09:51:34 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616479
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 2/7] KVM: x86/pmu: Fix available_event_types check for REF_CPU_CYCLES event
Date: Fri, 12 Nov 2021 17:51:34 +0800
Message-Id: <20211112095139.21775-3-likexu@tencent.com>
In-Reply-To: <20211112095139.21775-1-likexu@tencent.com>
References: <20211112095139.21775-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

In the CPUID 0x0A.EBX bit vector, event [7] should be the as-yet-unimplemented Intel architectural performance event "Topdown Slots", not the *kernel* generalized common hardware event "REF_CPU_CYCLES". We can therefore skip the CPUID unavailability check in intel_find_arch_event() for the last REF_CPU_CYCLES entry, and update the confusing comment.
Fixes: 62079d8a43128 ("KVM: PMU: add proper support for fixed counter 2")
Signed-off-by: Like Xu
---
 arch/x86/kvm/vmx/pmu_intel.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b8e0d21b7c8a..bc6845265362 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -21,7 +21,6 @@
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
 
 static struct kvm_event_hw_type_mapping intel_arch_events[] = {
-	/* Index must match CPUID 0x0A.EBX bit vector */
 	[0] = { 0x3c, 0x00, PERF_COUNT_HW_CPU_CYCLES },
 	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
 	[2] = { 0x3c, 0x01, PERF_COUNT_HW_BUS_CYCLES },
@@ -29,6 +28,7 @@ static struct kvm_event_hw_type_mapping intel_arch_events[] = {
 	[4] = { 0x2e, 0x41, PERF_COUNT_HW_CACHE_MISSES },
 	[5] = { 0xc4, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
 	[6] = { 0xc5, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
+	/* The above index must match CPUID 0x0A.EBX bit vector */
 	[7] = { 0x00, 0x03, PERF_COUNT_HW_REF_CPU_CYCLES },
 };
 
@@ -75,9 +75,9 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
-		if (intel_arch_events[i].eventsel == event_select
-		    && intel_arch_events[i].unit_mask == unit_mask
-		    && (pmu->available_event_types & (1 << i)))
+		if (intel_arch_events[i].eventsel == event_select &&
+		    intel_arch_events[i].unit_mask == unit_mask &&
+		    ((i > 6) || pmu->available_event_types & (1 << i)))
 			break;
 
 	if (i == ARRAY_SIZE(intel_arch_events))

From patchwork Fri Nov 12 09:51:35 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616481
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 3/7] KVM: x86/pmu: Pass "struct kvm_pmu *" to the find_fixed_event()
Date: Fri, 12 Nov 2021 17:51:35 +0800
Message-Id: <20211112095139.21775-4-likexu@tencent.com>
In-Reply-To: <20211112095139.21775-1-likexu@tencent.com>
References: <20211112095139.21775-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

KVM userspace may break some hw events (including cpu-cycles, instructions, ref-cpu-cycles) by marking bits in the guest CPUID 0AH.EBX leaf, yet those counters remain accessible. As a preliminary step, the availability check needs access to the pmu->available_event_types value from find_fixed_event() as well as find_arch_event(), so pass "struct kvm_pmu *" down to both.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 3 ++-
 arch/x86/kvm/pmu.h           | 2 +-
 arch/x86/kvm/svm/pmu.c       | 2 +-
 arch/x86/kvm/vmx/pmu_intel.c | 2 +-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0772bad9165c..7093fc70cd38 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -245,6 +245,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	bool pmi = ctrl & 0x8;
 	struct kvm_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 
 	pmc_pause_counter(pmc);
 
@@ -268,7 +269,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	pmc->current_config = (u64)ctrl;
 	pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE,
-			      kvm_x86_ops.pmu_ops->find_fixed_event(idx),
+			      kvm_x86_ops.pmu_ops->find_fixed_event(pmu, idx),
 			      !(en_field & 0x2), /* exclude user */
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 0e4f2b1fa9fb..fe29537b1343 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -26,7 +26,7 @@ struct kvm_event_hw_type_mapping {
 struct kvm_pmu_ops {
 	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
 				    u8 unit_mask);
-	unsigned (*find_fixed_event)(int idx);
+	unsigned int (*find_fixed_event)(struct kvm_pmu *pmu, int idx);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index fdf587f19c5f..3ee8f86d9ace 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -152,7 +152,7 @@ static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 }
 
 /* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-static unsigned amd_find_fixed_event(int idx)
+static unsigned int amd_find_fixed_event(struct kvm_pmu *pmu, int idx)
 {
 	return PERF_COUNT_HW_MAX;
 }
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index bc6845265362..4c04e94ae548 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -86,7 +86,7 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 	return intel_arch_events[i].event_type;
 }
 
-static unsigned intel_find_fixed_event(int idx)
+static unsigned int intel_find_fixed_event(struct kvm_pmu *pmu, int idx)
 {
 	u32 event;
 	size_t size = ARRAY_SIZE(fixed_pmc_events);

From patchwork Fri Nov 12 09:51:36 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616483
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 4/7] KVM: x86/pmu: Avoid perf_event creation for invalid counter config
Date: Fri, 12 Nov 2021 17:51:36 +0800
Message-Id: <20211112095139.21775-5-likexu@tencent.com>
In-Reply-To: <20211112095139.21775-1-likexu@tencent.com>
References: <20211112095139.21775-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

KVM needs to be fixed to avoid perf_event creation when the requested hw event on a gp or fixed counter is marked as unavailable in the Intel guest CPUID 0AH.EBX leaf.

Use is_intel_cpuid_event() to distinguish whether the hw event is an Intel pre-defined architectural event, so that we can decide to reprogram it with a PERF_TYPE_HARDWARE (for fixed and gp) or PERF_TYPE_RAW (for gp only) perf_event, or to avoid creating a perf_event altogether.

If an Intel CPUID event is marked as unavailable per pmu->available_event_types, intel_find_[fixed|arch]_event() returns the new special value "PERF_COUNT_HW_MAX + 1" to tell the caller to avoid creating a perf_event and not to fall back to PERF_TYPE_RAW mode for gp counters.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           |  8 +++++++
 arch/x86/kvm/vmx/pmu_intel.c | 45 +++++++++++++++++++++++++++++++-----
 2 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 7093fc70cd38..3b47bd92e7bb 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -111,6 +111,14 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 		.config = config,
 	};
 
+	/*
+	 * A "config > PERF_COUNT_HW_MAX" only appears when
+	 * the kernel generic event is marked as unavailable
+	 * in the Intel guest architectural event CPUID leaf.
+	 */
+	if (type == PERF_TYPE_HARDWARE && config >= PERF_COUNT_HW_MAX)
+		return;
+
 	attr.sample_period = get_sample_period(pmc, pmc->counter);
 
 	if (in_tx)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 4c04e94ae548..4f58c14efa61 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,17 +68,39 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 		reprogram_counter(pmu, bit);
 }
 
+/* UMask and Event Select Encodings for Intel CPUID Events */
+static inline bool is_intel_cpuid_event(u8 event_select, u8 unit_mask)
+{
+	if ((!unit_mask && event_select == 0x3C) ||
+	    (!unit_mask && event_select == 0xC0) ||
+	    (unit_mask == 0x01 && event_select == 0x3C) ||
+	    (unit_mask == 0x4F && event_select == 0x2E) ||
+	    (unit_mask == 0x41 && event_select == 0x2E) ||
+	    (!unit_mask && event_select == 0xC4) ||
+	    (!unit_mask && event_select == 0xC5))
+		return true;
+
+	/* The unimplemented topdown.slots event check is skipped. */
+	return false;
+}
+
 static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 				      u8 event_select,
 				      u8 unit_mask)
 {
 	int i;
 
-	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
-		if (intel_arch_events[i].eventsel == event_select &&
-		    intel_arch_events[i].unit_mask == unit_mask &&
-		    ((i > 6) || pmu->available_event_types & (1 << i)))
-			break;
+	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++) {
+		if (intel_arch_events[i].eventsel != event_select ||
+		    intel_arch_events[i].unit_mask != unit_mask)
+			continue;
+
+		if (is_intel_cpuid_event(event_select, unit_mask) &&
+		    !(pmu->available_event_types & BIT_ULL(i)))
+			return PERF_COUNT_HW_MAX + 1;
+
+		break;
+	}
 
 	if (i == ARRAY_SIZE(intel_arch_events))
 		return PERF_COUNT_HW_MAX;
@@ -90,12 +112,23 @@ static unsigned int intel_find_fixed_event(struct kvm_pmu *pmu, int idx)
 {
 	u32 event;
 	size_t size = ARRAY_SIZE(fixed_pmc_events);
+	u8 event_select, unit_mask;
+	unsigned int event_type;
 
 	if (idx >= size)
 		return PERF_COUNT_HW_MAX;
 
 	event = fixed_pmc_events[array_index_nospec(idx, size)];
-	return intel_arch_events[event].event_type;
+
+	event_select = intel_arch_events[event].eventsel;
+	unit_mask = intel_arch_events[event].unit_mask;
+	event_type = intel_arch_events[event].event_type;
+
+	if (is_intel_cpuid_event(event_select, unit_mask) &&
+	    !(pmu->available_event_types & BIT_ULL(event_type)))
+		return PERF_COUNT_HW_MAX + 1;
+
+	return event_type;
 }
 
 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */

From patchwork Fri Nov 12 09:51:37 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616485
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 5/7] KVM: x86/pmu: Refactor pmu->available_event_types field using BITMAP
Date: Fri, 12 Nov 2021 17:51:37 +0800
Message-Id: <20211112095139.21775-6-likexu@tencent.com>
In-Reply-To: <20211112095139.21775-1-likexu@tencent.com>
References: <20211112095139.21775-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

Replace the explicit declaration of "unsigned available_event_types" with the generic macro DECLARE_BITMAP and rename the field to "avail_cpuid_events" so that it is self-explanatory.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    | 11 +++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 88fce6ab4bbd..2e69dec3ad7b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -495,7 +495,6 @@ struct kvm_pmc {
 struct kvm_pmu {
 	unsigned nr_arch_gp_counters;
 	unsigned nr_arch_fixed_counters;
-	unsigned available_event_types;
 	u64 fixed_ctr_ctrl;
 	u64 global_ctrl;
 	u64 global_status;
@@ -510,6 +509,7 @@ struct kvm_pmu {
 	DECLARE_BITMAP(reprogram_pmi, X86_PMC_IDX_MAX);
 	DECLARE_BITMAP(all_valid_pmc_idx, X86_PMC_IDX_MAX);
 	DECLARE_BITMAP(pmc_in_use, X86_PMC_IDX_MAX);
+	DECLARE_BITMAP(avail_cpuid_events, X86_PMC_IDX_MAX);
 
 	/*
 	 * The gate to release perf_events not marked in
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 4f58c14efa61..db36e743c3cc 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -96,7 +96,7 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 			continue;
 
 		if (is_intel_cpuid_event(event_select, unit_mask) &&
-		    !(pmu->available_event_types & BIT_ULL(i)))
+		    !test_bit(i, pmu->avail_cpuid_events))
 			return PERF_COUNT_HW_MAX + 1;
 
 		break;
@@ -125,7 +125,7 @@ static unsigned int intel_find_fixed_event(struct kvm_pmu *pmu, int idx)
 	event_type = intel_arch_events[event].event_type;
 
 	if (is_intel_cpuid_event(event_select, unit_mask) &&
-	    !(pmu->available_event_types & BIT_ULL(event_type)))
+	    !test_bit(event_type, pmu->avail_cpuid_events))
 		return PERF_COUNT_HW_MAX + 1;
 
 	return event_type;
@@ -497,6 +497,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	unsigned long avail_cpuid_events;
 	struct x86_pmu_capability x86_pmu;
 	struct kvm_cpuid_entry2 *entry;
 
@@ -527,8 +528,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	eax.split.bit_width = min_t(int, eax.split.bit_width, x86_pmu.bit_width_gp);
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
 	eax.split.mask_length = min_t(int, eax.split.mask_length, x86_pmu.events_mask_len);
-	pmu->available_event_types = ~entry->ebx &
-			((1ull << eax.split.mask_length) - 1);
+	avail_cpuid_events = ~entry->ebx & ((1ull << eax.split.mask_length) - 1);
+	bitmap_copy(pmu->avail_cpuid_events,
+		    (unsigned long *)&avail_cpuid_events,
+		    eax.split.mask_length);
 
 	if (pmu->version == 1) {
 		pmu->nr_arch_fixed_counters = 0;

From patchwork Fri Nov 12 09:51:38 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616487
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Like Xu, Peter Zijlstra
Subject: [PATCH 6/7] perf: x86/core: Add interface to query perfmon_event_map[] directly
Date: Fri, 12 Nov 2021 17:51:38 +0800
Message-Id: <20211112095139.21775-7-likexu@tencent.com>

Currently, we have [intel|knc|p4|p6]_perfmon_event_map on the Intel
platforms and amd_[f17h]_perfmon_event_map on the AMD platforms. Early
KVM code and other potential perf_event users may have hard-coded
copies of these perfmon maps (e.g., arch/x86/kvm/svm/pmu.c), so it does
not make sense to program a common hardware event from the generic
"enum perf_hw_id" once the two tables diverge.

Provide an interface for callers outside the perf subsystem to get the
counter config based on the perfmon_event_map currently in use; it also
helps to save bytes.
Cc: Peter Zijlstra
Signed-off-by: Like Xu
Reported-by: kernel test robot
---
 arch/x86/events/core.c            | 9 +++++++++
 arch/x86/include/asm/perf_event.h | 5 +++++
 2 files changed, 14 insertions(+)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 2a57dbed4894..dc88d39cec1b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -691,6 +691,15 @@ void x86_pmu_disable_all(void)
 	}
 }
 
+u64 perf_get_hw_event_config(int perf_hw_id)
+{
+	if (perf_hw_id < x86_pmu.max_events)
+		return x86_pmu.event_map(perf_hw_id);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(perf_get_hw_event_config);
+
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
 	return static_call(x86_pmu_guest_get_msrs)(nr);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc1b5003713..11a93cb1198b 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -492,9 +492,14 @@ static inline void perf_check_microcode(void) { }
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
 extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr);
+extern u64 perf_get_hw_event_config(int perf_hw_id);
 extern int x86_perf_get_lbr(struct x86_pmu_lbr *lbr);
 #else
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr);
+static inline u64 perf_get_hw_event_config(int perf_hw_id)
+{
+	return 0;
+}
 static inline int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
 {
 	return -1;

From patchwork Fri Nov 12 09:51:39 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12616489
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 Like Xu
Subject: [PATCH 7/7] KVM: x86/pmu: Setup the {inte|amd}_event_mapping[] when hardware_setup
Date: Fri, 12 Nov 2021 17:51:39 +0800
Message-Id: <20211112095139.21775-8-likexu@tencent.com>

The current amd_event_mapping[] is only valid for "K7 and later, up to
and including Family 16h", and amd_f17h_perfmon_event_map[] is needed
for "Family 17h and later". Fix this in a more generic way.

For AMD platforms, the newly introduced interface
perf_get_hw_event_config() can be used to fill the newly introduced
global kernel_arch_events[]. For Intel platforms, we need to
distinguish "kernel_arch_events" (ordered by the kernel's generic
"enum perf_hw_id") from "intel_cpuid_events" (ordered by the Intel
CPUID bit vector).
To keep the availability check for Intel CPUID events,
get_perf_hw_id_from_cpuid_idx() is added to translate the CPUID bit
index into the "enum perf_hw_id" index used by pmu->avail_cpuid_events,
now that the CPUID-ordered intel_arch_events[] is replaced by the
perf_hw_id-ordered kernel_arch_events[].

When kernel_arch_events[] is initialized, the original 8-element array
becomes a 10-element array whose two new members have zero eventsel and
unit_mask, which would make lookups in find_arch_event() ambiguous. In
that case, KVM does not query kernel_arch_events[] when the trapped
event_select and unit_mask are both 0; it falls back to PERF_TYPE_RAW
mode to program the perf_event.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 24 ++++++++++++-
 arch/x86/kvm/pmu.h           |  2 ++
 arch/x86/kvm/svm/pmu.c       | 22 +++---------
 arch/x86/kvm/vmx/pmu_intel.c | 70 +++++++++++++++++++++++-------------
 arch/x86/kvm/x86.c           |  1 +
 5 files changed, 76 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3b47bd92e7bb..03d28912309a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -19,6 +19,9 @@
 #include "lapic.h"
 #include "pmu.h"
 
+struct kvm_event_hw_type_mapping kernel_arch_events[PERF_COUNT_HW_MAX];
+EXPORT_SYMBOL_GPL(kernel_arch_events);
+
 /* This is enough to filter the vast majority of currently defined events. */
 #define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300
 
@@ -217,7 +220,9 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
 	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 
-	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
+	/* Fall back to PERF_TYPE_RAW mode if event_select and unit_mask are both 0. */
+	if ((event_select | unit_mask) &&
+	    !(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
 			  ARCH_PERFMON_EVENTSEL_INV |
 			  ARCH_PERFMON_EVENTSEL_CMASK |
 			  HSW_IN_TX |
@@ -499,6 +504,23 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 	kvm_pmu_reset(vcpu);
 }
 
+/* Initialize common hardware events mapping based on enum perf_hw_id. */
+void kvm_pmu_hw_events_mapping_setup(void)
+{
+	u64 config;
+	int i;
+
+	for (i = 0; i < PERF_COUNT_HW_MAX; i++) {
+		config = perf_get_hw_event_config(i) & 0xFFFFULL;
+
+		kernel_arch_events[i] = (struct kvm_event_hw_type_mapping){
+			.eventsel = config & ARCH_PERFMON_EVENTSEL_EVENT,
+			.unit_mask = (config & ARCH_PERFMON_EVENTSEL_UMASK) >> 8,
+			.event_type = i,
+		};
+	}
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index fe29537b1343..688b784f1e26 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -160,8 +160,10 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
+void kvm_pmu_hw_events_mapping_setup(void);
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
 
 extern struct kvm_pmu_ops intel_pmu_ops;
 extern struct kvm_pmu_ops amd_pmu_ops;
+extern struct kvm_event_hw_type_mapping kernel_arch_events[];
 #endif /* __KVM_X86_PMU_H */
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 3ee8f86d9ace..68814b3b6e27 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -32,18 +32,6 @@ enum index {
 	INDEX_ERROR,
 };
 
-/* duplicated from amd_perfmon_event_map, K7 and above should work. */
-static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
-	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
-	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
-	[2] = { 0x7d, 0x07, PERF_COUNT_HW_CACHE_REFERENCES },
-	[3] = { 0x7e, 0x07, PERF_COUNT_HW_CACHE_MISSES },
-	[4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
-	[5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
-	[6] = { 0xd0, 0x00, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
-	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
-};
-
 static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
@@ -140,15 +128,15 @@ static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 {
 	int i;
 
-	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
-		if (amd_event_mapping[i].eventsel == event_select
-		    && amd_event_mapping[i].unit_mask == unit_mask)
+	for (i = 0; i < PERF_COUNT_HW_MAX; i++)
+		if (kernel_arch_events[i].eventsel == event_select &&
+		    kernel_arch_events[i].unit_mask == unit_mask)
 			break;
 
-	if (i == ARRAY_SIZE(amd_event_mapping))
+	if (i == PERF_COUNT_HW_MAX)
 		return PERF_COUNT_HW_MAX;
 
-	return amd_event_mapping[i].event_type;
+	return kernel_arch_events[i].event_type;
 }
 
 /* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index db36e743c3cc..40b4112aefa4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -20,20 +20,14 @@
 
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
 
-static struct kvm_event_hw_type_mapping intel_arch_events[] = {
-	[0] = { 0x3c, 0x00, PERF_COUNT_HW_CPU_CYCLES },
-	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
-	[2] = { 0x3c, 0x01, PERF_COUNT_HW_BUS_CYCLES },
-	[3] = { 0x2e, 0x4f, PERF_COUNT_HW_CACHE_REFERENCES },
-	[4] = { 0x2e, 0x41, PERF_COUNT_HW_CACHE_MISSES },
-	[5] = { 0xc4, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
-	[6] = { 0xc5, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
-	/* The above index must match CPUID 0x0A.EBX bit vector */
-	[7] = { 0x00, 0x03, PERF_COUNT_HW_REF_CPU_CYCLES },
-};
-
-/* mapping between fixed pmc index and intel_arch_events array */
-static int fixed_pmc_events[] = {1, 0, 7};
+/*
+ * mapping between fixed pmc index and kernel_arch_events array
+ *
+ * PERF_COUNT_HW_INSTRUCTIONS
+ * PERF_COUNT_HW_CPU_CYCLES
+ * PERF_COUNT_HW_REF_CPU_CYCLES
+ */
+static int fixed_pmc_events[] = {1, 0, 9};
 
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 {
@@ -90,9 +84,9 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 {
 	int i;
 
-	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++) {
-		if (intel_arch_events[i].eventsel != event_select ||
-		    intel_arch_events[i].unit_mask != unit_mask)
+	for (i = 0; i < PERF_COUNT_HW_MAX; i++) {
+		if (kernel_arch_events[i].eventsel != event_select ||
+		    kernel_arch_events[i].unit_mask != unit_mask)
 			continue;
 
 		if (is_intel_cpuid_event(event_select, unit_mask) &&
@@ -102,10 +96,10 @@ static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
 		break;
 	}
 
-	if (i == ARRAY_SIZE(intel_arch_events))
+	if (i == PERF_COUNT_HW_MAX)
 		return PERF_COUNT_HW_MAX;
 
-	return intel_arch_events[i].event_type;
+	return kernel_arch_events[i].event_type;
 }
 
 static unsigned int intel_find_fixed_event(struct kvm_pmu *pmu, int idx)
@@ -120,9 +114,9 @@ static unsigned int intel_find_fixed_event(struct kvm_pmu *pmu, int idx)
 	event = fixed_pmc_events[array_index_nospec(idx, size)];
 
-	event_select = intel_arch_events[event].eventsel;
-	unit_mask = intel_arch_events[event].unit_mask;
-	event_type = intel_arch_events[event].event_type;
+	event_select = kernel_arch_events[event].eventsel;
+	unit_mask = kernel_arch_events[event].unit_mask;
+	event_type = kernel_arch_events[event].event_type;
 
 	if (is_intel_cpuid_event(event_select, unit_mask) &&
 	    !test_bit(event_type, pmu->avail_cpuid_events))
@@ -493,6 +487,33 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
+static inline int get_perf_hw_id_from_cpuid_idx(int bit)
+{
+	switch (bit) {
+	case 0:
+	case 1:
+		return bit;
+	case 2:
+		return PERF_COUNT_HW_BUS_CYCLES;
+	case 3:
+	case 4:
+	case 5:
+	case 6:
+		return --bit;
+	}
+
+	return PERF_COUNT_HW_MAX;
+}
+
+static inline void setup_available_kernel_arch_events(struct kvm_pmu *pmu,
+	unsigned int avail_cpuid_events, unsigned int mask_length)
+{
+	int bit;
+
+	for_each_set_bit(bit, (unsigned long *)&avail_cpuid_events, mask_length)
+		__set_bit(get_perf_hw_id_from_cpuid_idx(bit),
+			  pmu->avail_cpuid_events);
+}
+
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -529,9 +550,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
 	eax.split.mask_length = min_t(int, eax.split.mask_length, x86_pmu.events_mask_len);
 	avail_cpuid_events = ~entry->ebx & ((1ull << eax.split.mask_length) - 1);
-	bitmap_copy(pmu->avail_cpuid_events,
-		    (unsigned long *)&avail_cpuid_events,
-		    eax.split.mask_length);
+	setup_available_kernel_arch_events(pmu, avail_cpuid_events,
+					   eax.split.mask_length);
 
 	if (pmu->version == 1) {
 		pmu->nr_arch_fixed_counters = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ac83d873d65b..8f7e70f59665 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11317,6 +11317,7 @@ int kvm_arch_hardware_setup(void *opaque)
 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
 	kvm_ops_static_call_update();
+	kvm_pmu_hw_events_mapping_setup();
 
 	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
 		supported_xss = 0;