From patchwork Fri Nov 19 06:48:53 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12628315
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/4] KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs
Date: Fri, 19 Nov 2021 14:48:53 +0800
Message-Id: <20211119064856.77948-2-likexu@tencent.com>
In-Reply-To: <20211119064856.77948-1-likexu@tencent.com>
References: <20211119064856.77948-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

The pmc->eventsel field is currently underutilized for fixed counters. It can be set up for all known available fixed counters, since we have a mapping between the fixed pmc index and the intel_arch_events array.
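As a sanity check of the encoding this patch programs into pmc->eventsel, here is a small standalone C sketch. The table values below are illustrative stand-ins for the kernel's intel_arch_events/fixed_pmc_events arrays (not copied from the kernel source), but the bit layout matches the patch: unit mask in bits 15:8, event select in bits 7:0.

```c
#include <stdint.h>

/* Hypothetical mirror of an intel_arch_events entry: each
 * architectural event is identified by an event select code and a
 * unit mask, as programmed into an IA32_PERFEVTSELx MSR. */
struct arch_event {
	uint8_t eventsel;
	uint8_t unit_mask;
};

/* Illustrative values: instructions retired (0xc0/0x00), unhalted
 * core cycles (0x3c/0x00), unhalted reference cycles (0x3c/0x01) --
 * the three events Intel fixed counters 0..2 count. */
static const struct arch_event intel_arch_events[] = {
	{ 0xc0, 0x00 }, /* index 0: instructions retired  */
	{ 0x3c, 0x00 }, /* index 1: unhalted core cycles  */
	{ 0x3c, 0x01 }, /* index 2: unhalted ref cycles   */
};

/* fixed counter i -> index into intel_arch_events (stand-in mapping) */
static const int fixed_pmc_events[] = { 0, 1, 2 };

/* The encoding used by setup_fixed_pmc_eventsel() in this patch:
 * pmc->eventsel = (unit_mask << 8) | eventsel. */
static uint64_t fixed_eventsel(int i)
{
	const struct arch_event *e = &intel_arch_events[fixed_pmc_events[i]];

	return ((uint64_t)e->unit_mask << 8) | e->eventsel;
}
```

With this encoding, a fixed counter's eventsel is directly comparable to a gp counter's, which is what the later consistency checks rely on.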
For either gp or fixed counters, this will simplify the later checks for consistency between eventsel and perf_hw_id.

Signed-off-by: Like Xu
---
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1b7456b2177b..b7ab5fd03681 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -459,6 +459,21 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
+static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
+{
+	size_t size = ARRAY_SIZE(fixed_pmc_events);
+	struct kvm_pmc *pmc;
+	u32 event;
+	int i;
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		event = fixed_pmc_events[array_index_nospec(i, size)];
+		pmc->eventsel = (intel_arch_events[event].unit_mask << 8) |
+			intel_arch_events[event].eventsel;
+	}
+}
+
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -506,6 +521,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			  edx.split.bit_width_fixed, x86_pmu.bit_width_fixed);
 		pmu->counter_bitmask[KVM_PMC_FIXED] =
 			((u64)1 << edx.split.bit_width_fixed) - 1;
+		setup_fixed_pmc_eventsel(pmu);
 	}
 
 	pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |

From patchwork Fri Nov 19 06:48:54 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12628317
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] KVM: x86/pmu: Refactoring find_arch_event() to pmc_perf_hw_id()
Date: Fri, 19 Nov 2021 14:48:54 +0800
Message-Id: <20211119064856.77948-3-likexu@tencent.com>
In-Reply-To: <20211119064856.77948-1-likexu@tencent.com>
References: <20211119064856.77948-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

find_arch_event() returns an "unsigned int" value, which pmc_reprogram_counter() uses to program a PERF_TYPE_HARDWARE type perf_event. The returned value is actually the kernel-defined generic perf_hw_id, so rename the callback to pmc_perf_hw_id() with simpler incoming parameters for better self-explanation.
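The "simpler incoming parameters" work because both fields the old callback took can be re-derived from pmc->eventsel itself. A minimal standalone sketch of that decoding, assuming the usual IA32_PERFEVTSELx layout (event select in bits 7:0, unit mask in bits 15:8, matching the kernel's ARCH_PERFMON_EVENTSEL_EVENT/_UMASK masks):

```c
#include <stdint.h>

/* Field masks for IA32_PERFEVTSELx, mirroring the kernel's
 * ARCH_PERFMON_EVENTSEL_EVENT and ARCH_PERFMON_EVENTSEL_UMASK. */
#define EVENTSEL_EVENT 0x000000ffULL
#define EVENTSEL_UMASK 0x0000ff00ULL

/* After this patch, amd/intel_pmc_perf_hw_id() derive both lookup
 * keys from pmc->eventsel instead of taking them as parameters: */
static void decode_eventsel(uint64_t eventsel,
			    uint8_t *event_select, uint8_t *unit_mask)
{
	*event_select = eventsel & EVENTSEL_EVENT;
	*unit_mask = (eventsel & EVENTSEL_UMASK) >> 8;
}
```

Because patch 1 stored a valid eventsel for fixed counters too, this decoding is meaningful for every kvm_pmc, which is what lets the next patch unify the fixed and gp lookup paths.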
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 8 +-------
 arch/x86/kvm/pmu.h           | 3 +--
 arch/x86/kvm/svm/pmu.c       | 8 ++++----
 arch/x86/kvm/vmx/pmu_intel.c | 9 +++++----
 4 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 09873f6488f7..3b3ccf5b1106 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -174,7 +174,6 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	unsigned config, type = PERF_TYPE_RAW;
-	u8 event_select, unit_mask;
 	struct kvm *kvm = pmc->vcpu->kvm;
 	struct kvm_pmu_event_filter *filter;
 	int i;
@@ -206,17 +205,12 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	if (!allow_event)
 		return;
 
-	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
-	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
-
 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
 			  ARCH_PERFMON_EVENTSEL_INV |
 			  ARCH_PERFMON_EVENTSEL_CMASK |
 			  HSW_IN_TX |
 			  HSW_IN_TX_CHECKPOINTED))) {
-		config = kvm_x86_ops.pmu_ops->find_arch_event(pmc_to_pmu(pmc),
-							      event_select,
-							      unit_mask);
+		config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
 		if (config != PERF_COUNT_HW_MAX)
 			type = PERF_TYPE_HARDWARE;
 	}
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 59d6b76203d5..dd7dbb1c5048 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -24,8 +24,7 @@ struct kvm_event_hw_type_mapping {
 };
 
 struct kvm_pmu_ops {
-	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
-				    u8 unit_mask);
+	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
 	unsigned (*find_fixed_event)(int idx);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 871c426ec389..3c00a34457d7 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -134,10 +134,10 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return &pmu->gp_counters[msr_to_index(msr)];
 }
 
-static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
-				    u8 event_select,
-				    u8 unit_mask)
+static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
+	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
@@ -319,7 +319,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops = {
-	.find_arch_event = amd_find_arch_event,
+	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
 	.find_fixed_event = amd_find_fixed_event,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b7ab5fd03681..67a0188ecdc5 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,10 +68,11 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 		reprogram_counter(pmu, bit);
 }
 
-static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
-				      u8 event_select,
-				      u8 unit_mask)
+static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
@@ -719,7 +720,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops = {
-	.find_arch_event = intel_find_arch_event,
+	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
 	.find_fixed_event = intel_find_fixed_event,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,

From patchwork Fri Nov 19 06:48:55 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12628319
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/4] KVM: x86/pmu: Reuse pmc_perf_hw_id() and drop find_fixed_event()
Date: Fri, 19 Nov 2021 14:48:55 +0800
Message-Id: <20211119064856.77948-4-likexu@tencent.com>
In-Reply-To: <20211119064856.77948-1-likexu@tencent.com>
References: <20211119064856.77948-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

Since we set the same semantic event value for a fixed counter in pmc->eventsel, returning the perf_hw_id for a fixed counter via find_fixed_event() can be painlessly replaced by pmc_perf_hw_id() with the help of a pmc_is_fixed() check.
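The unified lookup shape after this patch can be sketched in isolation as follows. The event codes, table contents, and the PERF_COUNT_HW_MAX value here are illustrative stand-ins rather than the kernel's actual tables; only the control flow (fixed path short-circuits to an index lookup, gp path scans an eventsel/unit_mask table, misses return the sentinel) mirrors the patch:

```c
#include <stdint.h>
#include <stddef.h>

#define PERF_COUNT_HW_MAX 10 /* sentinel: no generic perf_hw_id match */

struct event_map {
	uint8_t eventsel;
	uint8_t unit_mask;
	unsigned int event_type;
};

/* Illustrative two-entry stand-in for intel_arch_events. */
static const struct event_map events[] = {
	{ 0x3c, 0x00, 0 /* e.g. PERF_COUNT_HW_CPU_CYCLES   */ },
	{ 0xc0, 0x00, 1 /* e.g. PERF_COUNT_HW_INSTRUCTIONS */ },
};

/* One entry point now serves both counter kinds. */
static unsigned int perf_hw_id(int is_fixed, int fixed_idx,
			       uint64_t eventsel)
{
	size_t i;

	if (is_fixed) {
		/* fixed counter: bounds-checked index lookup */
		if (fixed_idx < 0 || (size_t)fixed_idx >= 2)
			return PERF_COUNT_HW_MAX;
		return events[fixed_idx].event_type;
	}

	/* gp counter: scan by decoded event select and unit mask */
	uint8_t es = eventsel & 0xff;
	uint8_t um = (eventsel >> 8) & 0xff;

	for (i = 0; i < sizeof(events) / sizeof(events[0]); i++)
		if (events[i].eventsel == es && events[i].unit_mask == um)
			return events[i].event_type;

	return PERF_COUNT_HW_MAX;
}
```

This is why find_fixed_event() can be dropped from kvm_pmu_ops: the fixed case is just an early branch inside the one remaining callback, and AMD's implementation can simply WARN and return the sentinel since it has no fixed counters.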
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           |  2 +-
 arch/x86/kvm/pmu.h           |  1 -
 arch/x86/kvm/svm/pmu.c       | 11 ++++-------
 arch/x86/kvm/vmx/pmu_intel.c | 29 ++++++++++++++++-------------
 4 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3b3ccf5b1106..b7a1ae28ab87 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -262,7 +262,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	pmc->current_config = (u64)ctrl;
 	pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE,
-			      kvm_x86_ops.pmu_ops->find_fixed_event(idx),
+			      kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc),
 			      !(en_field & 0x2), /* exclude user */
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index dd7dbb1c5048..c91d9725aafd 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -25,7 +25,6 @@ struct kvm_event_hw_type_mapping {
 
 struct kvm_pmu_ops {
 	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
-	unsigned (*find_fixed_event)(int idx);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 3c00a34457d7..da8aa1e5bff0 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -140,6 +140,10 @@ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
+	/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
+	if (WARN_ON(pmc_is_fixed(pmc)))
+		return PERF_COUNT_HW_MAX;
+
 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
 		if (amd_event_mapping[i].eventsel == event_select
 		    && amd_event_mapping[i].unit_mask == unit_mask)
@@ -151,12 +155,6 @@ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	return amd_event_mapping[i].event_type;
 }
 
-/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-static unsigned amd_find_fixed_event(int idx)
-{
-	return PERF_COUNT_HW_MAX;
-}
-
 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
  * AMD CPU doesn't have global_ctrl MSR, all PMCs are enabled (return TRUE).
  */
@@ -320,7 +318,6 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 
 struct kvm_pmu_ops amd_pmu_ops = {
 	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
-	.find_fixed_event = amd_find_fixed_event,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 67a0188ecdc5..72db4ffb7eb2 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,6 +68,19 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 		reprogram_counter(pmu, bit);
 }
 
+static inline unsigned int intel_find_fixed_event(int idx)
+{
+	u32 event;
+	size_t size = ARRAY_SIZE(fixed_pmc_events);
+
+	if (idx >= size)
+		return PERF_COUNT_HW_MAX;
+
+	event = fixed_pmc_events[array_index_nospec(idx, size)];
+	return intel_arch_events[event].event_type;
+}
+
+
 static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -75,6 +88,9 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
+	if (pmc_is_fixed(pmc))
+		return intel_find_fixed_event(pmc->idx - INTEL_PMC_IDX_FIXED);
+
 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
 		if (intel_arch_events[i].eventsel == event_select
 		    && intel_arch_events[i].unit_mask == unit_mask
@@ -87,18 +103,6 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	return intel_arch_events[i].event_type;
 }
 
-static unsigned intel_find_fixed_event(int idx)
-{
-	u32 event;
-	size_t size = ARRAY_SIZE(fixed_pmc_events);
-
-	if (idx >= size)
-		return PERF_COUNT_HW_MAX;
-
-	event = fixed_pmc_events[array_index_nospec(idx, size)];
-	return intel_arch_events[event].event_type;
-}
-
 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */
 static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
 {
@@ -721,7 +725,6 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 
 struct kvm_pmu_ops intel_pmu_ops = {
 	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
-	.find_fixed_event = intel_find_fixed_event,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,

From patchwork Fri Nov 19 06:48:56 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12628321
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/4] KVM: x86/pmu: Add pmc->intr to refactor kvm_perf_overflow{_intr}()
Date: Fri, 19 Nov 2021 14:48:56 +0800
Message-Id: <20211119064856.77948-5-likexu@tencent.com>
In-Reply-To: <20211119064856.77948-1-likexu@tencent.com>
References: <20211119064856.77948-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

Depending on whether an interrupt should be triggered, KVM registers one of two different event overflow callbacks in the perf_event context. The code skeletons of these two functions are very similar, so intr can be stored into pmc->intr by pmc_reprogram_counter(), which leaves a smaller instruction footprint for the micro-architecture's branch predictor.

Suggested-by: Paolo Bonzini
Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              | 48 ++++++++++++++-------------------
 2 files changed, 21 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1fcb345bc107..9fa63499e77e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -496,6 +496,7 @@ struct kvm_pmc {
 	 */
 	u64 current_config;
 	bool is_paused;
+	bool intr;
 };
 
 struct kvm_pmu {
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b7a1ae28ab87..9b52f18f56e0 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -62,36 +62,28 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 
-	if (!test_and_set_bit(pmc->idx, pmu->reprogram_pmi)) {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
-		kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
-	}
-}
+	/* Ignore counters that have been reprogrammed already. */
+	if (test_and_set_bit(pmc->idx, pmu->reprogram_pmi))
+		return;
 
-static void kvm_perf_overflow_intr(struct perf_event *perf_event,
-				   struct perf_sample_data *data,
-				   struct pt_regs *regs)
-{
-	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
+	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 
-	if (!test_and_set_bit(pmc->idx, pmu->reprogram_pmi)) {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
-		kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
+	if (!pmc->intr)
+		return;
 
-		/*
-		 * Inject PMI. If vcpu was in a guest mode during NMI PMI
-		 * can be ejected on a guest mode re-entry. Otherwise we can't
-		 * be sure that vcpu wasn't executing hlt instruction at the
-		 * time of vmexit and is not going to re-enter guest mode until
-		 * woken up. So we should wake it, but this is impossible from
-		 * NMI context. Do it from irq work instead.
-		 */
-		if (!kvm_is_in_guest())
-			irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
-		else
-			kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
-	}
+	/*
+	 * Inject PMI. If vcpu was in a guest mode during NMI PMI
+	 * can be ejected on a guest mode re-entry. Otherwise we can't
+	 * be sure that vcpu wasn't executing hlt instruction at the
+	 * time of vmexit and is not going to re-enter guest mode until
+	 * woken up. So we should wake it, but this is impossible from
+	 * NMI context. Do it from irq work instead.
+	 */
+	if (!kvm_is_in_guest())
+		irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
+	else
+		kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
 }
 
 static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
@@ -126,7 +118,6 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 	}
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,
-						 intr ? kvm_perf_overflow_intr :
 						 kvm_perf_overflow, pmc);
 	if (IS_ERR(event)) {
 		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
@@ -138,6 +129,7 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 	pmc_to_pmu(pmc)->event_count++;
 	clear_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
 	pmc->is_paused = false;
+	pmc->intr = intr;
 }
 
 static void pmc_pause_counter(struct kvm_pmc *pmc)