From patchwork Wed May 18 13:25:02 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12853666
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 01/11] KVM: x86/pmu: Update comments for AMD gp counters
Date: Wed, 18 May 2022 21:25:02 +0800
Message-Id: <20220518132512.37864-2-likexu@tencent.com>
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>

The obsolete comment could more accurately state that AMD platforms have
two base MSR addresses and two different maximum numbers for gp counters,
depending on the X86_FEATURE_PERFCTR_CORE feature.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b5d0c36b869b..3e200b9610f9 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -37,7 +37,9 @@ EXPORT_SYMBOL_GPL(kvm_pmu_cap);
  *        However AMD doesn't support fixed-counters;
  * - There are three types of index to access perf counters (PMC):
  *   1. MSR (named msr): For example Intel has MSR_IA32_PERFCTRn and AMD
- *      has MSR_K7_PERFCTRn.
+ *      has MSR_K7_PERFCTRn and, for families 15H and later,
+ *      MSR_F15H_PERF_CTRn, where MSR_F15H_PERF_CTR[0-3] are
+ *      aliased to MSR_K7_PERFCTRn.
  *   2. MSR Index (named idx): This normally is used by RDPMC instruction.
  *      For instance AMD RDPMC instruction uses 0000_0003h in ECX to access
  *      C001_0007h (MSR_K7_PERCTR3). Intel has a similar mechanism, except
@@ -49,7 +51,8 @@ EXPORT_SYMBOL_GPL(kvm_pmu_cap);
  *        between pmc and perf counters is as the following:
  *        * Intel: [0 .. INTEL_PMC_MAX_GENERIC-1] <=> gp counters
  *                 [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2] <=> fixed
- *        * AMD:   [0 .. AMD64_NUM_COUNTERS-1] <=> gp counters
+ *        * AMD:   [0 .. AMD64_NUM_COUNTERS-1] and, for families 15H
+ *          and later, [0 .. AMD64_NUM_COUNTERS_CORE-1] <=> gp counters
  */
 static struct kvm_pmu_ops kvm_pmu_ops __read_mostly;

From patchwork Wed May 18 13:25:03 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12853667
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 02/11] KVM: x86/pmu: Extract check_pmu_event_filter() from the same semantics
Date: Wed, 18 May 2022 21:25:03 +0800
Message-Id: <20220518132512.37864-3-likexu@tencent.com>
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>

Checking the kvm->arch.pmu_event_filter policy in both gp and fixed code
paths was somewhat redundant, so common parts can be extracted, which
reduces code footprint and improves readability.
Signed-off-by: Like Xu
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/pmu.c | 63 +++++++++++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3e200b9610f9..f189512207db 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -240,14 +240,44 @@ static int cmp_u64(const void *a, const void *b)
 	return *(__u64 *)a - *(__u64 *)b;
 }
 
+static bool check_pmu_event_filter(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu_event_filter *filter;
+	struct kvm *kvm = pmc->vcpu->kvm;
+	bool allow_event = true;
+	__u64 key;
+	int idx;
+
+	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
+	if (!filter)
+		goto out;
+
+	if (pmc_is_gp(pmc)) {
+		key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;
+		if (bsearch(&key, filter->events, filter->nevents,
+			    sizeof(__u64), cmp_u64))
+			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
+		else
+			allow_event = filter->action == KVM_PMU_EVENT_DENY;
+	} else {
+		idx = pmc->idx - INTEL_PMC_IDX_FIXED;
+		if (filter->action == KVM_PMU_EVENT_DENY &&
+		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
+			allow_event = false;
+		if (filter->action == KVM_PMU_EVENT_ALLOW &&
+		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
+			allow_event = false;
+	}
+
+out:
+	return allow_event;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	u64 config;
 	u32 type = PERF_TYPE_RAW;
-	struct kvm *kvm = pmc->vcpu->kvm;
-	struct kvm_pmu_event_filter *filter;
-	struct kvm_pmu *pmu = vcpu_to_pmu(pmc->vcpu);
-	bool allow_event = true;
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
@@ -259,17 +289,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc))
 		return;
 
-	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
-	if (filter) {
-		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
-
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
-	}
-	if (!allow_event)
+	if (!check_pmu_event_filter(pmc))
 		return;
 
 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
@@ -302,23 +322,14 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 {
 	unsigned en_field = ctrl & 0x3;
 	bool pmi = ctrl & 0x8;
-	struct kvm_pmu_event_filter *filter;
-	struct kvm *kvm = pmc->vcpu->kvm;
 
 	pmc_pause_counter(pmc);
 
 	if (!en_field || !pmc_is_enabled(pmc))
 		return;
 
-	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
-	if (filter) {
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			return;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			return;
-	}
+	if (!check_pmu_event_filter(pmc))
+		return;
 
 	if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc))
 		return;

From patchwork Wed May 18 13:25:04 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12853668
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 03/11] KVM: x86/pmu: Protect kvm->arch.pmu_event_filter with SRCU
Date: Wed, 18 May 2022 21:25:04 +0800
Message-Id: <20220518132512.37864-4-likexu@tencent.com>
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>

Similar to "kvm->arch.msr_filter", KVM should guarantee that vCPUs will
see either the previous filter or the new filter when user space calls
the KVM_SET_PMU_EVENT_FILTER ioctl with the vCPU running, so that guest
pmu events with identical settings in both the old and new filter have
deterministic behavior.

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Like Xu
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/pmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index f189512207db..24624654e476 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -246,8 +246,9 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	struct kvm *kvm = pmc->vcpu->kvm;
 	bool allow_event = true;
 	__u64 key;
-	int idx;
+	int idx, srcu_idx;
 
+	srcu_idx = srcu_read_lock(&kvm->srcu);
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
 		goto out;
@@ -270,6 +271,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	}
 
 out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return allow_event;
 }

From patchwork Wed May 18 13:25:05 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12853670
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 04/11] KVM: x86/pmu: Pass only "struct kvm_pmc *pmc" to reprogram_counter()
Date: Wed, 18 May 2022 21:25:05 +0800
Message-Id: <20220518132512.37864-5-likexu@tencent.com>
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>

Passing the reference "struct kvm_pmc *pmc" when creating pmc->perf_event
is sufficient. This change helps to simplify the calling convention by
replacing reprogram_{gp, fixed}_counter() with reprogram_counter()
seamlessly.

No functional change intended.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 17 +++++------------
 arch/x86/kvm/pmu.h           |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c | 32 ++++++++++++++++++--------------
 3 files changed, 24 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 24624654e476..ba767b4921e3 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -347,18 +347,13 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 }
 EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
-void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
+void reprogram_counter(struct kvm_pmc *pmc)
 {
-	struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, pmc_idx);
-
-	if (!pmc)
-		return;
-
 	if (pmc_is_gp(pmc))
 		reprogram_gp_counter(pmc, pmc->eventsel);
 	else {
-		int idx = pmc_idx - INTEL_PMC_IDX_FIXED;
-		u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx);
+		int idx = pmc->idx - INTEL_PMC_IDX_FIXED;
+		u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx);
 
 		reprogram_fixed_counter(pmc, ctrl, idx);
 	}
@@ -377,8 +372,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 			clear_bit(bit, pmu->reprogram_pmi);
 			continue;
 		}
-
-		reprogram_counter(pmu, bit);
+		reprogram_counter(pmc);
 	}
 
 	/*
@@ -551,13 +545,12 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u64 prev_count;
 
 	prev_count = pmc->counter;
 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
 
-	reprogram_counter(pmu, pmc->idx);
+	reprogram_counter(pmc);
 	if (pmc->counter < prev_count)
 		__kvm_perf_overflow(pmc, false);
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index dbf4c83519a4..0fd2518227f7 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -174,7 +174,7 @@ static inline void kvm_init_pmu_capability(void)
 
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
-void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
+void reprogram_counter(struct kvm_pmc *pmc);
 
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 84b326c4dce9..33448482db50 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -56,16 +56,32 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 	pmu->fixed_ctr_ctrl = data;
 }
 
+static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+{
+	if (pmc_idx < INTEL_PMC_IDX_FIXED) {
+		return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
+				  MSR_P6_EVNTSEL0);
+	} else {
+		u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
+
+		return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
+	}
+}
+
 /* function is called when global control register has been updated. */
 static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 {
 	int bit;
 	u64 diff = pmu->global_ctrl ^ data;
+	struct kvm_pmc *pmc;
 
 	pmu->global_ctrl = data;
 
-	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
-		reprogram_counter(pmu, bit);
+	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
+		pmc = intel_pmc_idx_to_pmc(pmu, bit);
+		if (pmc)
+			reprogram_counter(pmc);
+	}
 }
 
 static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
@@ -101,18 +117,6 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }
 
-static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-	if (pmc_idx < INTEL_PMC_IDX_FIXED)
-		return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
-				  MSR_P6_EVNTSEL0);
-	else {
-		u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
-
-		return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
-	}
-}
-
 static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

From patchwork Wed May 18 13:25:06 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12853669
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 05/11] KVM: x86/pmu: Drop "u64 eventsel" for reprogram_gp_counter()
Date: Wed, 18 May 2022 21:25:06 +0800
Message-Id: <20220518132512.37864-6-likexu@tencent.com>
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>

Since reprogram_gp_counter() always assigns the requested eventsel to
pmc->eventsel, this assignment can be moved up into the callers,
simplifying the function's parameter list to just "struct kvm_pmc *pmc".

No functional change intended.
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 7 +++---- arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/svm/pmu.c | 6 ++++-- arch/x86/kvm/vmx/pmu_intel.c | 3 ++- 4 files changed, 10 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index ba767b4921e3..cbffa060976e 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -275,17 +275,16 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel) +void reprogram_gp_counter(struct kvm_pmc *pmc) { u64 config; u32 type = PERF_TYPE_RAW; struct kvm_pmu *pmu = pmc_to_pmu(pmc); + u64 eventsel = pmc->eventsel; if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) printk_once("kvm pmu: pin control bit is ignored\n"); - pmc->eventsel = eventsel; - pmc_pause_counter(pmc); if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) @@ -350,7 +349,7 @@ EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmc *pmc) { if (pmc_is_gp(pmc)) - reprogram_gp_counter(pmc, pmc->eventsel); + reprogram_gp_counter(pmc); else { int idx = pmc->idx - INTEL_PMC_IDX_FIXED; u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx); diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 0fd2518227f7..56204f5a545d 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -172,7 +172,7 @@ static inline void kvm_init_pmu_capability(void) KVM_PMC_MAX_FIXED); } -void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel); +void reprogram_gp_counter(struct kvm_pmc *pmc); void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx); void reprogram_counter(struct kvm_pmc *pmc); diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index 47e8eaca1e90..fa4539e470b3 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -285,8 +285,10 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL); if (pmc) 
{ data &= ~pmu->reserved_bits; - if (data != pmc->eventsel) - reprogram_gp_counter(pmc, data); + if (data != pmc->eventsel) { + pmc->eventsel = data; + reprogram_gp_counter(pmc); + } return 0; } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 33448482db50..2bfca470d5fd 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -484,7 +484,8 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) (pmu->raw_event_mask & HSW_IN_TX_CHECKPOINTED)) reserved_bits ^= HSW_IN_TX_CHECKPOINTED; if (!(data & reserved_bits)) { - reprogram_gp_counter(pmc, data); + pmc->eventsel = data; + reprogram_gp_counter(pmc); return 0; } } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false)) From patchwork Wed May 18 13:25:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12853671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6696EC433FE for ; Wed, 18 May 2022 13:25:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237859AbiERNZw (ORCPT ); Wed, 18 May 2022 09:25:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47634 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237864AbiERNZm (ORCPT ); Wed, 18 May 2022 09:25:42 -0400 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E609C1BB98D; Wed, 18 May 2022 06:25:33 -0700 (PDT) Received: by mail-pf1-x432.google.com with SMTP id x143so2112768pfc.11; Wed, 18 May 2022 06:25:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH RESEND v3 06/11] KVM: x86/pmu: Drop
"u8 ctrl, int idx" for reprogram_fixed_counter() Date: Wed, 18 May 2022 21:25:07 +0800 Message-Id: <20220518132512.37864-7-likexu@tencent.com> From: Like Xu

Once reprogram_fixed_counter() is called, it is bound to assign the requested fixed_ctr_ctrl to pmu->fixed_ctr_ctrl, so this assignment step can be moved forward (saving the stale value needed for the diff check just beforehand), thus simplifying the passing of parameters. No functional change intended.

Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 13 ++++++------- arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++-------- 3 files changed, 15 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index cbffa060976e..131fbab612ca 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -319,8 +319,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc) } EXPORT_SYMBOL_GPL(reprogram_gp_counter); -void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx) +void reprogram_fixed_counter(struct kvm_pmc *pmc) { + struct kvm_pmu *pmu = pmc_to_pmu(pmc); + int idx = pmc->idx - INTEL_PMC_IDX_FIXED; + u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); unsigned en_field = ctrl & 0x3; bool pmi = ctrl & 0x8; @@ -350,12 +353,8 @@ void reprogram_counter(struct kvm_pmc *pmc) { if (pmc_is_gp(pmc)) reprogram_gp_counter(pmc); - else { - int idx = pmc->idx - INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx); - - reprogram_fixed_counter(pmc, ctrl, idx); - } + else + reprogram_fixed_counter(pmc); } EXPORT_SYMBOL_GPL(reprogram_counter); diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 56204f5a545d..8d7912978249 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -173,7 +173,7 @@ static inline void
kvm_init_pmu_capability(void) } void reprogram_gp_counter(struct kvm_pmc *pmc); -void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx); +void reprogram_fixed_counter(struct kvm_pmc *pmc); void reprogram_counter(struct kvm_pmc *pmc); void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 2bfca470d5fd..5e10a1ef435d 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -37,23 +37,23 @@ static int fixed_pmc_events[] = {1, 0, 7}; static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) { + struct kvm_pmc *pmc; + u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl; int i; + pmu->fixed_ctr_ctrl = data; for (i = 0; i < pmu->nr_arch_fixed_counters; i++) { u8 new_ctrl = fixed_ctrl_field(data, i); - u8 old_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, i); - struct kvm_pmc *pmc; - - pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); + u8 old_ctrl = fixed_ctrl_field(old_fixed_ctr_ctrl, i); if (old_ctrl == new_ctrl) continue; - __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); - reprogram_fixed_counter(pmc, new_ctrl, i); - } + pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); - pmu->fixed_ctr_ctrl = data; + __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); + reprogram_fixed_counter(pmc); + } }

From patchwork Wed May 18 13:25:08 2022
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH RESEND v3 07/11] KVM: x86/pmu: Use only the uniform interface reprogram_counter() Date: Wed, 18 May 2022 21:25:08 +0800 Message-Id: <20220518132512.37864-8-likexu@tencent.com> From: Like Xu

Since reprogram_counter() and reprogram_{gp, fixed}_counter() now all take the same parameter "struct kvm_pmc *pmc", the callers can be simplified by using the uniform exported interface reprogram_counter(), which makes reprogram_{gp, fixed}_counter() static and eliminates their EXPORT_SYMBOL_GPL.
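The shape of the refactoring can be sketched outside the kernel as a hypothetical miniature — once every variant takes the same argument, callers go through one entry point and the gp/fixed split becomes a private implementation detail (the struct and helpers below are illustrative stand-ins for the KVM types, not the real code):

```c
#include <stdbool.h>

/* Toy stand-in for struct kvm_pmc: only the property the dispatch needs. */
struct pmc { bool is_gp; };

enum kind { KIND_GP = 1, KIND_FIXED = 2 };

/* These become static in the real code after this patch. */
static enum kind reprogram_gp(struct pmc *pmc)    { (void)pmc; return KIND_GP; }
static enum kind reprogram_fixed(struct pmc *pmc) { (void)pmc; return KIND_FIXED; }

/* The single interface callers see, mirroring reprogram_counter(). */
static enum kind reprogram(struct pmc *pmc)
{
	return pmc->is_gp ? reprogram_gp(pmc) : reprogram_fixed(pmc);
}
```

Hiding the split this way means a later patch can merge the two helpers without touching any caller.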
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 6 ++---- arch/x86/kvm/svm/pmu.c | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 4 ++-- 3 files changed, 5 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 131fbab612ca..c2f00f07fbd7 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -275,7 +275,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -void reprogram_gp_counter(struct kvm_pmc *pmc) +static void reprogram_gp_counter(struct kvm_pmc *pmc) { u64 config; u32 type = PERF_TYPE_RAW; @@ -317,9 +317,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc) !(eventsel & ARCH_PERFMON_EVENTSEL_OS), eventsel & ARCH_PERFMON_EVENTSEL_INT); } -EXPORT_SYMBOL_GPL(reprogram_gp_counter); -void reprogram_fixed_counter(struct kvm_pmc *pmc) +static void reprogram_fixed_counter(struct kvm_pmc *pmc) { struct kvm_pmu *pmu = pmc_to_pmu(pmc); int idx = pmc->idx - INTEL_PMC_IDX_FIXED; @@ -347,7 +346,6 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc) !(en_field & 0x1), /* exclude kernel */ pmi); } -EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmc *pmc) { diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index fa4539e470b3..b5ba846fee88 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -287,7 +287,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) data &= ~pmu->reserved_bits; if (data != pmc->eventsel) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); } return 0; } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 5e10a1ef435d..75aa2282ae93 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -52,7 +52,7 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); - reprogram_fixed_counter(pmc); + reprogram_counter(pmc); } } @@ 
-485,7 +485,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) reserved_bits ^= HSW_IN_TX_CHECKPOINTED; if (!(data & reserved_bits)) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); return 0; } } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false))

From patchwork Wed May 18 13:25:09 2022
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH RESEND v3 08/11] KVM: x86/pmu: Use PERF_TYPE_RAW to merge reprogram_{gp,fixed}counter() Date: Wed, 18 May 2022 21:25:09 +0800 Message-Id: <20220518132512.37864-9-likexu@tencent.com> From: Like Xu

The code sketch for reprogram_{gp,
fixed}_counter() is similar: the fixed counter uses the PERF_TYPE_HARDWARE type, while the gp counter can use either PERF_TYPE_HARDWARE or PERF_TYPE_RAW, depending on the pmc->eventsel value. After commit 761875634a5e ("KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs"), the pmc->eventsel of a fixed counter is also set up with the same semantic value and is not changed during guest runtime. The original reason for using the PERF_TYPE_HARDWARE type was to emulate a guest architectural PMU on a host without an architectural PMU (the Pentium 4), for which the guest vPMC needs to be reprogrammed using the kernel's generic perf_hw_id. But essentially, "the HARDWARE is just a convenience wrapper over RAW IIRC", as quoted from Peter Zijlstra. So in practice it is safe to use only the PERF_TYPE_RAW type to program both gp and fixed counters naturally in reprogram_counter(). To make the gp and fixed counters more semantically symmetrical, the selection of the EVENTSEL_{USER, OS, INT} bits is translated via fixed_ctr_ctrl before the pmc_reprogram_counter() call.
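The translation described above can be sketched standalone — a minimal illustration, not KVM code. The EVENTSEL bit positions and the fixed_ctrl_field() shift match the kernel definitions; fixed_ctrl_to_eventsel() is a hypothetical helper mirroring the pmc_is_fixed() branch this patch adds:

```c
#include <stdint.h>

/* 4-bit per-counter field of IA32_FIXED_CTR_CTRL, as in KVM's pmu.h. */
#define fixed_ctrl_field(ctrl, idx)  (((ctrl) >> ((idx) * 4)) & 0xf)

/* EVENTSEL bit positions from the architectural perfmon definitions. */
#define ARCH_PERFMON_EVENTSEL_USR  (1ULL << 16)
#define ARCH_PERFMON_EVENTSEL_OS   (1ULL << 17)
#define ARCH_PERFMON_EVENTSEL_INT  (1ULL << 20)

/* Map a fixed counter's control bits onto the equivalent gp EVENTSEL bits
 * so both counter types can share one PERF_TYPE_RAW programming path. */
static uint64_t fixed_ctrl_to_eventsel(uint8_t ctrl)
{
	uint64_t eventsel = 0;

	if (ctrl & 0x1)  /* count in ring 0 */
		eventsel |= ARCH_PERFMON_EVENTSEL_OS;
	if (ctrl & 0x2)  /* count in rings > 0 */
		eventsel |= ARCH_PERFMON_EVENTSEL_USR;
	if (ctrl & 0x8)  /* PMI on overflow */
		eventsel |= ARCH_PERFMON_EVENTSEL_INT;

	return eventsel;
}
```

For example, a fixed control field of 0x3 (count in all rings) yields OS|USR, exactly the exclude_user/exclude_kernel semantics the old reprogram_fixed_counter() derived from en_field.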
Cc: Peter Zijlstra Suggested-by: Jim Mattson Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 87 +++++++++++++--------------------------------- 1 file changed, 25 insertions(+), 62 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index c2f00f07fbd7..33bf08fc0282 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -275,85 +275,48 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -static void reprogram_gp_counter(struct kvm_pmc *pmc) +void reprogram_counter(struct kvm_pmc *pmc) { - u64 config; - u32 type = PERF_TYPE_RAW; struct kvm_pmu *pmu = pmc_to_pmu(pmc); u64 eventsel = pmc->eventsel; + u64 new_config = eventsel; + u8 fixed_ctr_ctrl; + + pmc_pause_counter(pmc); + + if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc)) + return; + + if (!check_pmu_event_filter(pmc)) + return; if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) printk_once("kvm pmu: pin control bit is ignored\n"); - pmc_pause_counter(pmc); - - if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) - return; - - if (!check_pmu_event_filter(pmc)) - return; - - if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE | - ARCH_PERFMON_EVENTSEL_INV | - ARCH_PERFMON_EVENTSEL_CMASK | - HSW_IN_TX | - HSW_IN_TX_CHECKPOINTED))) { - config = static_call(kvm_x86_pmu_pmc_perf_hw_id)(pmc); - if (config != PERF_COUNT_HW_MAX) - type = PERF_TYPE_HARDWARE; + if (pmc_is_fixed(pmc)) { + fixed_ctr_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - INTEL_PMC_IDX_FIXED); + if (fixed_ctr_ctrl & 0x1) + eventsel |= ARCH_PERFMON_EVENTSEL_OS; + if (fixed_ctr_ctrl & 0x2) + eventsel |= ARCH_PERFMON_EVENTSEL_USR; + if (fixed_ctr_ctrl & 0x8) + eventsel |= ARCH_PERFMON_EVENTSEL_INT; + new_config = (u64)fixed_ctr_ctrl; } - if (type == PERF_TYPE_RAW) - config = eventsel & pmu->raw_event_mask; - - if (pmc->current_config == eventsel && pmc_resume_counter(pmc)) + if (pmc->current_config == new_config && pmc_resume_counter(pmc)) return; pmc_release_perf_event(pmc); - 
pmc->current_config = eventsel; - pmc_reprogram_counter(pmc, type, config, + pmc->current_config = new_config; + pmc_reprogram_counter(pmc, PERF_TYPE_RAW, + (eventsel & pmu->raw_event_mask), !(eventsel & ARCH_PERFMON_EVENTSEL_USR), !(eventsel & ARCH_PERFMON_EVENTSEL_OS), eventsel & ARCH_PERFMON_EVENTSEL_INT); } - -static void reprogram_fixed_counter(struct kvm_pmc *pmc) -{ - struct kvm_pmu *pmu = pmc_to_pmu(pmc); - int idx = pmc->idx - INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); - unsigned en_field = ctrl & 0x3; - bool pmi = ctrl & 0x8; - - pmc_pause_counter(pmc); - - if (!en_field || !pmc_is_enabled(pmc)) - return; - - if (!check_pmu_event_filter(pmc)) - return; - - if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc)) - return; - - pmc_release_perf_event(pmc); - - pmc->current_config = (u64)ctrl; - pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE, - static_call(kvm_x86_pmu_pmc_perf_hw_id)(pmc), - !(en_field & 0x2), /* exclude user */ - !(en_field & 0x1), /* exclude kernel */ - pmi); -} - -void reprogram_counter(struct kvm_pmc *pmc) -{ - if (pmc_is_gp(pmc)) - reprogram_gp_counter(pmc); - else - reprogram_fixed_counter(pmc); -} EXPORT_SYMBOL_GPL(reprogram_counter); void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)

From patchwork Wed May 18 13:25:10 2022
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH RESEND v3 09/11] perf: x86/core: Add interface to query perfmon_event_map[] directly Date: Wed, 18 May 2022 21:25:10 +0800 Message-Id: <20220518132512.37864-10-likexu@tencent.com> From: Like Xu

Currently, we have [intel|knc|p4|p6]_perfmon_event_map on the Intel platforms and amd_[f17h]_perfmon_event_map on the AMD platforms. Early clumsy KVM code or other potential perf_event users may have hard-coded these perfmon_maps (e.g., arch/x86/kvm/svm/pmu.c), so it does not make sense to program a common hardware event based on the generic "enum perf_hw_id" when the two tables do not match. Let's provide an interface for callers outside the perf subsystem to get the counter config based on the perfmon_event_map currently in use, which also helps to save bytes.
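The proposed interface amounts to a bounds-checked lookup into whichever event map the pmu registered. A minimal user-space sketch under that assumption — the table contents below are illustrative placeholders, not the real perfmon_event_map entries:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for x86_pmu.event_map()/x86_pmu.max_events: index by a
 * generic perf_hw_id, return the raw EVENTSEL_{EVENT,UMASK} config. */
static const uint64_t demo_event_map[] = {
	0x003c,  /* e.g. cpu-cycles on Intel (event 0x3c, umask 0x00) */
	0x00c0,  /* e.g. instructions (event 0xc0, umask 0x00) */
};

static uint64_t get_hw_event_config(int hw_event)
{
	int max = (int)(sizeof(demo_event_map) / sizeof(demo_event_map[0]));

	/* The real perf_get_hw_event_config() additionally sanitizes the
	 * in-range index with array_index_nospec() against speculation. */
	if (hw_event >= 0 && hw_event < max)
		return demo_event_map[hw_event];

	return 0;  /* unknown event id */
}
```

Returning 0 for an out-of-range id mirrors the stub in perf_event.h for kernels built without the x86 PMU code.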
Cc: Peter Zijlstra Signed-off-by: Like Xu Acked-by: Peter Zijlstra (Intel) --- arch/x86/events/core.c | 11 +++++++++++ arch/x86/include/asm/perf_event.h | 6 ++++++ 2 files changed, 17 insertions(+) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index 7f1d10dbabc0..99cf67d63cf3 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -2997,3 +2997,14 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) cap->pebs_ept = x86_pmu.pebs_ept; } EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability); + +u64 perf_get_hw_event_config(int hw_event) +{ + int max = x86_pmu.max_events; + + if (hw_event < max) + return x86_pmu.event_map(array_index_nospec(hw_event, max)); + + return 0; +} +EXPORT_SYMBOL_GPL(perf_get_hw_event_config); diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index dc295b8c8def..396f0ce7a0f4 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -478,6 +478,7 @@ struct x86_pmu_lbr { }; extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap); +extern u64 perf_get_hw_event_config(int hw_event); extern void perf_check_microcode(void); extern void perf_clear_dirty_counters(void); extern int x86_perf_rdpmc_index(struct perf_event *event); @@ -487,6 +488,11 @@ static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) memset(cap, 0, sizeof(*cap)); } +static inline u64 perf_get_hw_event_config(int hw_event) +{ + return 0; +} + static inline void perf_events_lapic_init(void) { } static inline void perf_check_microcode(void) { } #endif

From patchwork Wed May 18 13:25:11 2022
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org Subject: [PATCH RESEND v3 10/11] KVM: x86/pmu: Replace pmc_perf_hw_id() with perf_get_hw_event_config() Date: Wed, 18 May 2022 21:25:11 +0800 Message-Id: <20220518132512.37864-11-likexu@tencent.com> From: Like Xu

With the help of perf_get_hw_event_config(), KVM can query the correct EVENTSEL_{EVENT, UMASK} pair of a kernel-generic hw event directly from the different *_perfmon_event_map[] tables by the kernel's pre-defined perf_hw_id. Also extend the bit range of the comparison field to AMD64_RAW_EVENT_MASK_NB to prevent AMD from defining EventSelect[11:8] in perfmon_event_map[] one day.
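The widened comparison can be illustrated standalone. The mask value below mirrors the kernel's AMD64_RAW_EVENT_MASK_NB (EventSelect[7:0] in bits 7:0, unit mask in bits 15:8, EventSelect[11:8] in bits 35:32); the sample selectors in the usage note are made up:

```c
#include <stdint.h>
#include <stdbool.h>

/* Event select [7:0] | unit mask [15:8] | extended event select [11:8]
 * at bits 35:32, as in the kernel's AMD64_RAW_EVENT_MASK_NB. */
#define AMD64_RAW_EVENT_MASK_NB  (0xFFULL | 0xFF00ULL | (0xFULL << 32))

/* Two selectors match when they agree on every masked bit: a single XOR
 * replaces masking each side separately before comparing, and enable/OS/
 * USR/INT bits outside the mask are ignored. */
static bool eventsel_match(uint64_t eventsel, uint64_t config)
{
	return !((eventsel ^ config) & AMD64_RAW_EVENT_MASK_NB);
}
```

For instance, a programmed selector 0x4300c0 (event 0xc0 with EN/OS/USR set) still matches the bare map entry 0xc0, while 0xc1 does not.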
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 33bf08fc0282..7dc949f6a92c 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -517,13 +517,8 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc) static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc, unsigned int perf_hw_id) { - u64 old_eventsel = pmc->eventsel; - unsigned int config; - - pmc->eventsel &= (ARCH_PERFMON_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK); - config = static_call(kvm_x86_pmu_pmc_perf_hw_id)(pmc); - pmc->eventsel = old_eventsel; - return config == perf_hw_id; + return !((pmc->eventsel ^ perf_get_hw_event_config(perf_hw_id)) & + AMD64_RAW_EVENT_MASK_NB); } static inline bool cpl_is_matched(struct kvm_pmc *pmc)

From patchwork Wed May 18 13:25:12 2022
From: Like Xu To: Paolo Bonzini Cc: Sean Christopherson, Jim Mattson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v3 11/11] KVM: x86/pmu: Drop amd_event_mapping[] in
 the KVM context
Date: Wed, 18 May 2022 21:25:12 +0800
Message-Id: <20220518132512.37864-12-likexu@tencent.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220518132512.37864-1-likexu@tencent.com>
References: <20220518132512.37864-1-likexu@tencent.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

All gp or fixed counters have been reprogrammed using PERF_TYPE_RAW,
which means that the table mapping a perf_hw_id to an event select value
is no longer useful, at least for AMD.

For Intel, the logic that checks whether a pmu event reported via Intel
CPUID is unavailable is still required; pmc_perf_hw_id() is therefore
renamed to hw_event_is_unavail() and returns a bool, replacing the
"PERF_COUNT_HW_MAX+1" semantics.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  2 +-
 arch/x86/kvm/pmu.c                     |  6 +--
 arch/x86/kvm/pmu.h                     |  2 +-
 arch/x86/kvm/svm/pmu.c                 | 56 ++------------------------
 arch/x86/kvm/vmx/pmu_intel.c           | 11 ++---
 5 files changed, 12 insertions(+), 65 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index fdfd8e06fee6..227317bafb22 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -12,7 +12,7 @@ BUILD_BUG_ON(1)
  * a NULL definition, for example if "static_call_cond()" will be used
  * at the call sites.
  */
-KVM_X86_PMU_OP(pmc_perf_hw_id)
+KVM_X86_PMU_OP(hw_event_is_unavail)
 KVM_X86_PMU_OP(pmc_is_enabled)
 KVM_X86_PMU_OP(pmc_idx_to_pmc)
 KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 7dc949f6a92c..c01d66d237bb 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -151,9 +151,6 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 	};
 	bool pebs = test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);

-	if (type == PERF_TYPE_HARDWARE && config >= PERF_COUNT_HW_MAX)
-		return;
-
 	attr.sample_period = get_sample_period(pmc, pmc->counter);

 	if ((attr.config & HSW_IN_TX_CHECKPOINTED) &&
@@ -248,6 +245,9 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	__u64 key;
 	int idx, srcu_idx;

+	if (static_call(kvm_x86_pmu_hw_event_is_unavail)(pmc))
+		return false;
+
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 8d7912978249..1ad19c1949ad 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -30,7 +30,7 @@ struct kvm_event_hw_type_mapping {
 };

 struct kvm_pmu_ops {
-	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
+	bool (*hw_event_is_unavail)(struct kvm_pmc *pmc);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index b5ba846fee88..0c9f2e4b7b6b 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -33,34 +33,6 @@ enum index {
 	INDEX_ERROR,
 };

-/* duplicated from amd_perfmon_event_map, K7 and above should work. */
-static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
-	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
-	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
-	[2] = { 0x7d, 0x07, PERF_COUNT_HW_CACHE_REFERENCES },
-	[3] = { 0x7e, 0x07, PERF_COUNT_HW_CACHE_MISSES },
-	[4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
-	[5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
-	[6] = { 0xd0, 0x00, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
-	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
-};
-
-/* duplicated from amd_f17h_perfmon_event_map. */
-static struct kvm_event_hw_type_mapping amd_f17h_event_mapping[] = {
-	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
-	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
-	[2] = { 0x60, 0xff, PERF_COUNT_HW_CACHE_REFERENCES },
-	[3] = { 0x64, 0x09, PERF_COUNT_HW_CACHE_MISSES },
-	[4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
-	[5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
-	[6] = { 0x87, 0x02, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
-	[7] = { 0x87, 0x01, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
-};
-
-/* amd_pmc_perf_hw_id depends on these being the same size */
-static_assert(ARRAY_SIZE(amd_event_mapping) ==
-	      ARRAY_SIZE(amd_f17h_event_mapping));
-
 static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
@@ -154,31 +126,9 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return &pmu->gp_counters[msr_to_index(msr)];
 }

-static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+static bool amd_hw_event_is_unavail(struct kvm_pmc *pmc)
 {
-	struct kvm_event_hw_type_mapping *event_mapping;
-	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
-	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
-	int i;
-
-	/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-	if (WARN_ON(pmc_is_fixed(pmc)))
-		return PERF_COUNT_HW_MAX;
-
-	if (guest_cpuid_family(pmc->vcpu) >= 0x17)
-		event_mapping = amd_f17h_event_mapping;
-	else
-		event_mapping = amd_event_mapping;
-
-	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
-		if (event_mapping[i].eventsel == event_select
-		    && event_mapping[i].unit_mask == unit_mask)
-			break;
-
-	if (i == ARRAY_SIZE(amd_event_mapping))
-		return PERF_COUNT_HW_MAX;
-
-	return event_mapping[i].event_type;
+	return false;
 }

 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
@@ -344,7 +294,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }

 struct kvm_pmu_ops amd_pmu_ops __initdata = {
-	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
+	.hw_event_is_unavail = amd_hw_event_is_unavail,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 75aa2282ae93..6d24db41d8e0 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -84,7 +84,7 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 	}
 }

-static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
+static bool intel_hw_event_is_unavail(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
@@ -98,15 +98,12 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)

 		/* disable event that reported as not present by cpuid */
 		if ((i < 7) && !(pmu->available_event_types & (1 << i)))
-			return PERF_COUNT_HW_MAX + 1;
+			return true;
 		break;
 	}

-	if (i == ARRAY_SIZE(intel_arch_events))
-		return PERF_COUNT_HW_MAX;
-
-	return intel_arch_events[i].event_type;
+	return false;
 }

 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */
@@ -805,7 +802,7 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 }

 struct kvm_pmu_ops intel_pmu_ops __initdata = {
-	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
+	.hw_event_is_unavail = intel_hw_event_is_unavail,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,