From patchwork Mon Feb 21 11:51:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21C75C433F5 for ; Mon, 21 Feb 2022 11:52:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356842AbiBULwk (ORCPT ); Mon, 21 Feb 2022 06:52:40 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:49808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356815AbiBULwi (ORCPT ); Mon, 21 Feb 2022 06:52:38 -0500 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B12DD14092; Mon, 21 Feb 2022 03:52:15 -0800 (PST) Received: by mail-pj1-x102c.google.com with SMTP id cp23-20020a17090afb9700b001bbfe0fbe94so5465580pjb.3; Mon, 21 Feb 2022 03:52:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EYBik5vKaVIFlZo9J6mMazAMK3LoDIpMsWLZXB13uOk=; b=XUZEfY+Ss2SFYwmWtaY/gWqO79WtoY85K3q0Yp2TErFkdDmhWcGOaaMyUFRTQvr8Aq B7YIpwRSX+Kx9/3L4RrBhqEc617owD7vlcdsdkrpXEPZMkkK8x8P8S1GEApl3LWbRdXW 1bEzTbGuvKxZ2ME3F2BdteMkSfjJvj/IuR4jrsIMKbwGrYbRWAHOaiyo65xDElhRbn91 tk1xzLL2lVNKtKt2oRMccHdw7R7Uw7cmJUVm1PnaDDNfMoguZkMponcrbLpRzts/wtUE BUbfFZCqiGu/vRHO9Cc+T7kyWUQbHeXDTv9GJHyLaGeKMfgGpYHTu/LV1S95m8WsX/mQ m9Uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EYBik5vKaVIFlZo9J6mMazAMK3LoDIpMsWLZXB13uOk=; b=MN6pVRwMCCgpmQJtrxoTB4jRDO7QJYYgmcS/rtTXEtNM2gjfaeiLqpyXxYztR7zEJj 2dcXTRofuZgpKo7RE22jxF4qM1crIgJkwfT1jVl0ATPlCcGgFd2PdaitXlQAcFqaiSOI pJDZiL1h6WB3i5o7Bz//Hu83WZZ+pv9CX32AcaPRglLPe58N/RNCATn28RJ57Zn/0+8I tqYQBYxVJHBYrdiOxzfDMWnskdF2BvfXjNQHwkB4C0+0wdqZ9x36B+F+GADHxeFk5XUP KnkcselHNZ4H7nzmz41lpWHP8ojQHrCBwAKkL03eEVahPGhGv5bikmpPui0c2h87W0xp Yjbw== X-Gm-Message-State: AOAM53282V3IqOwPCoi13i+NSJ/InWVT180drozrBCqsMrksNkvfAlbJ 7jtmSwEXYI+/8pCTeNRqsSY= X-Google-Smtp-Source: ABdhPJzW8PYu6tcMgeUsv9MU18EFHVzlEfNHpTPotSZi8BK71ITydrRJUxb/b6zaRKichRFFjcR1GQ== X-Received: by 2002:a17:90b:33c4:b0:1b9:3aa6:e3e0 with SMTP id lk4-20020a17090b33c400b001b93aa6e3e0mr25615714pjb.182.1645444335308; Mon, 21 Feb 2022 03:52:15 -0800 (PST) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id z14sm13055011pfe.30.2022.02.21.03.52.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Feb 2022 03:52:15 -0800 (PST) From: Like Xu X-Google-Original-From: Like Xu To: Paolo Bonzini , Jim Mattson Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu Subject: [PATCH 01/11] KVM: x86/pmu: Update comments for AMD gp counters Date: Mon, 21 Feb 2022 19:51:51 +0800 Message-Id: <20220221115201.22208-2-likexu@tencent.com> X-Mailer: git-send-email 2.35.0 In-Reply-To: <20220221115201.22208-1-likexu@tencent.com> References: <20220221115201.22208-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: 
kvm@vger.kernel.org From: Like Xu The obsolete comment could more accurately state that AMD platforms have two base MSR addresses and two different maximum numbers for gp counters, depending on the X86_FEATURE_PERFCTR_CORE feature. Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index b1a02993782b..c4692f0ff87e 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -34,7 +34,7 @@ * However AMD doesn't support fixed-counters; * - There are three types of index to access perf counters (PMC): * 1. MSR (named msr): For example Intel has MSR_IA32_PERFCTRn and AMD - * has MSR_K7_PERFCTRn. + * has MSR_F15H_PERF_CTRn or MSR_K7_PERFCTRn. * 2. MSR Index (named idx): This normally is used by RDPMC instruction. * For instance AMD RDPMC instruction uses 0000_0003h in ECX to access * C001_0007h (MSR_K7_PERCTR3). Intel has a similar mechanism, except @@ -46,7 +46,8 @@ * between pmc and perf counters is as the following: * * Intel: [0 .. INTEL_PMC_MAX_GENERIC-1] <=> gp counters * [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2] <=> fixed - * * AMD: [0 .. AMD64_NUM_COUNTERS-1] <=> gp counters + * * AMD: [0 .. AMD64_NUM_COUNTERS-1] or + * [0 .. AMD64_NUM_COUNTERS_CORE-1] <=> gp counters */ static void kvm_pmi_trigger_fn(struct irq_work *irq_work) From patchwork Mon Feb 21 11:51:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 776B0C433EF for ; Mon, 21 Feb 2022 11:52:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356869AbiBULwo (ORCPT ); Mon, 21 Feb 2022 06:52:44 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:50056 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356864AbiBULwl (ORCPT ); Mon, 21 Feb 2022 06:52:41 -0500 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C46692BCA; Mon, 21 Feb 2022 03:52:18 -0800 (PST) Received: by mail-pg1-x535.google.com with SMTP id w37so7694594pga.7; Mon, 21 Feb 2022 03:52:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=zSEA5ShgmjgGbVmEyqJV29NO0XQwdAjkBZxc1SaUiQQ=; b=Sgspyi7sOpqLNdyg+nIZP72GED425zCInXpRNj/6hUQdLNCwfuCRk6QBffDoZx6uZ1 AI1EQ1WmTH8TAWLCb7HHfVOy9W2glGukahamjorERk+7buyr/qw7kBDL9yhBy4Fxsvte NmRdv5S4+XipvoGyGrQurTXRHFDs0CKOnintD6Hk16BUwlcgKlEdIEzn263cc89Of3t8 hmm3kCvHJ9KbXE6ekwIO8Lg3rnk1q05D79mzCbvBlmMKoItoPLBMWda01xXX4Y25mA12 MqlQuL1Ua0OH/7al2tacO10CzFAQMIcokKKx6ZjA1jUbFifAXv9ZjkIS64oTRzYG/Pie EPvg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zSEA5ShgmjgGbVmEyqJV29NO0XQwdAjkBZxc1SaUiQQ=; b=wNv/oK0XSTmYr/aC5AIkMbxJveooxy5VvSFRAbI1PIUI32ido/6MPgiPDG+YgUAZNe 5QTz3mZx2GSKGL3Fk+MHQG40FH74dTS9BOiuXGBZjs3ZBrYqnAPfENZvdggSq/vqs/ov Lvuf8XX6p+2mIos68EN3DRf1TPtDHFjHS2KJI+RS4Gip1dxJ15ky6TiQo9TMA+C/oUdV 
aP6WmrWoyXz/Vbjn9DARYCoqF9cpq1DU3UcQU+FPIbi+acN7BL4PnQrgKx62dlsrWI3X nANdHASpjF7Z1b7SnM8N5Vb1Xytoqp+E+e+J4sxSKeucP+jq1UdaPjo+zy+dYDdeUg03 c3+A== X-Gm-Message-State: AOAM531f3nfOkH9dorH7N+HDJWLRDcYi9kpfMtm2XCVz/7n69ZZQ8ou0 S/MIhL7BC2KAtN8UXslTY68= X-Google-Smtp-Source: ABdhPJzG6WWKh7gw6qQ/L7I8BzazjMEQUt6kzMEEQJfxMhxCxBK3fXsoANLtNNKYEg1C4QVO3frbAg== X-Received: by 2002:a63:8bc9:0:b0:372:c564:621b with SMTP id j192-20020a638bc9000000b00372c564621bmr16100579pge.601.1645444338316; Mon, 21 Feb 2022 03:52:18 -0800 (PST) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id z14sm13055011pfe.30.2022.02.21.03.52.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Feb 2022 03:52:18 -0800 (PST) From: Like Xu X-Google-Original-From: Like Xu To: Paolo Bonzini , Jim Mattson Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu Subject: [PATCH 02/11] KVM: x86/pmu: Extract check_pmu_event_filter() from the same semantics Date: Mon, 21 Feb 2022 19:51:52 +0800 Message-Id: <20220221115201.22208-3-likexu@tencent.com> X-Mailer: git-send-email 2.35.0 In-Reply-To: <20220221115201.22208-1-likexu@tencent.com> References: <20220221115201.22208-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Checking the kvm->arch.pmu_event_filter policy in both gp and fixed code paths was somewhat redundant, so common parts can be extracted, which reduces code footprint and improves readability. Signed-off-by: Like Xu Reviewed-by: Wanpeng Li --- arch/x86/kvm/pmu.c | 61 +++++++++++++++++++++++++++------------------- 1 file changed, 36 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index c4692f0ff87e..78527b118f72 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -180,13 +180,43 @@ static int cmp_u64(const void *a, const void *b) return *(__u64 *)a - *(__u64 *)b; } +static bool check_pmu_event_filter(struct kvm_pmc *pmc) +{ + struct kvm_pmu_event_filter *filter; + struct kvm *kvm = pmc->vcpu->kvm; + bool allow_event = true; + __u64 key; + int idx; + + filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu); + if (!filter) + goto out; + + if (pmc_is_gp(pmc)) { + key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB; + if (bsearch(&key, filter->events, filter->nevents, + sizeof(__u64), cmp_u64)) + allow_event = filter->action == KVM_PMU_EVENT_ALLOW; + else + allow_event = filter->action == KVM_PMU_EVENT_DENY; + } else { + idx = pmc->idx - INTEL_PMC_IDX_FIXED; + if (filter->action == KVM_PMU_EVENT_DENY && + test_bit(idx, (ulong *)&filter->fixed_counter_bitmap)) + allow_event = false; + if (filter->action == KVM_PMU_EVENT_ALLOW && + !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap)) + allow_event = false; + } + +out: + return allow_event; +} + void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel) { u64 config; u32 type = PERF_TYPE_RAW; - struct kvm *kvm = pmc->vcpu->kvm; - struct kvm_pmu_event_filter *filter; - bool allow_event = true; if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) printk_once("kvm pmu: pin control bit is ignored\n"); @@ -198,17 +228,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel) if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) return; - filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu); - if (filter) { - __u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB; - - if (bsearch(&key, filter->events, filter->nevents, 
- sizeof(__u64), cmp_u64)) - allow_event = filter->action == KVM_PMU_EVENT_ALLOW; - else - allow_event = filter->action == KVM_PMU_EVENT_DENY; - } - if (!allow_event) + if (!check_pmu_event_filter(pmc)) return; if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE | @@ -243,23 +263,14 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx) { unsigned en_field = ctrl & 0x3; bool pmi = ctrl & 0x8; - struct kvm_pmu_event_filter *filter; - struct kvm *kvm = pmc->vcpu->kvm; pmc_pause_counter(pmc); if (!en_field || !pmc_is_enabled(pmc)) return; - filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu); - if (filter) { - if (filter->action == KVM_PMU_EVENT_DENY && - test_bit(idx, (ulong *)&filter->fixed_counter_bitmap)) - return; - if (filter->action == KVM_PMU_EVENT_ALLOW && - !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap)) - return; - } + if (!check_pmu_event_filter(pmc)) + return; if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc)) return; From patchwork Mon Feb 21 11:51:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 514ADC433EF for ; Mon, 21 Feb 2022 11:52:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356922AbiBULwt (ORCPT ); Mon, 21 Feb 2022 06:52:49 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:50276 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356873AbiBULwp (ORCPT ); Mon, 21 Feb 2022 06:52:45 -0500 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C67201C90D; Mon, 21 Feb 2022 03:52:21 -0800 (PST) Received: by mail-pj1-x1029.google.com with SMTP id cp23-20020a17090afb9700b001bbfe0fbe94so5465924pjb.3; Mon, 21 Feb 2022 03:52:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=v+hL+FmCP5BA+6xKkzWlAfr/GV1nq1HZEPnxSmW/LTI=; b=gmGgtiVxWX/EkMag4f9zT2P7F/DsLFPX2vaMfjpE+Gv7GpkSDqGRK01DmwVP1m2+EQ 0C1Z5hwT+owi8Zs908fq6YITt0xn6rJIxcEVisUgQR/jmDth1MW3E/b7IWZJbcUhQaNE aIpqlGGXU4D1wMoz4M/q8DT39seTKyltLoBlpj8R7FlRTN7uwrxBLhy+gRBpE+GZzS+x sXbYhHip1euXi3U4iqqaPEAopHZ7SGCzecoGyP/yluqDXV8kD6lqZn+hETUlmSXqoZvC /K+NAlemTodI6xbGwMAb4sL7QUcTNJztUVpbLpqlYflGklgr9BixGxyuWUgPNqt8rmfH tbqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=v+hL+FmCP5BA+6xKkzWlAfr/GV1nq1HZEPnxSmW/LTI=; b=AUqjXmJYsQRy9eUA/+aMqvnYrVEILBebxTdSWyJlgeQ52kpKJ4JCADwTNCUEJRuZS7 Xo3uBzC7cZ/mhRt72xMVR0FC7AT40kctPqe3tsRE0GdkQ9WmUcopfI2gQSVY5TQ85Nal uEsFiBeT9OPYyVO6dN8BAjYlm7pQpdtIFCp5w555Yh/ura7c8slZX5Y/CAP9IbH5X7Bn 5zAVsJZkMkAQrVsC8p1UXnCf+iKgkO4dKP5cQUm4rPS0Q++6VHJSMRkui/Sz+sNatah0 2RU4cTHqf6ZALVEERKv9PhRy/aQ3CYb+vVxank50uEjYXw1sRkwChuRjL7sqs/lwO3g5 ilag== X-Gm-Message-State: AOAM532Kso4+aNYXto850MCXbUV/FBx+5WIgQWB70tRsmBFJrwKQ+DYI WoJ6lQZS0vDMrCRORlI2//A= X-Google-Smtp-Source: ABdhPJxLkG/V9yhrtPhvIi5culng1mP7SdyiXjax+Eu8P/1xRPvDUCkMdKnYM2gWEcfL3YyYU1Nthw== 
X-Received: by 2002:a17:902:d88a:b0:14f:1aca:d956 with SMTP id b10-20020a170902d88a00b0014f1acad956mr18213211plz.100.1645444341219; Mon, 21 Feb 2022 03:52:21 -0800 (PST) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id z14sm13055011pfe.30.2022.02.21.03.52.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Feb 2022 03:52:21 -0800 (PST) From: Like Xu X-Google-Original-From: Like Xu To: Paolo Bonzini , Jim Mattson Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu Subject: [PATCH 03/11] KVM: x86/pmu: Pass only "struct kvm_pmc *pmc" to reprogram_counter() Date: Mon, 21 Feb 2022 19:51:53 +0800 Message-Id: <20220221115201.22208-4-likexu@tencent.com> X-Mailer: git-send-email 2.35.0 In-Reply-To: <20220221115201.22208-1-likexu@tencent.com> References: <20220221115201.22208-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Passing the reference "struct kvm_pmc *pmc" when creating pmc->perf_event is sufficient. This change helps to simplify the calling convention by replacing reprogram_{gp, fixed}_counter() with reprogram_counter() seamlessly. No functional change intended. Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 17 +++++------------ arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 32 ++++++++++++++++++-------------- 3 files changed, 24 insertions(+), 27 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 78527b118f72..125bdfdbaa7a 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -286,18 +286,13 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx) } EXPORT_SYMBOL_GPL(reprogram_fixed_counter); -void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx) +void reprogram_counter(struct kvm_pmc *pmc) { - struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx); - - if (!pmc) - return; - if (pmc_is_gp(pmc)) reprogram_gp_counter(pmc, pmc->eventsel); else { - int idx = pmc_idx - INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); + int idx = pmc->idx - INTEL_PMC_IDX_FIXED; + u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx); reprogram_fixed_counter(pmc, ctrl, idx); } @@ -316,8 +311,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu) clear_bit(bit, pmu->reprogram_pmi); continue; } - - reprogram_counter(pmu, bit); + reprogram_counter(pmc); } /* @@ -503,13 +497,12 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu) static void kvm_pmu_incr_counter(struct kvm_pmc *pmc) { - struct kvm_pmu *pmu = pmc_to_pmu(pmc); u64 prev_count; prev_count = pmc->counter; pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc); - reprogram_counter(pmu, pmc->idx); + reprogram_counter(pmc); if (pmc->counter < prev_count) __kvm_perf_overflow(pmc, false); } diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 7a7b8d5b775e..b529c54dc309 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -142,7 +142,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value) void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel); void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx); -void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx); +void reprogram_counter(struct kvm_pmc *pmc); void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); void kvm_pmu_handle_event(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 
466d18fc0c5d..049ce5519fb5 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -56,16 +56,32 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) pmu->fixed_ctr_ctrl = data; } +static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx) +{ + if (pmc_idx < INTEL_PMC_IDX_FIXED) + return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx, + MSR_P6_EVNTSEL0); + else { + u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED; + + return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0); + } +} + /* function is called when global control register has been updated. */ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data) { int bit; u64 diff = pmu->global_ctrl ^ data; + struct kvm_pmc *pmc; pmu->global_ctrl = data; - for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) - reprogram_counter(pmu, bit); + for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) { + pmc = intel_pmc_idx_to_pmc(pmu, bit); + if (pmc) + reprogram_counter(pmc); + } } static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc) @@ -101,18 +117,6 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc) return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl); } -static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx) -{ - if (pmc_idx < INTEL_PMC_IDX_FIXED) - return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx, - MSR_P6_EVNTSEL0); - else { - u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED; - - return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0); - } -} - static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx) { struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); From patchwork Mon Feb 21 11:51:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2FD06C433EF for ; Mon, 21 Feb 2022 11:52:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356930AbiBULwz (ORCPT ); Mon, 21 Feb 2022 06:52:55 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:50472 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356918AbiBULwr (ORCPT ); Mon, 21 Feb 2022 06:52:47 -0500 Received: from mail-pg1-x52a.google.com (mail-pg1-x52a.google.com [IPv6:2607:f8b0:4864:20::52a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8EE741C90D; Mon, 21 Feb 2022 03:52:24 -0800 (PST) Received: by mail-pg1-x52a.google.com with SMTP id f8so14071733pgc.8; Mon, 21 Feb 2022 03:52:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FB1eMfSYvUVn+Sd4GmduBTaes5/j6Y5IcJ2hQSwAGKk=; b=AyiSYpQrW2zAwk6iAYZyxtr9h+kGAAF7cR7BsxHjKHLAqYpPtt5mSWMHKT6tjJHBTD riHOAj4bicbSth94d1YvRsg7sKhcjhO/2GurAgmv5yyG6FLMEZstLiFLc0u5ZOUH4Eh1 U13elNygg0Hs/cefn4VzhKTS++aU+OrMuYIBa1RKQRkEjbFogg1uJFqnr/a2iV/19v7q KdtAARNptQR0H8Lnu1zilLEjSURiRMgo2AQ4qlLMyyg4m8XYjO4RAbSKEvL+AsL4PPZW Q6bqOu4aCXq5EZ/dJO5Cmu6DhJz4R7IsHc0yJzYaxUQEvJz83JxX+e7BrT0507E09pXw A4gw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 04/11] KVM: x86/pmu: Drop "u64 eventsel" for reprogram_gp_counter()
Date: Mon, 21 Feb 2022 19:51:54 +0800
Message-Id: <20220221115201.22208-5-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

Because reprogram_gp_counter() is bound to assign the requested eventsel to pmc->eventsel anyway, this assignment step can be moved forward into the callers, thus simplifying the passing of parameters to "struct kvm_pmc *pmc" only.

No functional change intended.
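As an illustration of the resulting calling convention, here is a minimal sketch of an MSR-write path after this change (the helper name handle_eventsel_write() is made up for illustration; the pattern mirrors the svm/vmx set_msr hunks below):

/* Sketch: the caller stores the new event selector into the vPMC first,
 * then asks the common code to reprogram it, so no eventsel parameter
 * needs to be threaded through reprogram_gp_counter().
 */
static int handle_eventsel_write(struct kvm_pmc *pmc, u64 data)
{
	if (data == pmc->eventsel)
		return 0;

	pmc->eventsel = data;		/* update vPMC state first ...      */
	reprogram_gp_counter(pmc);	/* ... then the reprogram reads it  */
	return 0;
}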
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 7 +++---- arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/svm/pmu.c | 3 ++- arch/x86/kvm/vmx/pmu_intel.c | 3 ++- 4 files changed, 8 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 125bdfdbaa7a..482a78956dd0 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -213,16 +213,15 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel) +void reprogram_gp_counter(struct kvm_pmc *pmc) { u64 config; u32 type = PERF_TYPE_RAW; + u64 eventsel = pmc->eventsel; if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) printk_once("kvm pmu: pin control bit is ignored\n"); - pmc->eventsel = eventsel; - pmc_pause_counter(pmc); if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) @@ -289,7 +288,7 @@ EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmc *pmc) { if (pmc_is_gp(pmc)) - reprogram_gp_counter(pmc, pmc->eventsel); + reprogram_gp_counter(pmc); else { int idx = pmc->idx - INTEL_PMC_IDX_FIXED; u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx); diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index b529c54dc309..4db50c290c62 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -140,7 +140,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value) return sample_period; } -void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel); +void reprogram_gp_counter(struct kvm_pmc *pmc); void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx); void reprogram_counter(struct kvm_pmc *pmc); diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index 5aa45f13b16d..db839578e8be 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -265,7 +265,8 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) if (data == pmc->eventsel) return 0; if (!(data & pmu->reserved_bits)) { - reprogram_gp_counter(pmc, data); + pmc->eventsel = data; + reprogram_gp_counter(pmc); return 0; } } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 049ce5519fb5..1ed7d23d6738 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -448,7 +448,8 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) if (data == pmc->eventsel) return 0; if (!(data & pmu->reserved_bits)) { - reprogram_gp_counter(pmc, data); + pmc->eventsel = data; + reprogram_gp_counter(pmc); return 0; } } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false)) From patchwork Mon Feb 21 11:51:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA498C433F5 for ; Mon, 21 Feb 2022 11:52:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356937AbiBULxJ (ORCPT ); Mon, 21 Feb 2022 06:53:09 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:51040 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356935AbiBULw5 (ORCPT ); Mon, 21 Feb 2022 06:52:57 -0500 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 05/11] KVM: x86/pmu: Drop "u8 ctrl, int idx" for reprogram_fixed_counter()
Date: Mon, 21 Feb 2022 19:51:55 +0800
Message-Id: <20220221115201.22208-6-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

Since reprogram_fixed_counter() is bound to assign the requested fixed_ctr_ctrl to pmu->fixed_ctr_ctrl once it is called, this assignment step can be moved forward (the stale value needed for the diff is saved slightly earlier), thus simplifying the passing of parameters.

No functional change intended.
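For readability, a condensed sketch of the new ordering in the Intel reprogram_fixed_counters() caller (an outline of the pmu_intel.c hunk below, with explanatory comments added; not a separate new hunk):

static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
{
	u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl;	/* stale value kept for diffing */
	struct kvm_pmc *pmc;
	int i;

	pmu->fixed_ctr_ctrl = data;			/* publish the new value up front */
	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
		/* Only counters whose control field actually changed are touched. */
		if (fixed_ctrl_field(data, i) ==
		    fixed_ctrl_field(old_fixed_ctr_ctrl, i))
			continue;

		pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
		__set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
		reprogram_fixed_counter(pmc);		/* derives ctrl/idx from pmc itself */
	}
}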
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 13 ++++++------- arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++-------- 3 files changed, 15 insertions(+), 16 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 482a78956dd0..7c90d5d196a4 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -258,8 +258,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc) } EXPORT_SYMBOL_GPL(reprogram_gp_counter); -void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx) +void reprogram_fixed_counter(struct kvm_pmc *pmc) { + struct kvm_pmu *pmu = pmc_to_pmu(pmc); + int idx = pmc->idx - INTEL_PMC_IDX_FIXED; + u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); unsigned en_field = ctrl & 0x3; bool pmi = ctrl & 0x8; @@ -289,12 +292,8 @@ void reprogram_counter(struct kvm_pmc *pmc) { if (pmc_is_gp(pmc)) reprogram_gp_counter(pmc); - else { - int idx = pmc->idx - INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx); - - reprogram_fixed_counter(pmc, ctrl, idx); - } + else + reprogram_fixed_counter(pmc); } EXPORT_SYMBOL_GPL(reprogram_counter); diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 4db50c290c62..70a982c3cdad 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -141,7 +141,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value) } void reprogram_gp_counter(struct kvm_pmc *pmc); -void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx); +void reprogram_fixed_counter(struct kvm_pmc *pmc); void reprogram_counter(struct kvm_pmc *pmc); void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 1ed7d23d6738..cc4a092f0d67 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -37,23 +37,23 @@ static int fixed_pmc_events[] = {1, 0, 7}; static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) { + struct kvm_pmc *pmc; + u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl; int i; + pmu->fixed_ctr_ctrl = data; for (i = 0; i < pmu->nr_arch_fixed_counters; i++) { u8 new_ctrl = fixed_ctrl_field(data, i); - u8 old_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, i); - struct kvm_pmc *pmc; - - pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); + u8 old_ctrl = fixed_ctrl_field(old_fixed_ctr_ctrl, i); if (old_ctrl == new_ctrl) continue; - __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); - reprogram_fixed_counter(pmc, new_ctrl, i); - } + pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); - pmu->fixed_ctr_ctrl = data; + __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); + reprogram_fixed_counter(pmc); + } } static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx) From patchwork Mon Feb 21 11:51:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFB22C433F5 for ; Mon, 21 Feb 2022 11:52:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356982AbiBULxP (ORCPT ); Mon, 21 Feb 2022 06:53:15 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:51334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356940AbiBULxA (ORCPT ); Mon, 21 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 06/11] KVM: x86/pmu: Use only the uniformly exported interface reprogram_counter()
Date: Mon, 21 Feb 2022 19:51:56 +0800
Message-Id: <20220221115201.22208-7-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

Since reprogram_counter() and reprogram_{gp, fixed}_counter() now all take the same incoming parameter "struct kvm_pmc *pmc", the callers can be simplified to use the uniformly exported interface, which makes reprogram_{gp, fixed}_counter() static and eliminates their EXPORT_SYMBOL_GPL.
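For context, after this patch the only reprogramming entry point visible outside pmu.c is the dispatcher left in place by the earlier patches; callers no longer need to care whether a pmc is a gp or a fixed counter (a sketch of the resulting shape, not a new hunk):

/* The single exported entry point; the gp/fixed split becomes an
 * internal detail of pmu.c.
 */
void reprogram_counter(struct kvm_pmc *pmc)
{
	if (pmc_is_gp(pmc))
		reprogram_gp_counter(pmc);	/* static after this patch */
	else
		reprogram_fixed_counter(pmc);	/* static after this patch */
}
EXPORT_SYMBOL_GPL(reprogram_counter);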
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 6 ++---- arch/x86/kvm/pmu.h | 2 -- arch/x86/kvm/svm/pmu.c | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 4 ++-- 4 files changed, 5 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 7c90d5d196a4..5816af6b6494 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -213,7 +213,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -void reprogram_gp_counter(struct kvm_pmc *pmc) +static void reprogram_gp_counter(struct kvm_pmc *pmc) { u64 config; u32 type = PERF_TYPE_RAW; @@ -256,9 +256,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc) (eventsel & HSW_IN_TX), (eventsel & HSW_IN_TX_CHECKPOINTED)); } -EXPORT_SYMBOL_GPL(reprogram_gp_counter); -void reprogram_fixed_counter(struct kvm_pmc *pmc) +static void reprogram_fixed_counter(struct kvm_pmc *pmc) { struct kvm_pmu *pmu = pmc_to_pmu(pmc); int idx = pmc->idx - INTEL_PMC_IDX_FIXED; @@ -286,7 +285,6 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc) !(en_field & 0x1), /* exclude kernel */ pmi, false, false); } -EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmc *pmc) { diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 70a982c3cdad..201b99628423 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -140,8 +140,6 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value) return sample_period; } -void reprogram_gp_counter(struct kvm_pmc *pmc); -void reprogram_fixed_counter(struct kvm_pmc *pmc); void reprogram_counter(struct kvm_pmc *pmc); void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index db839578e8be..b264e8117be1 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -266,7 +266,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) return 0; if (!(data & pmu->reserved_bits)) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); return 0; } } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index cc4a092f0d67..a69d2aeb7526 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -52,7 +52,7 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); - reprogram_fixed_counter(pmc); + reprogram_counter(pmc); } } @@ -449,7 +449,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) return 0; if (!(data & pmu->reserved_bits)) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); return 0; } } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false)) From patchwork Mon Feb 21 11:51:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31932C433F5 for ; Mon, 21 Feb 2022 11:53:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1357003AbiBULxX (ORCPT ); Mon, 21 Feb 2022 06:53:23 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:51672 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356958AbiBULxI (ORCPT 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu, Peter Zijlstra
Subject: [PATCH 07/11] KVM: x86/pmu: Use PERF_TYPE_RAW to merge reprogram_{gp, fixed}counter()
Date: Mon, 21 Feb 2022 19:51:57 +0800
Message-Id: <20220221115201.22208-8-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

The code sketch for reprogram_{gp, fixed}_counter() is similar: the fixed counter uses the PERF_TYPE_HARDWARE type, while the gp counter uses either PERF_TYPE_HARDWARE or PERF_TYPE_RAW, depending on the pmc->eventsel value.

After commit 761875634a5e ("KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs"), the pmc->eventsel of a fixed counter is also set up with the same semantic value and is not changed during guest runtime.

But essentially, "the HARDWARE is just a convenience wrapper over RAW IIRC", quoted from Peterz. So it could be pretty safe to use only the PERF_TYPE_RAW type to program both gp and fixed counters naturally in reprogram_counter().
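To make the "HARDWARE is a convenience wrapper over RAW" point concrete, here is an illustrative comparison (not part of this patch; the raw config shown is just an example x86 encoding for retired instructions):

#include <linux/perf_event.h>

/* Both attrs below can end up programming the same hardware counter.
 * The RAW form carries the EVENTSEL/UMASK bits directly, which is
 * exactly what a guest writes into its vPMC, so KVM can pass the
 * guest's encoding through without a translation table.
 */
static struct perf_event_attr hw_attr = {
	.type	= PERF_TYPE_HARDWARE,
	.config	= PERF_COUNT_HW_INSTRUCTIONS,	/* generic id, mapped by the perf core */
};

static struct perf_event_attr raw_attr = {
	.type	= PERF_TYPE_RAW,
	.config	= 0x00c0,	/* example: event 0xC0, umask 0x00 ("instructions retired") */
};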
To make the gp and fixed counters more semantically symmetrical, the selection of EVENTSEL_{USER, OS, INT} bits is temporarily translated via fixed_ctr_ctrl before the pmc_reprogram_counter() call. Practically, this change drops the guest pmu support on the hosts without X86_FEATURE_ARCH_PERFMON (the oldest Pentium 4), where the PERF_TYPE_HARDWAR is intentionally introduced so that hosts can map the architectural guest PMU events to their own. Cc: Peter Zijlstra Suggested-by: Jim Mattson Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 106 +++++++++++------------------------ arch/x86/kvm/vmx/pmu_intel.c | 2 +- 2 files changed, 35 insertions(+), 73 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 5816af6b6494..edd51ec7711d 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -213,35 +213,44 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -static void reprogram_gp_counter(struct kvm_pmc *pmc) +static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc) { - u64 config; - u32 type = PERF_TYPE_RAW; - u64 eventsel = pmc->eventsel; + struct kvm_pmu *pmu = pmc_to_pmu(pmc); - if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) - printk_once("kvm pmu: pin control bit is ignored\n"); + if (pmc_is_fixed(pmc)) + return fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3; + + return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE; +} + +void reprogram_counter(struct kvm_pmc *pmc) +{ + struct kvm_pmu *pmu = pmc_to_pmu(pmc); + u64 eventsel = pmc->eventsel; + u8 fixed_ctr_ctrl; pmc_pause_counter(pmc); - if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) + if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc)) return; if (!check_pmu_event_filter(pmc)) return; - if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE | - ARCH_PERFMON_EVENTSEL_INV | - ARCH_PERFMON_EVENTSEL_CMASK | - HSW_IN_TX | - HSW_IN_TX_CHECKPOINTED))) { - config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc); - if (config != PERF_COUNT_HW_MAX) - type = PERF_TYPE_HARDWARE; - } + if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) + printk_once("kvm pmu: pin control bit is ignored\n"); - if (type == PERF_TYPE_RAW) - config = eventsel & AMD64_RAW_EVENT_MASK; + if (pmc_is_fixed(pmc)) { + fixed_ctr_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - INTEL_PMC_IDX_FIXED); + if (fixed_ctr_ctrl & 0x1) + eventsel |= ARCH_PERFMON_EVENTSEL_OS; + if (fixed_ctr_ctrl & 0x2) + eventsel |= ARCH_PERFMON_EVENTSEL_USR; + if (fixed_ctr_ctrl & 0x8) + eventsel |= ARCH_PERFMON_EVENTSEL_INT; + } if (pmc->current_config == eventsel && pmc_resume_counter(pmc)) return; @@ -249,49 +258,13 @@ static void reprogram_gp_counter(struct kvm_pmc *pmc) pmc_release_perf_event(pmc); pmc->current_config = eventsel; - pmc_reprogram_counter(pmc, type, config, - !(eventsel & ARCH_PERFMON_EVENTSEL_USR), - !(eventsel & ARCH_PERFMON_EVENTSEL_OS), - eventsel & ARCH_PERFMON_EVENTSEL_INT, - (eventsel & HSW_IN_TX), - (eventsel & HSW_IN_TX_CHECKPOINTED)); -} - -static void reprogram_fixed_counter(struct kvm_pmc *pmc) -{ - struct kvm_pmu *pmu = pmc_to_pmu(pmc); - int idx = pmc->idx - INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); - unsigned en_field = ctrl & 0x3; - bool pmi = ctrl & 0x8; - - pmc_pause_counter(pmc); - - if (!en_field || !pmc_is_enabled(pmc)) - return; - - if (!check_pmu_event_filter(pmc)) - return; - - if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc)) - return; - - pmc_release_perf_event(pmc); - - pmc->current_config = (u64)ctrl; - 
pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE, - kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc), - !(en_field & 0x2), /* exclude user */ - !(en_field & 0x1), /* exclude kernel */ - pmi, false, false); -} - -void reprogram_counter(struct kvm_pmc *pmc) -{ - if (pmc_is_gp(pmc)) - reprogram_gp_counter(pmc); - else - reprogram_fixed_counter(pmc); + pmc_reprogram_counter(pmc, PERF_TYPE_RAW, + (eventsel & AMD64_RAW_EVENT_MASK), + !(eventsel & ARCH_PERFMON_EVENTSEL_USR), + !(eventsel & ARCH_PERFMON_EVENTSEL_OS), + eventsel & ARCH_PERFMON_EVENTSEL_INT, + (eventsel & HSW_IN_TX), + (eventsel & HSW_IN_TX_CHECKPOINTED)); } EXPORT_SYMBOL_GPL(reprogram_counter); @@ -449,17 +422,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu) kvm_pmu_refresh(vcpu); } -static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc) -{ - struct kvm_pmu *pmu = pmc_to_pmu(pmc); - - if (pmc_is_fixed(pmc)) - return fixed_ctrl_field(pmu->fixed_ctr_ctrl, - pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3; - - return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE; -} - /* Release perf_events for vPMCs that have been unused for a full time slice. */ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu) { diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index a69d2aeb7526..98a01f6a9d5d 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -492,7 +492,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu) pmu->reserved_bits = 0xffffffff00200000ull; entry = kvm_find_cpuid_entry(vcpu, 0xa, 0); - if (!entry || !enable_pmu) + if (!entry || !enable_pmu || !boot_cpu_has(X86_FEATURE_ARCH_PERFMON)) return; eax.full = entry->eax; edx.full = entry->edx; From patchwork Mon Feb 21 11:51:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE227C433EF for ; Mon, 21 Feb 2022 11:53:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356951AbiBULxz (ORCPT ); Mon, 21 Feb 2022 06:53:55 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:51830 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356953AbiBULxO (ORCPT ); Mon, 21 Feb 2022 06:53:14 -0500 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C2551FA6C; Mon, 21 Feb 2022 03:52:37 -0800 (PST) Received: by mail-pj1-x102f.google.com with SMTP id om7so14945238pjb.5; Mon, 21 Feb 2022 03:52:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RHHO1lCTgmUCqGpOMS2hTgwPWrCn1+5dYE38ZQv5fAw=; b=FHfU/E2JV17g5GrMCniEYdI+q+m5SY8zzNNLsOdlQqAsNjwezFhFxJ2E8bvPHMdoma c7z+zDnUcRLUAnaRfjnRHXaHnvdgpVzr036+ahqfjdFeaUshRzfl1TBn51GIEkMatx+q iGuqqgsfvk0dOyMx76qO+gS/NFY7ck+8SFtEGh/u5r1mOqW11k57WoONJWwnjwcjyO/O OEDMSnGzoEGCbc6XYiTLh8KKItW6PANzxBDmK3eYc/HbrnQ+m54vXSTpSHDTngiIpArC uQwHB20tT30ajLi94283ML4MHb/MsaL/hippmaunhvL72GjfmJdqVhX43uCkEHIfFUyI 7fQA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu, Peter Zijlstra
Subject: [PATCH 08/11] perf: x86/core: Add interface to query perfmon_event_map[] directly
Date: Mon, 21 Feb 2022 19:51:58 +0800
Message-Id: <20220221115201.22208-9-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

Currently, we have [intel|knc|p4|p6]_perfmon_event_map on Intel platforms and amd_[f17h]_perfmon_event_map on AMD platforms. Early clumsy KVM code or other potential perf_event users may have hard-coded these perfmon maps (e.g., arch/x86/kvm/svm/pmu.c), so it would not make sense to program a common hardware event based on the generic "enum perf_hw_id" when the two tables do not match.

Just provide an interface for callers outside the perf subsystem to get the counter config based on the perfmon_event_map currently in use; it also helps to save a few bytes.
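A minimal usage sketch of the new helper (illustrative only; the wrapper function below is made up): callers outside the perf core can translate a kernel-generic hardware event id into the encoding of whatever perfmon_event_map is active, without keeping their own copy of the table.

static void show_generic_encodings(void)
{
	u64 cycles = perf_get_hw_event_config(PERF_COUNT_HW_CPU_CYCLES);
	u64 insns  = perf_get_hw_event_config(PERF_COUNT_HW_INSTRUCTIONS);

	/* A return value of 0 means the map in use does not describe the
	 * generic event (or the id is out of range); the inline stub added
	 * to the header below returns 0 as well, so callers only need this
	 * one check.
	 */
	pr_info("cycles config: 0x%llx, instructions config: 0x%llx\n",
		cycles, insns);
}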
Cc: Peter Zijlstra Signed-off-by: Like Xu --- arch/x86/events/core.c | 11 +++++++++++ arch/x86/include/asm/perf_event.h | 6 ++++++ 2 files changed, 17 insertions(+) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index e686c5e0537b..e760a1348c62 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -2996,3 +2996,14 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) cap->events_mask_len = x86_pmu.events_mask_len; } EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability); + +u64 perf_get_hw_event_config(int hw_event) +{ + int max = x86_pmu.max_events; + + if (hw_event < max) + return x86_pmu.event_map(array_index_nospec(hw_event, max)); + + return 0; +} +EXPORT_SYMBOL_GPL(perf_get_hw_event_config); diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 8fc1b5003713..822927045406 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -477,6 +477,7 @@ struct x86_pmu_lbr { }; extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap); +extern u64 perf_get_hw_event_config(int hw_event); extern void perf_check_microcode(void); extern void perf_clear_dirty_counters(void); extern int x86_perf_rdpmc_index(struct perf_event *event); @@ -486,6 +487,11 @@ static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) memset(cap, 0, sizeof(*cap)); } +static inline u64 perf_get_hw_event_config(int hw_event) +{ + return 0; +} + static inline void perf_events_lapic_init(void) { } static inline void perf_check_microcode(void) { } #endif From patchwork Mon Feb 21 11:51:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12753514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89E41C433FE for ; Mon, 21 Feb 2022 11:53:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1356968AbiBULx0 (ORCPT ); Mon, 21 Feb 2022 06:53:26 -0500 Received: from mxb-00190b01.gslb.pphosted.com ([23.128.96.19]:51668 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1356967AbiBULxX (ORCPT ); Mon, 21 Feb 2022 06:53:23 -0500 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 50D1620197; Mon, 21 Feb 2022 03:52:40 -0800 (PST) Received: by mail-pj1-x1031.google.com with SMTP id m1-20020a17090a668100b001bc023c6f34so4807230pjj.3; Mon, 21 Feb 2022 03:52:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mhtKDiOsLT1DFVkDtSR/HqZI5hm/yp+9fh0AF9ZzPSE=; b=HzvfHmXZ10Z3fPpJMxzWMvs/cq9Ow90nVAhjDzhlQFBpuENLsPNcjq7kGEw3uJPFNM 6RJ1hJCBw0EiCAwEfZh5ok8pAh84dRvDh95OKDfBUOWEGoHErxWWwBe5t0dEZ1dtw0km Z7gb4pslxFor4OCr6T7Ceam1vAYLsuiBY5F8GYdtyCSInlaoGt6KN1+0Zp+S9rZ06FDc xjBGaf8PPdj8txfrko6KQEOBBDlw3Mvy7maZjZAhmI7RtMqlVRRYEt9Aep/f8nbXBMIw cK5ISMBDvMF/+wz4SBxqYAj/KLwyqO2vRZKturvYm68W8cmvuulsmu1QiwRnRQgbynu0 YmVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 09/11] KVM: x86/pmu: Replace pmc_perf_hw_id() with perf_get_hw_event_config()
Date: Mon, 21 Feb 2022 19:51:59 +0800
Message-Id: <20220221115201.22208-10-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

With the help of perf_get_hw_event_config(), KVM can query the correct EVENTSEL_{EVENT, UMASK} pair of a kernel-generic hw event directly from the *_perfmon_event_map[] in use, keyed by the kernel's pre-defined perf_hw_id.

Also extend the bit range of the comparison field to AMD64_RAW_EVENT_MASK_NB, in case AMD one day starts using EventSelect[11:8] in its perfmon_event_map[].
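As a reading aid, the intended masked comparison can be sketched as below (the helper name eventsel_matches() is made up; the real hunk folds this into eventsel_match_perf_hw_id(), and the sketch applies the mask with a bitwise AND):

/* True when the guest's event selector encodes the same hardware event
 * as the active perfmon_event_map entry for @perf_hw_id; only the event
 * select bits (including AMD's extended EventSelect[11:8]) and the unit
 * mask, i.e. the bits covered by AMD64_RAW_EVENT_MASK_NB, take part in
 * the comparison.
 */
static bool eventsel_matches(struct kvm_pmc *pmc, unsigned int perf_hw_id)
{
	u64 host_config = perf_get_hw_event_config(perf_hw_id);

	return !((pmc->eventsel ^ host_config) & AMD64_RAW_EVENT_MASK_NB);
}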
Signed-off-by: Like Xu
Reported-by: kernel test robot
---
 arch/x86/kvm/pmu.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index edd51ec7711d..a6bfcbd3412d 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -468,13 +468,8 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
 					     unsigned int perf_hw_id)
 {
-	u64 old_eventsel = pmc->eventsel;
-	unsigned int config;
-
-	pmc->eventsel &= (ARCH_PERFMON_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK);
-	config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
-	pmc->eventsel = old_eventsel;
-	return config == perf_hw_id;
+	return !((pmc->eventsel ^ perf_get_hw_event_config(perf_hw_id)) &
+		 AMD64_RAW_EVENT_MASK_NB);
 }
 
 static inline bool cpl_is_matched(struct kvm_pmc *pmc)

From patchwork Mon Feb 21 11:52:00 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12753515
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 10/11] KVM: x86/pmu: Drop amd_event_mapping[] in the KVM context
Date: Mon, 21 Feb 2022 19:52:00 +0800
Message-Id: <20220221115201.22208-11-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

All gp and fixed counters are now reprogrammed using PERF_TYPE_RAW,
which means the table that maps a perf_hw_id to event select values is
no longer useful, at least for AMD.

For Intel, the check for pmu events that CPUID reports as unavailable
is still required, so rename pmc_perf_hw_id() to hw_event_is_unavail()
and return a bool, replacing the "PERF_COUNT_HW_MAX+1" semantics.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           |  6 +++---
 arch/x86/kvm/pmu.h           |  2 +-
 arch/x86/kvm/svm/pmu.c       | 34 +++-------------------------------
 arch/x86/kvm/vmx/pmu_intel.c | 11 ++++-------
 4 files changed, 11 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a6bfcbd3412d..40a6e778b3d9 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -112,9 +112,6 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 		.config = config,
 	};
 
-	if (type == PERF_TYPE_HARDWARE && config >= PERF_COUNT_HW_MAX)
-		return;
-
 	attr.sample_period = get_sample_period(pmc, pmc->counter);
 
 	if (in_tx)
@@ -188,6 +185,9 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	__u64 key;
 	int idx;
 
+	if (kvm_x86_ops.pmu_ops->hw_event_is_unavail(pmc))
+		return false;
+
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
 		goto out;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 201b99628423..a2b4037759a2 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -24,7 +24,7 @@ struct kvm_event_hw_type_mapping {
 };
 
 struct kvm_pmu_ops {
-	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
+	bool (*hw_event_is_unavail)(struct kvm_pmc *pmc);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index b264e8117be1..031962a5e50f 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -33,18 +33,6 @@ enum index {
 	INDEX_ERROR,
 };
 
-/* duplicated from amd_perfmon_event_map, K7 and above should work. */
-static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
-	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
-	[1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
-	[2] = { 0x7d, 0x07, PERF_COUNT_HW_CACHE_REFERENCES },
-	[3] = { 0x7e, 0x07, PERF_COUNT_HW_CACHE_MISSES },
-	[4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
-	[5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
-	[6] = { 0xd0, 0x00, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
-	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
-};
-
 static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
@@ -138,25 +126,9 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return &pmu->gp_counters[msr_to_index(msr)];
 }
 
-static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+static bool amd_hw_event_is_unavail(struct kvm_pmc *pmc)
 {
-	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
-	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
-	int i;
-
-	/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-	if (WARN_ON(pmc_is_fixed(pmc)))
-		return PERF_COUNT_HW_MAX;
-
-	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
-		if (amd_event_mapping[i].eventsel == event_select
-		    && amd_event_mapping[i].unit_mask == unit_mask)
-			break;
-
-	if (i == ARRAY_SIZE(amd_event_mapping))
-		return PERF_COUNT_HW_MAX;
-
-	return amd_event_mapping[i].event_type;
+	return false;
 }
 
 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
@@ -322,7 +294,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops = {
-	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
+	.hw_event_is_unavail = amd_hw_event_is_unavail,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 98a01f6a9d5d..58ea46bd92ac 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -84,7 +84,7 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 	}
 }
 
-static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
+static bool intel_hw_event_is_unavail(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
@@ -98,15 +98,12 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 
 		/* disable event that reported as not present by cpuid */
 		if ((i < 7) && !(pmu->available_event_types & (1 << i)))
-			return PERF_COUNT_HW_MAX + 1;
+			return true;
 
 		break;
 	}
 
-	if (i == ARRAY_SIZE(intel_arch_events))
-		return PERF_COUNT_HW_MAX;
-
-	return intel_arch_events[i].event_type;
+	return false;
 }
 
 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */
@@ -720,7 +717,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops = {
-	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
+	.hw_event_is_unavail = intel_hw_event_is_unavail,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,

From patchwork Mon Feb 21 11:52:01 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12753516
From: Like Xu
To: Paolo Bonzini, Jim Mattson
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH 11/11] KVM: x86/pmu: Protect kvm->arch.pmu_event_filter with SRCU
Date: Mon, 21 Feb 2022 19:52:01 +0800
Message-Id: <20220221115201.22208-12-likexu@tencent.com>
In-Reply-To: <20220221115201.22208-1-likexu@tencent.com>
References: <20220221115201.22208-1-likexu@tencent.com>

From: Like Xu

Similar to "kvm->arch.msr_filter", KVM should guarantee that vCPUs will
see either the previous filter or the new filter when user space calls
the KVM_SET_PMU_EVENT_FILTER ioctl while vCPUs are running, so that
guest pmu events with identical settings in both the old and the new
filter behave deterministically.

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Like Xu
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/pmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 40a6e778b3d9..84f0fcbba820 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -183,11 +183,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	struct kvm *kvm = pmc->vcpu->kvm;
 	bool allow_event = true;
 	__u64 key;
-	int idx;
+	int idx, srcu_idx;
 
 	if (kvm_x86_ops.pmu_ops->hw_event_is_unavail(pmc))
 		return false;
 
+	srcu_idx = srcu_read_lock(&kvm->srcu);
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
 		goto out;
@@ -210,6 +211,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	}
 
 out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return allow_event;
 }
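To illustrate the reader/writer pairing this patch relies on, here is a
minimal sketch (an editor's example, not KVM code: the my_filter struct,
the my_srcu domain and the helper names are invented for the example;
only the SRCU primitives themselves are the kernel's API):

	#include <linux/srcu.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_filter { int nevents; };

	DEFINE_STATIC_SRCU(my_srcu);
	static struct my_filter __rcu *cur_filter;

	/* Reader: the whole lookup sits inside one SRCU read-side critical
	 * section, so it observes either the old or the new filter, never a
	 * half-installed or already-freed one. */
	static bool filter_allows_event(void)
	{
		struct my_filter *f;
		bool allow = true;
		int idx;

		idx = srcu_read_lock(&my_srcu);
		f = srcu_dereference(cur_filter, &my_srcu);
		if (f)
			allow = f->nevents != 0;	/* placeholder policy */
		srcu_read_unlock(&my_srcu, idx);

		return allow;
	}

	/* Writer: publish the new filter, wait for readers that may still
	 * hold the old pointer, then free the old filter. */
	static void install_filter(struct my_filter *new)
	{
		struct my_filter *old;

		old = rcu_dereference_protected(cur_filter, 1);
		rcu_assign_pointer(cur_filter, new);
		synchronize_srcu(&my_srcu);
		kfree(old);
	}

In the hunk above only the read side needed changing; the ioctl path that
installs a new filter is assumed to already publish it and synchronize on
kvm->srcu, which is what makes the read-side lock/unlock sufficient here.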