From patchwork Wed Mar 2 11:13:23 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765814
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li,
    Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 01/12] KVM: x86/pmu: Update comments for AMD gp counters
Date: Wed, 2 Mar 2022 19:13:23 +0800
Message-Id: <20220302111334.12689-2-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>
References: <20220302111334.12689-1-likexu@tencent.com>

The obsolete comment could more accurately state that AMD platforms have
two base MSR addresses and two different maximum numbers of gp counters,
depending on the X86_FEATURE_PERFCTR_CORE feature.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b1a02993782b..3f09af678b2c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -34,7 +34,9 @@
  * However AMD doesn't support fixed-counters;
  * - There are three types of index to access perf counters (PMC):
  *   1. MSR (named msr): For example Intel has MSR_IA32_PERFCTRn and AMD
- *      has MSR_K7_PERFCTRn.
+ *      has MSR_K7_PERFCTRn and, for families 15H and later,
+ *      MSR_F15H_PERF_CTRn, where MSR_F15H_PERF_CTR[0-3] are
+ *      aliased to MSR_K7_PERFCTRn.
  *   2. MSR Index (named idx): This normally is used by RDPMC instruction.
  *      For instance AMD RDPMC instruction uses 0000_0003h in ECX to access
  *      C001_0007h (MSR_K7_PERCTR3). Intel has a similar mechanism, except
@@ -46,7 +48,8 @@
  * between pmc and perf counters is as the following:
  * * Intel: [0 .. INTEL_PMC_MAX_GENERIC-1] <=> gp counters
  *          [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2] <=> fixed
- * * AMD:   [0 .. AMD64_NUM_COUNTERS-1] <=> gp counters
+ * * AMD:   [0 .. AMD64_NUM_COUNTERS-1] and, for families 15H
+ *          and later, [0 .. AMD64_NUM_COUNTERS_CORE-1] <=> gp counters
  */
 
 static void kvm_pmi_trigger_fn(struct irq_work *irq_work)

From patchwork Wed Mar 2 11:13:24 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765815
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li,
    Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 02/12] KVM: x86/pmu: Extract check_pmu_event_filter() from the same semantics
Date: Wed, 2 Mar 2022 19:13:24 +0800
Message-Id: <20220302111334.12689-3-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>
References: <20220302111334.12689-1-likexu@tencent.com>

Checking the kvm->arch.pmu_event_filter policy in both the gp and fixed
code paths was somewhat redundant, so the common parts can be extracted,
which reduces the code footprint and improves readability.
Signed-off-by: Like Xu
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/pmu.c | 61 +++++++++++++++++++++++++++-------------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3f09af678b2c..fda963161951 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -182,13 +182,43 @@ static int cmp_u64(const void *a, const void *b)
 	return *(__u64 *)a - *(__u64 *)b;
 }
 
+static bool check_pmu_event_filter(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu_event_filter *filter;
+	struct kvm *kvm = pmc->vcpu->kvm;
+	bool allow_event = true;
+	__u64 key;
+	int idx;
+
+	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
+	if (!filter)
+		goto out;
+
+	if (pmc_is_gp(pmc)) {
+		key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;
+		if (bsearch(&key, filter->events, filter->nevents,
+			    sizeof(__u64), cmp_u64))
+			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
+		else
+			allow_event = filter->action == KVM_PMU_EVENT_DENY;
+	} else {
+		idx = pmc->idx - INTEL_PMC_IDX_FIXED;
+		if (filter->action == KVM_PMU_EVENT_DENY &&
+		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
+			allow_event = false;
+		if (filter->action == KVM_PMU_EVENT_ALLOW &&
+		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
+			allow_event = false;
+	}
+
+out:
+	return allow_event;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	u64 config;
 	u32 type = PERF_TYPE_RAW;
-	struct kvm *kvm = pmc->vcpu->kvm;
-	struct kvm_pmu_event_filter *filter;
-	bool allow_event = true;
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
@@ -200,17 +230,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc))
 		return;
 
-	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
-	if (filter) {
-		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
-
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
-	}
-	if (!allow_event)
+	if (!check_pmu_event_filter(pmc))
 		return;
 
 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
@@ -245,23 +265,14 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 {
 	unsigned en_field = ctrl & 0x3;
 	bool pmi = ctrl & 0x8;
-	struct kvm_pmu_event_filter *filter;
-	struct kvm *kvm = pmc->vcpu->kvm;
 
 	pmc_pause_counter(pmc);
 
 	if (!en_field || !pmc_is_enabled(pmc))
 		return;
 
-	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
-	if (filter) {
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			return;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			return;
-	}
+	if (!check_pmu_event_filter(pmc))
+		return;
 
 	if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc))
 		return;

From patchwork Wed Mar 2 11:13:25 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765816
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li,
    Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 03/12] KVM: x86/pmu: Pass only "struct kvm_pmc *pmc" to reprogram_counter()
Date: Wed, 2 Mar 2022 19:13:25 +0800
Message-Id: <20220302111334.12689-4-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>
References: <20220302111334.12689-1-likexu@tencent.com>

Passing the reference "struct kvm_pmc *pmc" when creating pmc->perf_event
is sufficient. This change simplifies the calling convention by replacing
reprogram_{gp, fixed}_counter() with reprogram_counter() seamlessly.

No functional change intended.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 17 +++++------------
 arch/x86/kvm/pmu.h           |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c | 32 ++++++++++++++++++--------------
 3 files changed, 24 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index fda963161951..0ce33a2798cd 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -288,18 +288,13 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 }
 EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
-void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
+void reprogram_counter(struct kvm_pmc *pmc)
 {
-	struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx);
-
-	if (!pmc)
-		return;
-
 	if (pmc_is_gp(pmc))
 		reprogram_gp_counter(pmc, pmc->eventsel);
 	else {
-		int idx = pmc_idx - INTEL_PMC_IDX_FIXED;
-		u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx);
+		int idx = pmc->idx - INTEL_PMC_IDX_FIXED;
+		u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx);
 
 		reprogram_fixed_counter(pmc, ctrl, idx);
 	}
@@ -318,8 +313,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 			clear_bit(bit, pmu->reprogram_pmi);
 			continue;
 		}
-
-		reprogram_counter(pmu, bit);
+		reprogram_counter(pmc);
 	}
 
 	/*
@@ -505,13 +499,12 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u64 prev_count;
 
 	prev_count = pmc->counter;
 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
 
-	reprogram_counter(pmu, pmc->idx);
+	reprogram_counter(pmc);
 	if (pmc->counter < prev_count)
 		__kvm_perf_overflow(pmc, false);
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 7a7b8d5b775e..b529c54dc309 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -142,7 +142,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
-void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
+void reprogram_counter(struct kvm_pmc *pmc);
 
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 4e5b1eeeb77c..20f2b5f5102b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -56,16 +56,32 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 	pmu->fixed_ctr_ctrl = data;
 }
 
+static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+{
+	if (pmc_idx < INTEL_PMC_IDX_FIXED)
+		return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
+				  MSR_P6_EVNTSEL0);
+	else {
+		u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
+
+		return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
+	}
+}
+
 /* function is called when global control register has been updated. */
 static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 {
 	int bit;
 	u64 diff = pmu->global_ctrl ^ data;
+	struct kvm_pmc *pmc;
 
 	pmu->global_ctrl = data;
 
-	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX)
-		reprogram_counter(pmu, bit);
+	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
+		pmc = intel_pmc_idx_to_pmc(pmu, bit);
+		if (pmc)
+			reprogram_counter(pmc);
+	}
 }
 
 static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
@@ -101,18 +117,6 @@ static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
 	return test_bit(pmc->idx, (unsigned long *)&pmu->global_ctrl);
 }
 
-static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-	if (pmc_idx < INTEL_PMC_IDX_FIXED)
-		return get_gp_pmc(pmu, MSR_P6_EVNTSEL0 + pmc_idx,
-				  MSR_P6_EVNTSEL0);
-	else {
-		u32 idx = pmc_idx - INTEL_PMC_IDX_FIXED;
-
-		return get_fixed_pmc(pmu, idx + MSR_CORE_PERF_FIXED_CTR0);
-	}
-}
-
 static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

From patchwork Wed Mar 2 11:13:26 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765817
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li,
    Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 04/12] KVM: x86/pmu: Drop "u64 eventsel" for reprogram_gp_counter()
Date: Wed, 2 Mar 2022 19:13:26 +0800
Message-Id: <20220302111334.12689-5-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>
References: <20220302111334.12689-1-likexu@tencent.com>

Because reprogram_gp_counter() is bound to assign the requested eventsel
to pmc->eventsel, this assignment step can be moved forward, thus
simplifying the parameter list to "struct kvm_pmc *pmc" only.

No functional change intended.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 7 +++----
 arch/x86/kvm/pmu.h           | 2 +-
 arch/x86/kvm/svm/pmu.c       | 3 ++-
 arch/x86/kvm/vmx/pmu_intel.c | 3 ++-
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0ce33a2798cd..7b8a5f973a63 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -215,16 +215,15 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	return allow_event;
 }
 
-void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+void reprogram_gp_counter(struct kvm_pmc *pmc)
 {
 	u64 config;
 	u32 type = PERF_TYPE_RAW;
+	u64 eventsel = pmc->eventsel;
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
 
-	pmc->eventsel = eventsel;
-
 	pmc_pause_counter(pmc);
 
 	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc))
@@ -291,7 +290,7 @@ void reprogram_counter(struct kvm_pmc *pmc)
 {
 	if (pmc_is_gp(pmc))
-		reprogram_gp_counter(pmc, pmc->eventsel);
+		reprogram_gp_counter(pmc);
 	else {
 		int idx = pmc->idx - INTEL_PMC_IDX_FIXED;
 		u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index b529c54dc309..4db50c290c62 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -140,7 +140,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 	return sample_period;
 }
 
-void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
+void reprogram_gp_counter(struct kvm_pmc *pmc);
 void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
 void reprogram_counter(struct kvm_pmc *pmc);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index d4de52409335..7ff9ccaca0a4 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -265,7 +265,8 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data == pmc->eventsel)
 			return 0;
 		if (!(data & pmu->reserved_bits)) {
-			reprogram_gp_counter(pmc, data);
+			pmc->eventsel = data;
+			reprogram_gp_counter(pmc);
 			return 0;
 		}
 	}
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 20f2b5f5102b..2eefde7e4b1a 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -448,7 +448,8 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data == pmc->eventsel)
 			return 0;
 		if (!(data & pmu->reserved_bits)) {
-			reprogram_gp_counter(pmc, data);
+			pmc->eventsel = data;
+			reprogram_gp_counter(pmc);
 			return 0;
 		}
 	} else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false))

From patchwork Wed Mar 2 11:13:27 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765822
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li,
    Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 05/12] KVM: x86/pmu: Drop "u8 ctrl, int idx" for reprogram_fixed_counter()
Date: Wed, 2 Mar 2022 19:13:27 +0800
Message-Id: <20220302111334.12689-6-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>
References: <20220302111334.12689-1-likexu@tencent.com>

Since whenever reprogram_fixed_counter() is called, it is bound to assign
the requested fixed_ctr_ctrl to pmu->fixed_ctr_ctrl, this assignment step
can be moved forward (with the stale value saved early to compute the
diff), thus simplifying the passing of parameters.

No functional change intended.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           | 13 ++++++-------
 arch/x86/kvm/pmu.h           |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++--------
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 7b8a5f973a63..282e6e859c46 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -260,8 +260,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc)
 }
 EXPORT_SYMBOL_GPL(reprogram_gp_counter);
 
-void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
+void reprogram_fixed_counter(struct kvm_pmc *pmc)
 {
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	int idx = pmc->idx - INTEL_PMC_IDX_FIXED;
+	u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx);
 	unsigned en_field = ctrl & 0x3;
 	bool pmi = ctrl & 0x8;
 
@@ -291,12 +294,8 @@ void reprogram_counter(struct kvm_pmc *pmc)
 {
 	if (pmc_is_gp(pmc))
 		reprogram_gp_counter(pmc);
-	else {
-		int idx = pmc->idx - INTEL_PMC_IDX_FIXED;
-		u8 ctrl = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl, idx);
-
-		reprogram_fixed_counter(pmc, ctrl, idx);
-	}
+	else
+		reprogram_fixed_counter(pmc);
 }
 EXPORT_SYMBOL_GPL(reprogram_counter);
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 4db50c290c62..70a982c3cdad 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -141,7 +141,7 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 }
 
 void reprogram_gp_counter(struct kvm_pmc *pmc);
-void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
+void reprogram_fixed_counter(struct kvm_pmc *pmc);
 void reprogram_counter(struct kvm_pmc *pmc);
 
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 2eefde7e4b1a..3ddbfdd16cd0 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -37,23 +37,23 @@ static int fixed_pmc_events[] = {1, 0, 7};
 
 static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data)
 {
+	struct kvm_pmc *pmc;
+	u8 old_fixed_ctr_ctrl = pmu->fixed_ctr_ctrl;
 	int i;
 
+	pmu->fixed_ctr_ctrl = data;
 	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
 		u8 new_ctrl = fixed_ctrl_field(data, i);
-		u8 old_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, i);
-		struct kvm_pmc *pmc;
-
-		pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
+		u8 old_ctrl = fixed_ctrl_field(old_fixed_ctr_ctrl, i);
 
 		if (old_ctrl == new_ctrl)
 			continue;
 
-		__set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
-		reprogram_fixed_counter(pmc, new_ctrl, i);
-	}
+		pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i);
 
-	pmu->fixed_ctr_ctrl = data;
+		__set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
+		reprogram_fixed_counter(pmc);
+	}
 }
 
 static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)

From patchwork Wed Mar 2 11:13:28 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765819
From: Like Xu To: Paolo Bonzini Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu Subject: [PATCH v2 06/12] KVM: x86/pmu: Use
only the uniformly exported interface reprogram_counter() Date: Wed, 2 Mar 2022 19:13:28 +0800 Message-Id: <20220302111334.12689-7-likexu@tencent.com> From: Like Xu Since reprogram_counter() and reprogram_{gp, fixed}_counter() currently take the same parameter "struct kvm_pmc *pmc", the callers can simplify the call sites by using the uniformly exported interface, which makes reprogram_{gp, fixed}_counter() static and eliminates their EXPORT_SYMBOL_GPL. Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 6 ++---- arch/x86/kvm/pmu.h | 2 -- arch/x86/kvm/svm/pmu.c | 2 +- arch/x86/kvm/vmx/pmu_intel.c | 4 ++-- 4 files changed, 5 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 282e6e859c46..5299488b002c 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -215,7 +215,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -void reprogram_gp_counter(struct kvm_pmc *pmc) +static void reprogram_gp_counter(struct kvm_pmc *pmc) { u64 config; u32 type = PERF_TYPE_RAW; @@ -258,9 +258,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc) (eventsel & HSW_IN_TX), (eventsel & HSW_IN_TX_CHECKPOINTED)); } -EXPORT_SYMBOL_GPL(reprogram_gp_counter); -void reprogram_fixed_counter(struct kvm_pmc *pmc) +static void reprogram_fixed_counter(struct kvm_pmc *pmc) { struct kvm_pmu *pmu = pmc_to_pmu(pmc); int idx = pmc->idx - INTEL_PMC_IDX_FIXED; @@ -288,7 +287,6 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc) !(en_field & 0x1), /* exclude kernel */ pmi, false, false); } -EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmc *pmc) { diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 70a982c3cdad..201b99628423 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ 
-140,8 +140,6 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value) return sample_period; } -void reprogram_gp_counter(struct kvm_pmc *pmc); -void reprogram_fixed_counter(struct kvm_pmc *pmc); void reprogram_counter(struct kvm_pmc *pmc); void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index 7ff9ccaca0a4..a18bf636fbce 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -266,7 +266,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) return 0; if (!(data & pmu->reserved_bits)) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); return 0; } } diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 3ddbfdd16cd0..19b78a9d9d47 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -52,7 +52,7 @@ static void reprogram_fixed_counters(struct kvm_pmu *pmu, u64 data) pmc = get_fixed_pmc(pmu, MSR_CORE_PERF_FIXED_CTR0 + i); __set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use); - reprogram_fixed_counter(pmc); + reprogram_counter(pmc); } } @@ -449,7 +449,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) return 0; if (!(data & pmu->reserved_bits)) { pmc->eventsel = data; - reprogram_gp_counter(pmc); + reprogram_counter(pmc); return 0; } } else if (intel_pmu_handle_lbr_msrs_access(vcpu, msr_info, false)) From patchwork Wed Mar 2 11:13:29 2022 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12765818
From: Like Xu To: Paolo Bonzini Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu, Peter Zijlstra Subject: [PATCH v2 07/12] KVM: x86/pmu: Use PERF_TYPE_RAW to merge reprogram_{gp, fixed}_counter() Date: Wed, 2 Mar 2022 19:13:29 +0800 Message-Id: <20220302111334.12689-8-likexu@tencent.com> From: Like Xu The code sketch for reprogram_{gp, fixed}_counter() is similar: the fixed counter uses the PERF_TYPE_HARDWARE type, while the gp counter can use either PERF_TYPE_HARDWARE or PERF_TYPE_RAW, depending on the pmc->eventsel value. After commit 761875634a5e ("KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs"), the pmc->eventsel of the fixed counter will also have been set up with the same semantic value and will not change during guest runtime. But essentially, "the HARDWARE is just a convenience wrapper over RAW IIRC", as quoted from Peterz. So it should be safe to use only the PERF_TYPE_RAW type to program both gp and fixed counters in reprogram_counter(). 
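A standalone userspace sketch of the control-bit translation this merge relies on (bit positions follow the architectural EVENTSEL layout — USR at bit 16, OS at bit 17, INT at bit 20 — and the per-counter fixed control nibble — bit 0 kernel, bit 1 user, bit 3 PMI; the helper name is illustrative, not a kernel symbol):

```c
#include <assert.h>
#include <stdint.h>

/* Subset of the x86 EVENTSEL control bits, architectural positions. */
#define EVENTSEL_USR (1ULL << 16)
#define EVENTSEL_OS  (1ULL << 17)
#define EVENTSEL_INT (1ULL << 20)

/* Translate one 4-bit fixed-counter control field into the equivalent
 * gp-counter EVENTSEL bits, mirroring what reprogram_counter() does for
 * fixed PMCs: bit 0 enables kernel-mode counting, bit 1 user-mode
 * counting, and bit 3 the counter-overflow PMI. */
static uint64_t fixed_ctrl_to_eventsel(uint8_t ctrl)
{
	uint64_t eventsel = 0;

	if (ctrl & 0x1)
		eventsel |= EVENTSEL_OS;
	if (ctrl & 0x2)
		eventsel |= EVENTSEL_USR;
	if (ctrl & 0x8)
		eventsel |= EVENTSEL_INT;

	return eventsel;
}
```

Once the fixed control nibble is expressed this way, both counter flavours can feed the same PERF_TYPE_RAW programming path.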
To make the gp and fixed counters more semantically symmetrical, the selection of EVENTSEL_{USER, OS, INT} bits is temporarily translated via fixed_ctr_ctrl before the pmc_reprogram_counter() call. Cc: Peter Zijlstra Suggested-by: Jim Mattson Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 128 +++++++++++++---------------------- arch/x86/kvm/vmx/pmu_intel.c | 2 +- 2 files changed, 47 insertions(+), 83 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 5299488b002c..00e1660c10ca 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -215,85 +215,60 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) return allow_event; } -static void reprogram_gp_counter(struct kvm_pmc *pmc) -{ - u64 config; - u32 type = PERF_TYPE_RAW; - u64 eventsel = pmc->eventsel; - - if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) - printk_once("kvm pmu: pin control bit is ignored\n"); - - pmc_pause_counter(pmc); - - if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_is_enabled(pmc)) - return; - - if (!check_pmu_event_filter(pmc)) - return; - - if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE | - ARCH_PERFMON_EVENTSEL_INV | - ARCH_PERFMON_EVENTSEL_CMASK | - HSW_IN_TX | - HSW_IN_TX_CHECKPOINTED))) { - config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc); - if (config != PERF_COUNT_HW_MAX) - type = PERF_TYPE_HARDWARE; - } - - if (type == PERF_TYPE_RAW) - config = eventsel & AMD64_RAW_EVENT_MASK; - - if (pmc->current_config == eventsel && pmc_resume_counter(pmc)) - return; - - pmc_release_perf_event(pmc); - - pmc->current_config = eventsel; - pmc_reprogram_counter(pmc, type, config, - !(eventsel & ARCH_PERFMON_EVENTSEL_USR), - !(eventsel & ARCH_PERFMON_EVENTSEL_OS), - eventsel & ARCH_PERFMON_EVENTSEL_INT, - (eventsel & HSW_IN_TX), - (eventsel & HSW_IN_TX_CHECKPOINTED)); -} - -static void reprogram_fixed_counter(struct kvm_pmc *pmc) +static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc) { struct kvm_pmu *pmu = pmc_to_pmu(pmc); - int idx = pmc->idx - 
INTEL_PMC_IDX_FIXED; - u8 ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, idx); - unsigned en_field = ctrl & 0x3; - bool pmi = ctrl & 0x8; - pmc_pause_counter(pmc); + if (pmc_is_fixed(pmc)) + return fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3; - if (!en_field || !pmc_is_enabled(pmc)) - return; - - if (!check_pmu_event_filter(pmc)) - return; - - if (pmc->current_config == (u64)ctrl && pmc_resume_counter(pmc)) - return; - - pmc_release_perf_event(pmc); - - pmc->current_config = (u64)ctrl; - pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE, - kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc), - !(en_field & 0x2), /* exclude user */ - !(en_field & 0x1), /* exclude kernel */ - pmi, false, false); + return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE; } void reprogram_counter(struct kvm_pmc *pmc) { - if (pmc_is_gp(pmc)) - reprogram_gp_counter(pmc); - else - reprogram_fixed_counter(pmc); + struct kvm_pmu *pmu = pmc_to_pmu(pmc); + u64 eventsel = pmc->eventsel; + u64 new_config = eventsel; + u8 fixed_ctr_ctrl; + + pmc_pause_counter(pmc); + + if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc)) + return; + + if (!check_pmu_event_filter(pmc)) + return; + + if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL) + printk_once("kvm pmu: pin control bit is ignored\n"); + + if (pmc_is_fixed(pmc)) { + fixed_ctr_ctrl = fixed_ctrl_field(pmu->fixed_ctr_ctrl, + pmc->idx - INTEL_PMC_IDX_FIXED); + if (fixed_ctr_ctrl & 0x1) + eventsel |= ARCH_PERFMON_EVENTSEL_OS; + if (fixed_ctr_ctrl & 0x2) + eventsel |= ARCH_PERFMON_EVENTSEL_USR; + if (fixed_ctr_ctrl & 0x8) + eventsel |= ARCH_PERFMON_EVENTSEL_INT; + new_config = (u64)fixed_ctr_ctrl; + } + + if (pmc->current_config == new_config && pmc_resume_counter(pmc)) + return; + + pmc_release_perf_event(pmc); + + pmc->current_config = new_config; + pmc_reprogram_counter(pmc, PERF_TYPE_RAW, + (eventsel & AMD64_RAW_EVENT_MASK), + !(eventsel & ARCH_PERFMON_EVENTSEL_USR), + !(eventsel & ARCH_PERFMON_EVENTSEL_OS), + eventsel & 
ARCH_PERFMON_EVENTSEL_INT, + (eventsel & HSW_IN_TX), + (eventsel & HSW_IN_TX_CHECKPOINTED)); } EXPORT_SYMBOL_GPL(reprogram_counter); @@ -451,17 +426,6 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu) kvm_pmu_refresh(vcpu); } -static inline bool pmc_speculative_in_use(struct kvm_pmc *pmc) -{ - struct kvm_pmu *pmu = pmc_to_pmu(pmc); - - if (pmc_is_fixed(pmc)) - return fixed_ctrl_field(pmu->fixed_ctr_ctrl, - pmc->idx - INTEL_PMC_IDX_FIXED) & 0x3; - - return pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE; -} - /* Release perf_events for vPMCs that have been unused for a full time slice. */ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu) { diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c index 19b78a9d9d47..d823fbe4e155 100644 --- a/arch/x86/kvm/vmx/pmu_intel.c +++ b/arch/x86/kvm/vmx/pmu_intel.c @@ -492,7 +492,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu) pmu->reserved_bits = 0xffffffff00200000ull; entry = kvm_find_cpuid_entry(vcpu, 0xa, 0); - if (!entry || !vcpu->kvm->arch.enable_pmu) + if (!entry || !vcpu->kvm->arch.enable_pmu || !boot_cpu_has(X86_FEATURE_ARCH_PERFMON)) return; eax.full = entry->eax; edx.full = entry->edx; From patchwork Wed Mar 2 11:13:30 2022 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12765821
From: Like Xu To: Paolo Bonzini Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu, Peter Zijlstra Subject: [PATCH v2 08/12] perf: x86/core: Add interface to query perfmon_event_map[] directly Date: Wed, 2 Mar 2022 19:13:30 +0800 Message-Id: <20220302111334.12689-9-likexu@tencent.com> From: Like Xu Currently, we have [intel|knc|p4|p6]_perfmon_event_map on the Intel platforms and amd_[f17h]_perfmon_event_map on the AMD platforms. Early clumsy KVM code or other potential perf_event users may have hard-coded these perfmon_maps (e.g., arch/x86/kvm/svm/pmu.c), so it does not make sense to program a common hardware event based on the generic "enum perf_hw_id" when the two tables do not match. Let's provide an interface for callers outside the perf subsystem to get the counter config based on the perfmon_event_map currently in use, which also helps to save a few bytes. 
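A rough userspace sketch of the interface's shape (the table below is a hypothetical stand-in populated with a few event-select/umask configs from the AMD mapping; the kernel version additionally hardens the table index with array_index_nospec() before dereferencing it):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the perfmon_event_map currently in use:
 * indexed by a generic perf_hw_id, yielding that platform's config
 * (event select in bits 7:0, unit mask in bits 15:8). */
static const uint64_t hw_event_map[] = {
	0x0076, /* cpu cycles       */
	0x00c0, /* instructions     */
	0x077d, /* cache references */
	0x077e, /* cache misses     */
};

/* Mirrors the shape of perf_get_hw_event_config(): bounds-check the
 * generic id against the active table, return 0 when out of range. */
static uint64_t get_hw_event_config(int hw_event)
{
	int max = (int)(sizeof(hw_event_map) / sizeof(hw_event_map[0]));

	if (hw_event >= 0 && hw_event < max)
		return hw_event_map[hw_event];

	return 0;
}
```

Returning 0 for an unknown id keeps the caller's logic simple: 0 is never a meaningful event-select/umask pair, so "no mapping" falls out naturally of any later comparison.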
Cc: Peter Zijlstra Signed-off-by: Like Xu --- arch/x86/events/core.c | 11 +++++++++++ arch/x86/include/asm/perf_event.h | 6 ++++++ 2 files changed, 17 insertions(+) diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index e686c5e0537b..e760a1348c62 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -2996,3 +2996,14 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) cap->events_mask_len = x86_pmu.events_mask_len; } EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability); + +u64 perf_get_hw_event_config(int hw_event) +{ + int max = x86_pmu.max_events; + + if (hw_event < max) + return x86_pmu.event_map(array_index_nospec(hw_event, max)); + + return 0; +} +EXPORT_SYMBOL_GPL(perf_get_hw_event_config); diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 8fc1b5003713..822927045406 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -477,6 +477,7 @@ struct x86_pmu_lbr { }; extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap); +extern u64 perf_get_hw_event_config(int hw_event); extern void perf_check_microcode(void); extern void perf_clear_dirty_counters(void); extern int x86_perf_rdpmc_index(struct perf_event *event); @@ -486,6 +487,11 @@ static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) memset(cap, 0, sizeof(*cap)); } +static inline u64 perf_get_hw_event_config(int hw_event) +{ + return 0; +} + static inline void perf_events_lapic_init(void) { } static inline void perf_check_microcode(void) { } #endif From patchwork Wed Mar 2 11:13:31 2022 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12765820
From: Like Xu To: Paolo Bonzini Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu Subject: [PATCH v2 09/12] KVM: x86/pmu: Replace pmc_perf_hw_id() with perf_get_hw_event_config() Date: Wed, 2 Mar 2022 19:13:31 +0800 Message-Id: <20220302111334.12689-10-likexu@tencent.com> From: Like Xu With the help of perf_get_hw_event_config(), KVM can query the correct EVENTSEL_{EVENT, UMASK} pair of a kernel-generic hw event directly from the *_perfmon_event_map[] currently in use, via the kernel's pre-defined perf_hw_id. Also extend the bit range of the comparison field to AMD64_RAW_EVENT_MASK_NB, in case AMD defines EventSelect[11:8] in perfmon_event_map[] one day. 
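A hedged userspace sketch of the widened comparison (the mask constants below approximate the kernel's AMD64_RAW_EVENT_MASK_NB layout: EventSelect[7:0] at MSR bits 7:0, UnitMask at bits 15:8, and the AMD EventSelect[11:8] extension at MSR bits 35:32; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Approximate layout of AMD64_RAW_EVENT_MASK_NB: both halves of the
 * event select plus the unit mask, ignoring USR/OS/INT/ENABLE bits. */
#define EVENTSEL_EVENT_LO 0x00000000000000ffULL /* EventSelect[7:0]  */
#define EVENTSEL_UMASK    0x000000000000ff00ULL /* UnitMask[15:8]    */
#define EVENTSEL_EVENT_HI 0x0000000f00000000ULL /* EventSelect[11:8] */
#define RAW_EVENT_MASK_NB \
	(EVENTSEL_EVENT_LO | EVENTSEL_UMASK | EVENTSEL_EVENT_HI)

/* Shape of eventsel_match_perf_hw_id(): a counter's eventsel encodes
 * the same hardware event as a mapped config iff they agree on every
 * bit under the mask; XOR-then-mask ignores control bits such as
 * ENABLE, so they cannot cause a spurious mismatch. */
static int eventsel_matches(uint64_t eventsel, uint64_t config)
{
	return !((eventsel ^ config) & RAW_EVENT_MASK_NB);
}
```

Including the high event-select nibble in the mask is what makes the comparison robust against a future map entry that populates EventSelect[11:8].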
Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 00e1660c10ca..9fb7d29e5fdd 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -472,13 +472,8 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc) static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc, unsigned int perf_hw_id) { - u64 old_eventsel = pmc->eventsel; - unsigned int config; - - pmc->eventsel &= (ARCH_PERFMON_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK); - config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc); - pmc->eventsel = old_eventsel; - return config == perf_hw_id; + return !((pmc->eventsel ^ perf_get_hw_event_config(perf_hw_id)) & + AMD64_RAW_EVENT_MASK_NB); } static inline bool cpl_is_matched(struct kvm_pmc *pmc) From patchwork Wed Mar 2 11:13:32 2022 X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12765823
From: Like Xu To: Paolo Bonzini Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org,
Like Xu Subject: [PATCH v2 10/12] KVM: x86/pmu: Drop amd_event_mapping[] in the KVM context Date: Wed, 2 Mar 2022 19:13:32 +0800 Message-Id: <20220302111334.12689-11-likexu@tencent.com> From: Like Xu All gp or fixed counters have been reprogrammed using PERF_TYPE_RAW, which means that the table that maps perf_hw_id to event select values is no longer useful, at least for AMD. For Intel, the logic to check whether a pmu event reported by Intel cpuid is unavailable is still required, so pmc_perf_hw_id() can be renamed to hw_event_is_unavail() and return a bool to replace the semantics of "PERF_COUNT_HW_MAX+1". Signed-off-by: Like Xu --- arch/x86/kvm/pmu.c | 6 +++--- arch/x86/kvm/pmu.h | 2 +- arch/x86/kvm/svm/pmu.c | 34 +++------------------------------- arch/x86/kvm/vmx/pmu_intel.c | 11 ++++------- 4 files changed, 11 insertions(+), 42 deletions(-) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 9fb7d29e5fdd..60f44252540a 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -114,9 +114,6 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, .config = config, }; - if (type == PERF_TYPE_HARDWARE && config >= PERF_COUNT_HW_MAX) - return; - attr.sample_period = get_sample_period(pmc, pmc->counter); if (in_tx) @@ -190,6 +187,9 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc) __u64 key; int idx; + if (kvm_x86_ops.pmu_ops->hw_event_is_unavail(pmc)) + return false; + filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu); if (!filter) goto out; diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 201b99628423..a2b4037759a2 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -24,7 +24,7 @@ struct kvm_event_hw_type_mapping { }; struct kvm_pmu_ops { - unsigned int 
(*pmc_perf_hw_id)(struct kvm_pmc *pmc); + bool (*hw_event_is_unavail)(struct kvm_pmc *pmc); bool (*pmc_is_enabled)(struct kvm_pmc *pmc); struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx); struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu, diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c index a18bf636fbce..41c9b9e2aec2 100644 --- a/arch/x86/kvm/svm/pmu.c +++ b/arch/x86/kvm/svm/pmu.c @@ -33,18 +33,6 @@ enum index { INDEX_ERROR, }; -/* duplicated from amd_perfmon_event_map, K7 and above should work. */ -static struct kvm_event_hw_type_mapping amd_event_mapping[] = { - [0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES }, - [1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS }, - [2] = { 0x7d, 0x07, PERF_COUNT_HW_CACHE_REFERENCES }, - [3] = { 0x7e, 0x07, PERF_COUNT_HW_CACHE_MISSES }, - [4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS }, - [5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES }, - [6] = { 0xd0, 0x00, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND }, - [7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND }, -}; - static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type) { struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu); @@ -138,25 +126,9 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr, return &pmu->gp_counters[msr_to_index(msr)]; } -static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc) +static bool amd_hw_event_is_unavail(struct kvm_pmc *pmc) { - u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT; - u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8; - int i; - - /* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */ - if (WARN_ON(pmc_is_fixed(pmc))) - return PERF_COUNT_HW_MAX; - - for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++) - if (amd_event_mapping[i].eventsel == event_select - && amd_event_mapping[i].unit_mask == unit_mask) - break; - - if (i == ARRAY_SIZE(amd_event_mapping)) - return PERF_COUNT_HW_MAX; - - return amd_event_mapping[i].event_type; 
+	return false;
 }

 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
@@ -322,7 +294,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }

 struct kvm_pmu_ops amd_pmu_ops = {
-	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
+	.hw_event_is_unavail = amd_hw_event_is_unavail,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index d823fbe4e155..9b94674cc5fa 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -84,7 +84,7 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 	}
 }

-static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
+static bool intel_hw_event_is_unavail(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
@@ -98,15 +98,12 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)

 		/* disable event that reported as not present by cpuid */
 		if ((i < 7) && !(pmu->available_event_types & (1 << i)))
-			return PERF_COUNT_HW_MAX + 1;
+			return true;
 		break;
 	}

-	if (i == ARRAY_SIZE(intel_arch_events))
-		return PERF_COUNT_HW_MAX;
-
-	return intel_arch_events[i].event_type;
+	return false;
 }

 /* check if a PMC is enabled by comparing it with globl_ctrl bits.
 */
@@ -721,7 +718,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 }

 struct kvm_pmu_ops intel_pmu_ops = {
-	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
+	.hw_event_is_unavail = intel_hw_event_is_unavail,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,

From patchwork Wed Mar 2 11:13:33 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765824
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 11/12] KVM: x86/pmu: Protect kvm->arch.pmu_event_filter with SRCU
Date: Wed, 2 Mar 2022 19:13:33 +0800
Message-Id: <20220302111334.12689-12-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>

From: Like Xu

Similar to "kvm->arch.msr_filter", KVM should guarantee
that vCPUs will see either the previous filter or the new filter when
user space calls the KVM_SET_PMU_EVENT_FILTER ioctl while vCPUs are
running, so that guest PMU events with identical settings in both the
old and new filter have deterministic behavior.

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Like Xu
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/pmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 60f44252540a..17c61c990282 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -185,11 +185,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	struct kvm *kvm = pmc->vcpu->kvm;
 	bool allow_event = true;
 	__u64 key;
-	int idx;
+	int idx, srcu_idx;

 	if (kvm_x86_ops.pmu_ops->hw_event_is_unavail(pmc))
 		return false;

+	srcu_idx = srcu_read_lock(&kvm->srcu);
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
 		goto out;
@@ -212,6 +213,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	}

 out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return allow_event;
 }

From patchwork Wed Mar 2 11:13:34 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12765825
From: Like Xu
To: Paolo Bonzini
Cc: Jim Mattson, kvm@vger.kernel.org, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org, Like Xu
Subject: [PATCH v2 12/12] KVM: x86/pmu: Clear reserved bit PERF_CTL2[43] for AMD erratum 1292
Date: Wed, 2 Mar 2022 19:13:34 +0800
Message-Id: <20220302111334.12689-13-likexu@tencent.com>
In-Reply-To: <20220302111334.12689-1-likexu@tencent.com>

From: Like Xu

AMD Family 19h Models 00h-0Fh processors may experience sampling
inaccuracies that cause certain performance counters to overcount
retire-based events. To count the affected non-FP PMC events correctly,
a patched guest with a matching vCPU model would:

- Use Core::X86::Msr::PERF_CTL2 to count the events, and
- Program Core::X86::Msr::PERF_CTL2[43] to 1b, and
- Program Core::X86::Msr::PERF_CTL2[20] to 0b.

To support such AMD guests, KVM should stop treating bit 43 as reserved,
but only for counter #2. Treatment of all other cases remains unchanged.
The AMD hardware team has clarified that the conditions under which the
overcounting can happen are quite rare. This change may make PMU driver
developers who have read erratum #1292 less disappointed.
Reported-by: Jim Mattson
Signed-off-by: Like Xu
---
 arch/x86/kvm/svm/pmu.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 41c9b9e2aec2..05b4e4f2bb66 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -18,6 +18,20 @@
 #include "pmu.h"
 #include "svm.h"

+/*
+ * As a workaround for "Retire Based Events May Overcount" (erratum 1292),
+ * some patched guests may set PERF_CTL2[43] to 1b and PERF_CTL2[20] to 0b
+ * to count the affected non-FP PMC events correctly.
+ *
+ * Note, tests show that the counter difference before and after applying
+ * the workaround is not significant. The host schedules CTR2 indiscriminately.
+ */
+static inline bool vcpu_overcount_retire_events(struct kvm_vcpu *vcpu)
+{
+	return guest_cpuid_family(vcpu) == 0x19 &&
+	       guest_cpuid_model(vcpu) < 0x10;
+}
+
 enum pmu_type {
 	PMU_TYPE_COUNTER = 0,
 	PMU_TYPE_EVNTSEL,
@@ -224,6 +238,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	struct kvm_pmc *pmc;
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
+	u64 reserved_bits;

 	/* MSR_PERFCTRn */
 	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
@@ -236,7 +251,10 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	if (pmc) {
 		if (data == pmc->eventsel)
 			return 0;
-		if (!(data & pmu->reserved_bits)) {
+		reserved_bits = pmu->reserved_bits;
+		if (pmc->idx == 2 && vcpu_overcount_retire_events(vcpu))
+			reserved_bits &= ~BIT_ULL(43);
+		if (!(data & reserved_bits)) {
 			pmc->eventsel = data;
 			reprogram_counter(pmc);
 			return 0;