From patchwork Fri Mar 10 10:53:42 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13169168
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Ravi Bangoria, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/5] KVM: x86/pmu: Emulate CTR overflow directly in kvm_pmu_handle_event()
Date: Fri, 10 Mar 2023 18:53:42 +0800
Message-Id: <20230310105346.12302-2-likexu@tencent.com>
In-Reply-To: <20230310105346.12302-1-likexu@tencent.com>

More and more vPMU emulation is deferred to kvm_pmu_handle_event(), as this reduces the overhead of repeated execution. reprogram_counter() is now only responsible for creating the required perf_event, and thus no longer handles the counter overflow part of emulated instruction events.
Prior to this change, pmc->prev_counter was always assigned after the pmc was enabled; to keep the same semantics, pmc->prev_counter still needs to be reset once it has taken effect.

No functional change intended.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a1a79b5f49d7..d1c89a6625a0 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -418,9 +418,6 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 	if (!event_is_allowed(pmc))
 		goto reprogram_complete;
 
-	if (pmc->counter < pmc->prev_counter)
-		__kvm_perf_overflow(pmc, false);
-
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
 		printk_once("kvm pmu: pin control bit is ignored\n");
 
@@ -458,6 +455,13 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 reprogram_complete:
 	clear_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->reprogram_pmi);
+}
+
+static inline void kvm_pmu_handle_pmc_overflow(struct kvm_pmc *pmc)
+{
+	if (pmc->counter < pmc->prev_counter)
+		__kvm_perf_overflow(pmc, false);
+
 	pmc->prev_counter = 0;
 }
 
@@ -475,6 +479,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 		}
 
 		reprogram_counter(pmc);
+		kvm_pmu_handle_pmc_overflow(pmc);
 	}
 
 	/*
From patchwork Fri Mar 10 10:53:43 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13169169
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Ravi Bangoria, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/5] KVM: x86/pmu: Add a helper to check if pmc has PEBS mode enabled
Date: Fri, 10 Mar 2023 18:53:43 +0800
Message-Id: <20230310105346.12302-3-likexu@tencent.com>
In-Reply-To: <20230310105346.12302-1-likexu@tencent.com>

Add a helper to check whether a pmc has PEBS mode enabled, so that new code can reuse it, and opportunistically drop a local pmu reference.

No functional change intended.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 3 +--
 arch/x86/kvm/pmu.h | 7 +++++++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d1c89a6625a0..01a6b7ffa9b1 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -191,7 +191,6 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 				 bool exclude_user, bool exclude_kernel,
 				 bool intr)
 {
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	struct perf_event *event;
 	struct perf_event_attr attr = {
 		.type = type,
@@ -203,7 +202,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
 		.exclude_kernel = exclude_kernel,
 		.config = config,
 	};
-	bool pebs = test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);
+	bool pebs = pebs_is_enabled(pmc);
 
 	attr.sample_period = get_sample_period(pmc, pmc->counter);
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index cff0651b030b..db4262fe8814 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -189,6 +189,13 @@ static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
 
+static inline bool pebs_is_enabled(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+
+	return test_bit(pmc->idx, (unsigned long *)&pmu->pebs_enable);
+}
+
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
 int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
From patchwork Fri Mar 10 10:53:44 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13169170
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Ravi Bangoria, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/5] KVM: x86/pmu: Move the overflow of a normal counter out of PMI context
Date: Fri, 10 Mar 2023 18:53:44 +0800
Message-Id: <20230310105346.12302-4-likexu@tencent.com>
In-Reply-To: <20230310105346.12302-1-likexu@tencent.com>

From the guest's point of view, the update of vPMU's global_status bit following a counter overflow is completely independent of whether the overflow is emulated in the host PMI context: guest counter overflow emulation depends only on whether pmc->counter has overflowed. Moreover, an overflow generated by an emulated instruction is already delayed and not handled in the PMI context. This logic can therefore be unified by reusing pmc->prev_counter for a normal counter. A PEBS counter is different: its buffer overflow irq still requires hardware to trigger a PMI.
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 01a6b7ffa9b1..81c7cc4ceadf 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -160,7 +160,10 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 	if (test_and_set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi))
 		return;
 
-	__kvm_perf_overflow(pmc, true);
+	if (pebs_is_enabled(pmc))
+		__kvm_perf_overflow(pmc, true);
+	else
+		pmc->prev_counter = pmc->counter;
 
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
From patchwork Fri Mar 10 10:53:45 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13169171
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Ravi Bangoria, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/5] KVM: x86/pmu: Reorder functions to reduce unnecessary declarations
Date: Fri, 10 Mar 2023 18:53:45 +0800
Message-Id: <20230310105346.12302-5-likexu@tencent.com>
In-Reply-To: <20230310105346.12302-1-likexu@tencent.com>

Since more emulation work is deferred to kvm_pmu_handle_event(), moving it to the end of pmu.c lets it call the functions defined before it, instead of piling up forward declarations just to keep the compiler happy. The same motivation applies to kvm_pmu_request_counter_reprogram(), as it is the trigger for any deferred emulation.

No functional change intended.

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 52 +++++++++++++++++++++++-----------------------
 arch/x86/kvm/pmu.h | 12 +++++------
 2 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 81c7cc4ceadf..2a0504732966 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -467,32 +467,6 @@ static inline void kvm_pmu_handle_pmc_overflow(struct kvm_pmc *pmc)
 	pmc->prev_counter = 0;
 }
 
-void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
-{
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	int bit;
-
-	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
-		struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
-
-		if (unlikely(!pmc)) {
-			clear_bit(bit, pmu->reprogram_pmi);
-			continue;
-		}
-
-		reprogram_counter(pmc);
-		kvm_pmu_handle_pmc_overflow(pmc);
-	}
-
-	/*
-	 * Unused perf_events are only released if the corresponding MSRs
-	 * weren't accessed during the last vCPU time slice. kvm_arch_sched_in
-	 * triggers KVM_REQ_PMU if cleanup is needed.
-	 */
-	if (unlikely(pmu->need_cleanup))
-		kvm_pmu_cleanup(vcpu);
-}
-
 /* check if idx is a valid index to access PMU */
 bool kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
@@ -847,3 +821,29 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	kfree(filter);
 	return r;
 }
+
+void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	int bit;
+
+	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
+		struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
+
+		if (unlikely(!pmc)) {
+			clear_bit(bit, pmu->reprogram_pmi);
+			continue;
+		}
+
+		reprogram_counter(pmc);
+		kvm_pmu_handle_pmc_overflow(pmc);
+	}
+
+	/*
+	 * Unused perf_events are only released if the corresponding MSRs
+	 * weren't accessed during the last vCPU time slice. kvm_arch_sched_in
+	 * triggers KVM_REQ_PMU if cleanup is needed.
+	 */
+	if (unlikely(pmu->need_cleanup))
+		kvm_pmu_cleanup(vcpu);
+}
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index db4262fe8814..a47b579667c6 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -48,6 +48,12 @@ static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 	return pmu->counter_bitmask[pmc->type];
 }
 
+static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
+{
+	set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
+	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
+}
+
 static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;
@@ -183,12 +189,6 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 			     KVM_PMC_MAX_FIXED);
 }
 
-static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
-{
-	set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
-	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
-}
-
 static inline bool pebs_is_enabled(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
From patchwork Fri Mar 10 10:53:46 2023
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 13169172
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Ravi Bangoria, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/5] KVM: x86/pmu: Hide guest counter updates from the VMRUN instruction
Date: Fri, 10 Mar 2023 18:53:46 +0800
Message-Id: <20230310105346.12302-6-likexu@tencent.com>
In-Reply-To: <20230310105346.12302-1-likexu@tencent.com>

When an AMD guest counts the (branch) instructions event, its vPMU should first subtract one from any relevant enabled (branch)-instructions counter, immediately before VMRUN and where it cannot be preempted, to offset the inevitable plus-one effect of the VMRUN instruction that immediately follows.
Based on a number of micro observations (also the reason why x86_64/pmu_event_filter_test fails on AMD Zen platforms), each VMRUN increments all hw-(branch)-instructions counters by 1, even if they are only enabled for guest code. This issue seriously skews the performance picture that guest developers build from (branch) instruction events.

If the current physical register value on the hardware is ~0x0, VMRUN triggers an overflow in the guest world right after entry. Although this cannot be avoided on mainstream released hardware, the resulting PMI (if configured) will not be incorrectly injected into the guest by vPMU, since the delayed injection mechanism for a normal counter overflow depends only on the change of pmc->counter values.

pmu_hide_vmrun() is called before each VMRUN; its overhead depends on the number of counters enabled by the guest and is negligible when no counters are in use.

Cc: Ravi Bangoria
Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h |  4 ++++
 arch/x86/kvm/pmu.c              | 18 ++++++++++++++++++
 arch/x86/kvm/pmu.h              | 24 +++++++++++++++++++++++-
 arch/x86/kvm/svm/pmu.c          |  1 +
 arch/x86/kvm/svm/svm.c          | 28 ++++++++++++++++++++++++++++
 5 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index adb92fc4d7c9..d6fcbf233cb3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -561,6 +561,10 @@ struct kvm_pmu {
 	 */
 	u64 host_cross_mapped_mask;
 
+	/* Flags to track any HW quirks that need to be fixed by vPMU. */
+	u64 quirk_flags;
+	DECLARE_BITMAP(hide_vmrun_pmc_idx, X86_PMC_IDX_MAX);
+
 	/*
 	 * The gate to release perf_events not marked in
 	 * pmc_in_use only once in a vcpu time slice.
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 2a0504732966..315dca021d57 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -254,6 +254,7 @@ static void pmc_pause_counter(struct kvm_pmc *pmc)
 	counter += perf_event_pause(pmc->perf_event, true);
 	pmc->counter = counter & pmc_bitmask(pmc);
 	pmc->is_paused = true;
+	kvm_mark_pmc_is_quirky(pmc);
 }
 
 static bool pmc_resume_counter(struct kvm_pmc *pmc)
@@ -822,6 +823,19 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
+static inline bool event_is_branch_instruction(struct kvm_pmc *pmc)
+{
+	return eventsel_match_perf_hw_id(pmc, PERF_COUNT_HW_INSTRUCTIONS) ||
+	       eventsel_match_perf_hw_id(pmc,
+					 PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+}
+
+static inline bool quirky_pmc_will_count_vmrun(struct kvm_pmc *pmc)
+{
+	return event_is_branch_instruction(pmc) && event_is_allowed(pmc) &&
+	       !static_call(kvm_x86_get_cpl)(pmc->vcpu);
+}
+
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -837,6 +851,10 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 
 		reprogram_counter(pmc);
 		kvm_pmu_handle_pmc_overflow(pmc);
+
+		if (vcpu_has_pmu_quirks(vcpu) &&
+		    quirky_pmc_will_count_vmrun(pmc))
+			set_bit(pmc->idx, pmu->hide_vmrun_pmc_idx);
 	}
 
 	/*
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index a47b579667c6..30f6f58f4c38 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -18,6 +18,9 @@
 #define VMWARE_BACKDOOR_PMC_REAL_TIME		0x10001
 #define VMWARE_BACKDOOR_PMC_APPARENT_TIME	0x10002
 
+#define X86_PMU_COUNT_VMRUN	BIT_ULL(0)
+#define X86_PMU_QUIRKS_MASK	X86_PMU_COUNT_VMRUN
+
 struct kvm_pmu_ops {
 	bool (*hw_event_available)(struct kvm_pmc *pmc);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
@@ -54,14 +57,33 @@ static inline void kvm_pmu_request_counter_reprogram(struct kvm_pmc *pmc)
 	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
 }
 
+static inline bool vcpu_has_pmu_quirks(struct kvm_vcpu *vcpu)
+{
+	return vcpu_to_pmu(vcpu)->quirk_flags & X86_PMU_QUIRKS_MASK;
+}
+
+/*
+ * The time to mark pmc is when the accumulation value returned
+ * by perf API based on a HW counter has just taken effect.
+ */
+static inline void kvm_mark_pmc_is_quirky(struct kvm_pmc *pmc)
+{
+	if (!vcpu_has_pmu_quirks(pmc->vcpu))
+		return;
+
+	kvm_pmu_request_counter_reprogram(pmc);
+}
+
 static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 {
 	u64 counter, enabled, running;
 
 	counter = pmc->counter;
-	if (pmc->perf_event && !pmc->is_paused)
+	if (pmc->perf_event && !pmc->is_paused) {
 		counter += perf_event_read_value(pmc->perf_event,
 						 &enabled, &running);
+		kvm_mark_pmc_is_quirky(pmc);
+	}
 
 	/* FIXME: Scaling needed? */
 	return counter & pmc_bitmask(pmc);
 }
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 5fa939e411d8..130991a97f22 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -187,6 +187,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->nr_arch_fixed_counters = 0;
 	pmu->global_status = 0;
 	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
+	pmu->quirk_flags |= X86_PMU_COUNT_VMRUN;
 }
 
 static void amd_pmu_init(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f41d96e638ef..f6b33d172481 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3919,6 +3919,31 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
+static void pmu_hide_vmrun(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	unsigned int i;
+
+	for_each_set_bit(i, pmu->hide_vmrun_pmc_idx, X86_PMC_IDX_MAX) {
+		clear_bit(i, pmu->hide_vmrun_pmc_idx);
+
+		/* AMD doesn't have fixed counters at now. */
+		if (i >= pmu->nr_arch_gp_counters)
+			continue;
+
+		/*
+		 * The prerequisite for fixing HW quirks is that there is indeed
+		 * HW working and perf has no chance to retrieve the counter.
+		 */
+		pmc = &pmu->gp_counters[i];
+		if (!pmc->perf_event || pmc->perf_event->hw.idx < 0)
+			continue;
+
+		pmc->counter--;
+	}
+}
+
 static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_intercepted)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3986,6 +4011,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	kvm_wait_lapic_expire(vcpu);
 
+	if (vcpu->kvm->arch.enable_pmu && vcpu_has_pmu_quirks(vcpu))
+		pmu_hide_vmrun(vcpu);
+
 	/*
 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
 	 * it's non-zero. Since vmentry is serialising on affected CPUs, there