From patchwork Tue Aug 23 09:32:14 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12951987
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 1/8] perf/x86/core: Completely disable guest PEBS via guest's global_ctrl
Date: Tue, 23 Aug 2022 17:32:14 +0800
Message-Id: <20220823093221.38075-2-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

When a guest PEBS counter is cross-mapped by a host counter, software
clears the corresponding bit in arr[global_ctrl].guest and expects the
hardware to perform the "from enabled to disabled" state change via the
msr_slot[] switch during the VMX transition.

In reality, if the user sets the counter overflow value small enough,
a tiny race window still opens in which the previously PEBS-enabled
counter can write cross-mapped PEBS records into the guest's PEBS
buffer: arr[global_ctrl].guest is prioritised (via the
switch_msr_special mechanism) and switched into the enabled state
while arr[pebs_enable].guest has not yet been switched.

Close this window by also clearing the invalid bits in
arr[global_ctrl].guest.

Fixes: 854250329c02 ("KVM: x86/pmu: Disable guest PEBS temporarily in two rare situations")
Signed-off-by: Like Xu
---
 arch/x86/events/intel/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2db93498ff71..75cdd11ab014 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4052,8 +4052,9 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 		/* Disable guest PEBS if host PEBS is enabled. */
 		arr[pebs_enable].guest = 0;
 	} else {
-		/* Disable guest PEBS for cross-mapped PEBS counters. */
+		/* Disable guest PEBS thoroughly for cross-mapped PEBS counters. */
 		arr[pebs_enable].guest &= ~kvm_pmu->host_cross_mapped_mask;
+		arr[global_ctrl].guest &= ~kvm_pmu->host_cross_mapped_mask;
 		/* Set hw GLOBAL_CTRL bits for PEBS counter when it runs for guest */
 		arr[global_ctrl].guest |= arr[pebs_enable].guest;
 	}
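As a quick sanity check of the masking invariant above, a minimal
standalone C sketch (userspace, not kernel code; the mask values are
invented for illustration): once both MSR images are masked, a
cross-mapped counter is neither PEBS-enabled nor globally enabled at
any point of the switch.

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t pebs_enable  = 0x0fULL; /* guest wants PEBS on counters 0-3 */
                uint64_t global_ctrl  = 0x0fULL; /* ...and has them globally enabled */
                uint64_t cross_mapped = 0x06ULL; /* counters 1 and 2 are cross-mapped */

                /* Before the fix: only pebs_enable was masked. */
                pebs_enable &= ~cross_mapped;
                /* The fix: mask global_ctrl as well... */
                global_ctrl &= ~cross_mapped;
                /* ...then re-enable the surviving PEBS counters, as the code does. */
                global_ctrl |= pebs_enable;

                /* Invariant: a cross-mapped counter is fully disabled. */
                assert((pebs_enable & cross_mapped) == 0);
                assert((global_ctrl & cross_mapped) == 0);
                printf("pebs_enable=%#llx global_ctrl=%#llx\n",
                       (unsigned long long)pebs_enable,
                       (unsigned long long)global_ctrl);
                return 0;
        }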
From patchwork Tue Aug 23 09:32:15 2022
X-Patchwork-Id: 12951988
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 2/8] KVM: x86/pmu: Avoid setting BIT_ULL(-1) to pmu->host_cross_mapped_mask
Date: Tue, 23 Aug 2022 17:32:15 +0800
Message-Id: <20220823093221.38075-3-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

In the extreme case of host counter multiplexing and contention, the
perf_event requested by the guest's PEBS counter may not be allocated
to any actual physical counter, in which case hw.idx is bookkept as -1,
and using it with BIT_ULL() results in an out-of-bounds bit when
building host_cross_mapped_mask.

Fixes: 854250329c02 ("KVM: x86/pmu: Disable guest PEBS temporarily in two rare situations")
Signed-off-by: Like Xu
---
 arch/x86/kvm/vmx/pmu_intel.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c399637a3a79..d595ff33d32d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -776,20 +776,20 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 {
 	struct kvm_pmc *pmc = NULL;
-	int bit;
+	int bit, hw_idx;
 
 	for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
 			 X86_PMC_IDX_MAX) {
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 
 		if (!pmc || !pmc_speculative_in_use(pmc) ||
-		    !intel_pmc_is_enabled(pmc))
+		    !intel_pmc_is_enabled(pmc) || !pmc->perf_event)
 			continue;
 
-		if (pmc->perf_event && pmc->idx != pmc->perf_event->hw.idx) {
-			pmu->host_cross_mapped_mask |=
-				BIT_ULL(pmc->perf_event->hw.idx);
-		}
+		hw_idx = pmc->perf_event->hw.idx;
+		/* make it a little less dependent on perf's exact behavior */
+		if (hw_idx != pmc->idx && hw_idx > -1)
+			pmu->host_cross_mapped_mask |= BIT_ULL(hw_idx);
 	}
 }
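The hazard being fixed is easy to see in a standalone C sketch
(userspace, not kernel code; the hw.idx values are hypothetical and
BIT_ULL is redefined locally). Without the "> -1" guard, BIT_ULL(-1)
is a shift by a negative amount: undefined behavior in C and an
out-of-bounds bit in practice.

        #include <stdint.h>
        #include <stdio.h>

        #define BIT_ULL(n) (1ULL << (n))

        static uint64_t build_mask(const int *hw_idx, const int *guest_idx, int n)
        {
                uint64_t mask = 0;

                for (int i = 0; i < n; i++) {
                        /* The guard from the patch: skip unassigned (-1) slots. */
                        if (hw_idx[i] != guest_idx[i] && hw_idx[i] > -1)
                                mask |= BIT_ULL(hw_idx[i]);
                }
                return mask;
        }

        int main(void)
        {
                int hw_idx[]    = { 0, -1, 3 };  /* counter 1 lost its hw slot */
                int guest_idx[] = { 0,  1, 2 };

                printf("host_cross_mapped_mask = %#llx\n",
                       (unsigned long long)build_mask(hw_idx, guest_idx, 3));
                return 0;
        }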
From patchwork Tue Aug 23 09:32:16 2022
X-Patchwork-Id: 12951989
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 3/8] KVM: x86/pmu: Don't generate PEBS records for emulated instructions
Date: Tue, 23 Aug 2022 17:32:16 +0800
Message-Id: <20220823093221.38075-4-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

KVM will increment an enabled counter for at least the INSTRUCTIONS or
BRANCH_INSTRUCTIONS hw event for any KVM-emulated instruction, and will
generate an emulated overflow interrupt when the counter overflows. In
theory the same should happen when a PEBS counter overflows, but the
underlying support (e.g. software injection of records in the irq
context, or a lazy approach) is currently missing. In this case, KVM
skips the injection of the BUFFER_OVF PMI (effectively dropping one
PEBS record) and lets the overflowed counter move on.

The loss of a single sample does not introduce a loss of accuracy, but
it is easily noticeable for certain specific instructions. This issue
is expected to be addressed along with the issue of PEBS cross-mapped
counters in a slow-path proposal.

Fixes: 79f3e3b58386 ("KVM: x86/pmu: Reprogram PEBS event to emulate guest PEBS counter")
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 02f9e4f245bd..390d697efde1 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -106,9 +106,19 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 		return;
 
 	if (pmc->perf_event && pmc->perf_event->attr.precise_ip) {
-		/* Indicate PEBS overflow PMI to guest. */
-		skip_pmi = __test_and_set_bit(GLOBAL_STATUS_BUFFER_OVF_BIT,
-					      (unsigned long *)&pmu->global_status);
+		if (!in_pmi) {
+			/*
+			 * TODO: KVM is currently _choosing_ to not generate records
+			 * for emulated instructions, avoiding BUFFER_OVF PMI when
+			 * there are no records. Strictly speaking, it should be done
+			 * as well in the right context to improve sampling accuracy.
+			 */
+			skip_pmi = true;
+		} else {
+			/* Indicate PEBS overflow PMI to guest. */
+			skip_pmi = __test_and_set_bit(GLOBAL_STATUS_BUFFER_OVF_BIT,
+						      (unsigned long *)&pmu->global_status);
+		}
 	} else {
 		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
 	}
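The new branch structure boils down to a small pure function. A
userspace model of the decision (a sketch, not kernel code):

        #include <stdbool.h>
        #include <stdio.h>

        /*
         * After this patch, a PEBS counter only raises a BUFFER_OVF PMI
         * from a real PMI context; overflow from emulated instructions
         * is silently skipped, and repeated real overflows keep the
         * set-once semantics of __test_and_set_bit().
         */
        static bool pebs_skip_pmi(bool in_pmi, bool buffer_ovf_already_set)
        {
                if (!in_pmi)
                        return true;            /* emulated path: no record, no PMI */
                return buffer_ovf_already_set;  /* real PMI: set-once semantics */
        }

        int main(void)
        {
                printf("emulated overflow      -> skip=%d\n", pebs_skip_pmi(false, false));
                printf("first real overflow    -> skip=%d\n", pebs_skip_pmi(true, false));
                printf("repeated real overflow -> skip=%d\n", pebs_skip_pmi(true, true));
                return 0;
        }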
From patchwork Tue Aug 23 09:32:17 2022
X-Patchwork-Id: 12951986
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 4/8] KVM: x86/pmu: Avoid using PEBS perf_events for normal counters
Date: Tue, 23 Aug 2022 17:32:17 +0800
Message-Id: <20220823093221.38075-5-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

The check logic in pmc_resume_counter() to determine whether a
perf_event is reusable is partial and flawed, especially when it
comes to a pseudocode sequence (contrived, but clearly valid) like:

- enable a counter and its PEBS bit
- enable global_ctrl
- run workload
- disable only the PEBS bit, leaving the global_ctrl bit enabled

In this corner case, a perf_event created for PEBS can be reused by
a normal counter before it has been released and recreated, and when
this normal counter overflows, it triggers a PEBS interrupt
(precise_ip != 0).

To address this issue, the reuse check has been revamped: KVM now
goes back through reprogram_counter() when any bit of the guest
PEBS_ENABLE MSR has changed, which is similar to what
global_ctrl_changed() does.

Fixes: 79f3e3b58386 ("KVM: x86/pmu: Reprogram PEBS event to emulate guest PEBS counter")
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c           |  4 ++--
 arch/x86/kvm/vmx/pmu_intel.c | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 390d697efde1..d9b9a0f0db17 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -237,8 +237,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 			      get_sample_period(pmc, pmc->counter)))
 		return false;
 
-	if (!test_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->pebs_enable) &&
-	    pmc->perf_event->attr.precise_ip)
+	if (test_bit(pmc->idx, (unsigned long *)&pmc_to_pmu(pmc)->pebs_enable) !=
+	    (!!pmc->perf_event->attr.precise_ip))
 		return false;
 
 	/* reuse perf_event to serve as pmc_reprogram_counter() does*/
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index d595ff33d32d..6242b0b81116 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,15 +68,11 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	}
 }
 
-/* function is called when global control register has been updated. */
-static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
+static void reprogram_counters(struct kvm_pmu *pmu, u64 diff)
 {
 	int bit;
-	u64 diff = pmu->global_ctrl ^ data;
 	struct kvm_pmc *pmc;
 
-	pmu->global_ctrl = data;
-
 	for_each_set_bit(bit, (unsigned long *)&diff, X86_PMC_IDX_MAX) {
 		pmc = intel_pmc_idx_to_pmc(pmu, bit);
 		if (pmc)
@@ -397,7 +393,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	struct kvm_pmc *pmc;
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
-	u64 reserved_bits;
+	u64 reserved_bits, diff;
 
 	switch (msr) {
 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
@@ -418,7 +414,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->global_ctrl == data)
 			return 0;
 		if (kvm_valid_perf_global_ctrl(pmu, data)) {
-			global_ctrl_changed(pmu, data);
+			diff = pmu->global_ctrl ^ data;
+			pmu->global_ctrl = data;
+			reprogram_counters(pmu, diff);
 			return 0;
 		}
 		break;
@@ -433,7 +431,9 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (pmu->pebs_enable == data)
 			return 0;
 		if (!(data & pmu->pebs_enable_mask)) {
+			diff = pmu->pebs_enable ^ data;
 			pmu->pebs_enable = data;
+			reprogram_counters(pmu, diff);
 			return 0;
 		}
 		break;
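The reprogram_counters() pattern (XOR the old and new MSR images, then
walk only the changed bits) can be illustrated with a standalone C
sketch (userspace, not kernel code; __builtin_ctzll is a GCC/Clang
builtin standing in for for_each_set_bit()):

        #include <stdint.h>
        #include <stdio.h>

        /* Reprogram only the counters whose enable bit actually flipped. */
        static void reprogram_changed(uint64_t old, uint64_t new)
        {
                uint64_t diff = old ^ new;

                while (diff) {
                        int bit = __builtin_ctzll(diff);
                        printf("reprogram counter %d\n", bit);
                        diff &= diff - 1;       /* clear the lowest set bit */
                }
        }

        int main(void)
        {
                /*
                 * e.g. the guest flips PEBS_ENABLE from 0b0011 to 0b0110:
                 * counters 0 and 2 changed, counter 1 did not.
                 */
                reprogram_changed(0x3, 0x6);
                return 0;
        }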
From patchwork Tue Aug 23 09:32:18 2022
X-Patchwork-Id: 12951990
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 5/8] KVM: x86/pmu: Defer reprogram_counter() to kvm_pmu_handle_event()
Date: Tue, 23 Aug 2022 17:32:18 +0800
Message-Id: <20220823093221.38075-6-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

During the KVM trap between vm-exit and vm-entry, requests from
different sources may try to create one or more perf_events via
reprogram_counter(), and later requests can undo the work of earlier
ones, in particular through repeated calls into perf subsystem
interfaces. These repetitive calls can be omitted because only the
final state of the perf_event, and the hardware resources it occupies,
takes effect for the guest right before vm-entry.

To realize this optimization, KVM marks the reprogramming requirement
via an inline version of reprogram_counter() and defers the actual
work with the help of the vcpu KVM_REQ_PMU request.

Opportunistically update related comments to avoid misunderstandings.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              | 16 +++++++++-------
 arch/x86/kvm/pmu.h              |  6 +++++-
 3 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..4e568a7ef464 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -493,6 +493,7 @@ struct kvm_pmc {
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
 	/*
+	 * only for creating or reusing perf_event,
 	 * eventsel value for general purpose counters,
 	 * ctrl value for fixed counters.
 	 */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d9b9a0f0db17..6940cbeee54d 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -101,7 +101,7 @@ static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	bool skip_pmi = false;
 
-	/* Ignore counters that have been reprogrammed already. */
+	/* Ignore counters that have not been reprogrammed. */
 	if (test_and_set_bit(pmc->idx, pmu->reprogram_pmi))
 		return;
 
@@ -293,7 +293,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	return allow_event;
 }
 
-void reprogram_counter(struct kvm_pmc *pmc)
+static void __reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 	u64 eventsel = pmc->eventsel;
@@ -335,7 +335,6 @@ void reprogram_counter(struct kvm_pmc *pmc)
 			      !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
 			      eventsel & ARCH_PERFMON_EVENTSEL_INT);
 }
-EXPORT_SYMBOL_GPL(reprogram_counter);
 
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 {
@@ -345,11 +344,12 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
 		struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
 
-		if (unlikely(!pmc || !pmc->perf_event)) {
+		if (unlikely(!pmc)) {
 			clear_bit(bit, pmu->reprogram_pmi);
 			continue;
 		}
-		reprogram_counter(pmc);
+
+		__reprogram_counter(pmc);
 	}
 
 	/*
@@ -527,7 +527,7 @@ static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 	prev_count = pmc->counter;
 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
 
-	reprogram_counter(pmc);
+	__reprogram_counter(pmc);
 	if (pmc->counter < prev_count)
 		__kvm_perf_overflow(pmc, false);
 }
@@ -542,7 +542,9 @@ static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
 static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 {
 	bool select_os, select_user;
-	u64 config = pmc->current_config;
+	u64 config = pmc_is_gp(pmc) ? pmc->eventsel :
+		(u64)fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
+				      pmc->idx - INTEL_PMC_IDX_FIXED);
 
 	if (pmc_is_gp(pmc)) {
 		select_os = config & ARCH_PERFMON_EVENTSEL_OS;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 5cc5721f260b..d193d1dc6de0 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -183,7 +183,11 @@ static inline void kvm_init_pmu_capability(void)
 					     KVM_PMC_MAX_FIXED);
 }
 
-void reprogram_counter(struct kvm_pmc *pmc);
+static inline void reprogram_counter(struct kvm_pmc *pmc)
+{
+	__set_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
+	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
+}
 
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
 void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
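The deferral scheme is a classic request-coalescing pattern. A
standalone C sketch (userspace, not kernel code; the two globals stand
in for pmu->reprogram_pmi and KVM_REQ_PMU) shows how repeated requests
collapse into a single reprogramming pass before vm-entry:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        static uint64_t reprogram_pmi;  /* stands in for pmu->reprogram_pmi */
        static bool kvm_req_pmu;        /* stands in for KVM_REQ_PMU */

        /* Cheap marker, callable from any context. */
        static void reprogram_counter(int idx)
        {
                reprogram_pmi |= 1ULL << idx;
                kvm_req_pmu = true;
        }

        /* Expensive work, run once right before vm-entry. */
        static void kvm_pmu_handle_event(void)
        {
                while (reprogram_pmi) {
                        int bit = __builtin_ctzll(reprogram_pmi);
                        printf("actually reprogramming counter %d\n", bit);
                        reprogram_pmi &= reprogram_pmi - 1;
                }
                kvm_req_pmu = false;
        }

        int main(void)
        {
                reprogram_counter(2);
                reprogram_counter(2);   /* repeated requests coalesce */
                reprogram_counter(5);
                if (kvm_req_pmu)
                        kvm_pmu_handle_event(); /* counters 2 and 5, once each */
                return 0;
        }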
From patchwork Tue Aug 23 09:32:19 2022
X-Patchwork-Id: 12951992
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Wanpeng Li
Subject: [PATCH RESEND v2 6/8] KVM: x86/pmu: Defer counter emulated overflow via pmc->stale_counter
Date: Tue, 23 Aug 2022 17:32:19 +0800
Message-Id: <20220823093221.38075-7-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

There are contextual restrictions on the functions that can be called
in the *_exit_handlers_fastpath path; for example, calling
pmc_reprogram_counter() from there triggers a host complaint like:

[*] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:580
[*] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 2981888, name: CPU 15/KVM
[*] preempt_count: 1, expected: 0
[*] RCU nest depth: 0, expected: 0
[*] INFO: lockdep is turned off.
[*] irq event stamp: 0
[*] hardirqs last enabled at (0): [<0000000000000000>] 0x0
[*] hardirqs last disabled at (0): [] copy_process+0x146a/0x62d0
[*] softirqs last enabled at (0): [] copy_process+0x14a9/0x62d0
[*] softirqs last disabled at (0): [<0000000000000000>] 0x0
[*] Preemption disabled at:
[*] [] vcpu_enter_guest+0x1001/0x3dc0 [kvm]
[*] CPU: 17 PID: 2981888 Comm: CPU 15/KVM Kdump: 5.19.0-rc1-g239111db364c-dirty #2
[*] Call Trace:
[*]
[*]  dump_stack_lvl+0x6c/0x9b
[*]  __might_resched.cold+0x22e/0x297
[*]  __mutex_lock+0xc0/0x23b0
[*]  perf_event_ctx_lock_nested+0x18f/0x340
[*]  perf_event_pause+0x1a/0x110
[*]  reprogram_counter+0x2af/0x1490 [kvm]
[*]  kvm_pmu_trigger_event+0x429/0x950 [kvm]
[*]  kvm_skip_emulated_instruction+0x48/0x90 [kvm]
[*]  handle_fastpath_set_msr_irqoff+0x349/0x3b0 [kvm]
[*]  vmx_vcpu_run+0x268e/0x3b80 [kvm_intel]
[*]  vcpu_enter_guest+0x1d22/0x3dc0 [kvm]

A new stale_counter field is introduced to keep this part of the
semantics invariant. It records the current counter value, which is
used later in kvm_pmu_handle_event() to determine whether to inject an
emulated overflow interrupt, given that the internal count from the
perf_event may not have been added to pmc->counter in time, or that
the guest may directly update the value of a running counter.

Opportunistically shrink sizeof(struct kvm_pmc) a bit.

Suggested-by: Wanpeng Li
Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h |  5 +++--
 arch/x86/kvm/pmu.c              | 15 ++++++++-------
 arch/x86/kvm/svm/pmu.c          |  2 +-
 arch/x86/kvm/vmx/pmu_intel.c    |  4 ++--
 4 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4e568a7ef464..ffd982bf015d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -488,7 +488,10 @@ enum pmc_type {
 struct kvm_pmc {
 	enum pmc_type type;
 	u8 idx;
+	bool is_paused;
+	bool intr;
 	u64 counter;
+	u64 stale_counter;
 	u64 eventsel;
 	struct perf_event *perf_event;
 	struct kvm_vcpu *vcpu;
@@ -498,8 +501,6 @@ struct kvm_pmc {
 	 * ctrl value for fixed counters.
 	 */
 	u64 current_config;
-	bool is_paused;
-	bool intr;
 };
 
 #define KVM_PMC_MAX_FIXED 3
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 6940cbeee54d..45d062cb1dd5 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -350,6 +350,12 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 		}
 
 		__reprogram_counter(pmc);
+
+		if (pmc->stale_counter) {
+			if (pmc->counter < pmc->stale_counter)
+				__kvm_perf_overflow(pmc, false);
+			pmc->stale_counter = 0;
+		}
 	}
 
 	/*
@@ -522,14 +528,9 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
 {
-	u64 prev_count;
-
-	prev_count = pmc->counter;
+	pmc->stale_counter = pmc->counter;
 	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
-
-	__reprogram_counter(pmc);
-	if (pmc->counter < prev_count)
-		__kvm_perf_overflow(pmc, false);
+	reprogram_counter(pmc);
 }
 
 static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index f24613a108c5..e9c66dd659a6 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -290,7 +290,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
-		pmc->counter = pmc->eventsel = 0;
+		pmc->counter = pmc->stale_counter = pmc->eventsel = 0;
 	}
 }
 
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 6242b0b81116..42b591755010 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -647,14 +647,14 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
 		pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
-		pmc->counter = pmc->eventsel = 0;
+		pmc->counter = pmc->stale_counter = pmc->eventsel = 0;
 	}
 
 	for (i = 0; i < KVM_PMC_MAX_FIXED; i++) {
 		pmc = &pmu->fixed_counters[i];
 
 		pmc_stop_counter(pmc);
-		pmc->counter = 0;
+		pmc->counter = pmc->stale_counter = 0;
 	}
 
 	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;
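The stale_counter bookkeeping relies on modular wraparound to detect
the overflow later. A standalone C sketch of the detection (userspace,
not kernel code; a 48-bit counter width is assumed for illustration):

        #include <stdint.h>
        #include <stdio.h>

        /* 48-bit counter, as on many x86 PMUs */
        static const uint64_t BITMASK = (1ULL << 48) - 1;

        int main(void)
        {
                uint64_t counter = BITMASK;     /* counter at its maximum value */
                uint64_t stale_counter;

                /* kvm_pmu_incr_counter() after this patch: remember the old value... */
                stale_counter = counter;
                counter = (counter + 1) & BITMASK;  /* ...bump with wraparound... */

                /* ...and later, in kvm_pmu_handle_event(), detect the overflow. */
                if (counter < stale_counter)
                        printf("emulated overflow: %#llx -> %#llx\n",
                               (unsigned long long)stale_counter,
                               (unsigned long long)counter);
                return 0;
        }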
From patchwork Tue Aug 23 09:32:20 2022
X-Patchwork-Id: 12951993
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 7/8] KVM: x86/svm/pmu: Direct access pmu->gp_counter[] to implement amd_*_to_pmc()
Date: Tue, 23 Aug 2022 17:32:20 +0800
Message-Id: <20220823093221.38075-8-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

AMD only has gp counters, whose corresponding vPMCs are initialised
and stored in pmu->gp_counters[] in order of idx, so this array can be
accessed directly for any valid pmc->idx, without any help from other
interfaces. amd_rdpmc_ecx_to_pmc() can now reuse this code quite
naturally.

Opportunistically apply array_index_nospec() to reduce the attack
surface of speculative execution, and remove the dead code.

Signed-off-by: Like Xu
---
 arch/x86/kvm/svm/pmu.c | 41 +++++------------------------------------
 1 file changed, 5 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index e9c66dd659a6..e57eb0555a04 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -33,23 +33,6 @@ enum index {
 	INDEX_ERROR,
 };
 
-static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
-{
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
-
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
-		if (type == PMU_TYPE_COUNTER)
-			return MSR_F15H_PERF_CTR;
-		else
-			return MSR_F15H_PERF_CTL;
-	} else {
-		if (type == PMU_TYPE_COUNTER)
-			return MSR_K7_PERFCTR0;
-		else
-			return MSR_K7_EVNTSEL0;
-	}
-}
-
 static enum index msr_to_index(u32 msr)
 {
 	switch (msr) {
@@ -141,18 +124,12 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 
 static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	unsigned int num_counters = pmu->nr_arch_gp_counters;
 
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
-		/*
-		 * The idx is contiguous. The MSRs are not. The counter MSRs
-		 * are interleaved with the event select MSRs.
-		 */
-		pmc_idx *= 2;
-	}
+	if (pmc_idx >= num_counters)
+		return NULL;
 
-	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
+	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
 }
 
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
@@ -168,15 +145,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	unsigned int idx, u64 *mask)
 {
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	struct kvm_pmc *counters;
-
-	idx &= ~(3u << 30);
-	if (idx >= pmu->nr_arch_gp_counters)
-		return NULL;
-	counters = pmu->gp_counters;
-
-	return &counters[idx];
+	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
 }
 
 static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
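The resulting helper is just a bounds check plus a direct array
access. A standalone C sketch (userspace, not kernel code; a plain
bounds check models the logic, while the kernel's array_index_nospec()
additionally clamps the index under speculation):

        #include <stdio.h>

        struct kvm_pmc { int idx; };

        static struct kvm_pmc *idx_to_pmc(struct kvm_pmc *counters,
                                          unsigned int num_counters, int pmc_idx)
        {
                /* Reject negative and too-large indices before indexing. */
                if (pmc_idx < 0 || (unsigned int)pmc_idx >= num_counters)
                        return NULL;
                return &counters[pmc_idx];
        }

        int main(void)
        {
                struct kvm_pmc gp_counters[6] = { {0}, {1}, {2}, {3}, {4}, {5} };

                printf("idx 3 -> %p\n", (void *)idx_to_pmc(gp_counters, 6, 3));
                printf("idx 9 -> %p\n", (void *)idx_to_pmc(gp_counters, 6, 9)); /* NULL */
                return 0;
        }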
From patchwork Tue Aug 23 09:32:21 2022
X-Patchwork-Id: 12951991
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: Jim Mattson, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH RESEND v2 8/8] KVM: x86/svm/pmu: Rewrite get_gp_pmc_amd() for more counters scalability
Date: Tue, 23 Aug 2022 17:32:21 +0800
Message-Id: <20220823093221.38075-9-likexu@tencent.com>
In-Reply-To: <20220823093221.38075-1-likexu@tencent.com>
References: <20220823093221.38075-1-likexu@tencent.com>

From: Like Xu

If the number of AMD gp counters continues to grow, the code will
become very clumsy and the switch-case design of the inline
get_gp_pmc_amd() will also bloat the kernel text size.

The target code is taught to manage two groups of MSRs, each
representing a different version of the AMD PMU counter MSRs. The MSR
addresses within each group are contiguous, with no holes, and there
is no intersection between the two address ranges, but they are
functionally discrete by design, like this:

[Group A : All counter MSRs are tightly bound to all event select MSRs ]

  MSR_K7_EVNTSEL0    0xc0010000
  MSR_K7_EVNTSELi    0xc0010000 + i
  ...
  MSR_K7_EVNTSEL3    0xc0010003
  MSR_K7_PERFCTR0    0xc0010004
  MSR_K7_PERFCTRi    0xc0010004 + i
  ...
  MSR_K7_PERFCTR3    0xc0010007

[Group B : The counter MSRs are interleaved with the event select MSRs ]

  MSR_F15H_PERF_CTL0 0xc0010200
  MSR_F15H_PERF_CTR0 (0xc0010200 + 1)
  ...
  MSR_F15H_PERF_CTLi (0xc0010200 + 2 * i)
  MSR_F15H_PERF_CTRi (0xc0010200 + 2 * i + 1)
  ...
  MSR_F15H_PERF_CTL5 (0xc0010200 + 2 * 5)
  MSR_F15H_PERF_CTR5 (0xc0010200 + 2 * 5 + 1)

Rewrite get_gp_pmc_amd() in this way: first determine which group of
registers is accessed, then check whether the accessed MSR matches the
requested type, applying a different scaling ratio for each group, and
finally compute the pmc_idx to pass to amd_pmc_idx_to_pmc().

Signed-off-by: Like Xu
---
 arch/x86/kvm/svm/pmu.c | 85 +++++++++---------------------------------
 1 file changed, 17 insertions(+), 68 deletions(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index e57eb0555a04..c7ff6a910679 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -23,90 +23,49 @@ enum pmu_type {
 	PMU_TYPE_EVNTSEL,
 };
 
-enum index {
-	INDEX_ZERO = 0,
-	INDEX_ONE,
-	INDEX_TWO,
-	INDEX_THREE,
-	INDEX_FOUR,
-	INDEX_FIVE,
-	INDEX_ERROR,
-};
-
-static enum index msr_to_index(u32 msr)
+static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTR0:
-	case MSR_K7_EVNTSEL0:
-	case MSR_K7_PERFCTR0:
-		return INDEX_ZERO;
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_K7_EVNTSEL1:
-	case MSR_K7_PERFCTR1:
-		return INDEX_ONE;
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_K7_EVNTSEL2:
-	case MSR_K7_PERFCTR2:
-		return INDEX_TWO;
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_K7_EVNTSEL3:
-	case MSR_K7_PERFCTR3:
-		return INDEX_THREE;
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTR4:
-		return INDEX_FOUR;
-	case MSR_F15H_PERF_CTL5:
-	case MSR_F15H_PERF_CTR5:
-		return INDEX_FIVE;
-	default:
-		return INDEX_ERROR;
-	}
+	unsigned int num_counters = pmu->nr_arch_gp_counters;
+
+	if (pmc_idx >= num_counters)
+		return NULL;
+
+	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
 }
 
 static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 					     enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	unsigned int idx;
 
 	if (!vcpu->kvm->arch.enable_pmu)
 		return NULL;
 
 	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTL5:
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
 			return NULL;
-		fallthrough;
+		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);
+		if ((msr == (MSR_F15H_PERF_CTL0 + 2 * idx)) !=
+		    (type == PMU_TYPE_EVNTSEL))
+			return NULL;
+		break;
 	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
 		if (type != PMU_TYPE_EVNTSEL)
 			return NULL;
+		idx = msr - MSR_K7_EVNTSEL0;
 		break;
-	case MSR_F15H_PERF_CTR0:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_F15H_PERF_CTR4:
-	case MSR_F15H_PERF_CTR5:
-		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
-			return NULL;
-		fallthrough;
 	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
 		if (type != PMU_TYPE_COUNTER)
 			return NULL;
+		idx = msr - MSR_K7_PERFCTR0;
 		break;
 	default:
 		return NULL;
 	}
 
-	return &pmu->gp_counters[msr_to_index(msr)];
+	return amd_pmc_idx_to_pmc(pmu, idx);
 }
 
 static bool amd_hw_event_available(struct kvm_pmc *pmc)
@@ -122,16 +81,6 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 	return true;
 }
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-	unsigned int num_counters = pmu->nr_arch_gp_counters;
-
-	if (pmc_idx >= num_counters)
-		return NULL;
-
-	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
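The group-B index arithmetic in the rewritten get_gp_pmc_amd() can be
verified with a standalone C sketch (userspace, not kernel code; the
MSR constants are taken from the commit message above):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MSR_K7_EVNTSEL0    0xc0010000u
        #define MSR_K7_PERFCTR0    0xc0010004u
        #define MSR_F15H_PERF_CTL0 0xc0010200u

        int main(void)
        {
                /*
                 * Group B interleaves CTL/CTR pairs: the counter index is
                 * (msr - base) / 2, and an even offset means an event
                 * select (CTL) register, exactly the check in the patch.
                 */
                for (uint32_t msr = MSR_F15H_PERF_CTL0;
                     msr < MSR_F15H_PERF_CTL0 + 12; msr++) {
                        unsigned int idx = (msr - MSR_F15H_PERF_CTL0) / 2;
                        bool is_evntsel = (msr == MSR_F15H_PERF_CTL0 + 2 * idx);

                        printf("msr %#x -> idx %u, %s\n",
                               (unsigned int)msr, idx, is_evntsel ? "CTL" : "CTR");
                }

                /* Group A is flat: idx = msr - base for both ranges. */
                printf("K7 idx for PERFCTR2 = %u\n",
                       0xc0010006u - MSR_K7_PERFCTR0);
                return 0;
        }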