From patchwork Tue Apr 2 02:21:15 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13613249
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v5 1/4] perf/x86/amd: ensure amd_pmu_core_disable_all() is always inlined
Date: Mon, 1 Apr 2024 19:21:15 -0700
Message-ID: <20240402022118.1046049-2-andrii@kernel.org>
In-Reply-To: <20240402022118.1046049-1-andrii@kernel.org>
References: <20240402022118.1046049-1-andrii@kernel.org>

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of the hardware overflow event. So we
need to take care to minimize the number of branches and function
calls/returns on the path to freezing LBR, perturbing the LBR snapshot
as little as possible.

amd_pmu_core_disable_all() is one of the functions on this path, and it
is already marked as __always_inline. But it calls
amd_pmu_set_global_ctl(), which is marked as just inline. So, to
guarantee that no function call is generated anywhere along this path,
mark amd_pmu_set_global_ctl() as __always_inline as well.
Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 985ef3b47919..9b15afda0326 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -647,7 +647,7 @@ static void amd_pmu_cpu_dead(int cpu)
 	}
 }
 
-static inline void amd_pmu_set_global_ctl(u64 ctl)
+static __always_inline void amd_pmu_set_global_ctl(u64 ctl)
 {
 	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, ctl);
 }

From patchwork Tue Apr 2 02:21:16 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13613250
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v5 2/4] perf/x86/amd: avoid taking branches before disabling LBR
Date: Mon, 1 Apr 2024 19:21:16 -0700
Message-ID: <20240402022118.1046049-3-andrii@kernel.org>
In-Reply-To: <20240402022118.1046049-1-andrii@kernel.org>
References: <20240402022118.1046049-1-andrii@kernel.org>

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of the hardware overflow event. So we
need to take care to minimize the number of branches and function
calls/returns on the path to freezing LBR, perturbing the LBR snapshot
as little as possible.

As such, split out the LBR disabling logic from the sanity-checking
logic inside amd_pmu_lbr_disable_all().
This will ensure that no branches are taken before LBR is frozen in the
functionality added in the next patch. Use __always_inline to also
eliminate any possible function calls.

Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/lbr.c    |  9 +--------
 arch/x86/events/perf_event.h | 13 +++++++++++++
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 5149830c7c4f..33d0a45c0cd3 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -414,18 +414,11 @@ void amd_pmu_lbr_enable_all(void)
 void amd_pmu_lbr_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-	u64 dbg_ctl, dbg_extn_cfg;
 
 	if (!cpuc->lbr_users || !x86_pmu.lbr_nr)
 		return;
 
-	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
-	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
-
-	if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
-		rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
-		wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
-	}
+	__amd_pmu_lbr_disable();
 }
 
 __init int amd_pmu_lbr_init(void)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fb56518356ec..72b022a1e16c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1329,6 +1329,19 @@ void amd_pmu_lbr_enable_all(void);
 void amd_pmu_lbr_disable_all(void);
 int amd_pmu_lbr_hw_config(struct perf_event *event);
 
+static __always_inline void __amd_pmu_lbr_disable(void)
+{
+	u64 dbg_ctl, dbg_extn_cfg;
+
+	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
+	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+
+	if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
+		rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+		wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	}
+}
+
 #ifdef CONFIG_PERF_EVENTS_AMD_BRS
 #define AMD_FAM19H_BRS_EVENT 0xc4 /* RETIRED_TAKEN_BRANCH_INSTRUCTIONS */

From patchwork Tue Apr 2 02:21:17 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13613251
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v5 3/4] perf/x86/amd: support capturing LBR from software events
Date: Mon, 1 Apr 2024 19:21:17 -0700
Message-ID: <20240402022118.1046049-4-andrii@kernel.org>
In-Reply-To: <20240402022118.1046049-1-andrii@kernel.org>
References: <20240402022118.1046049-1-andrii@kernel.org>

Upstream commit c22ac2a3d4bd ("perf: Enable branch record for software
events") added the ability to capture LBR (Last Branch Records) on Intel
CPUs from inside a BPF program at pretty much any arbitrary point. This
is an extremely useful capability that makes it possible to figure out
otherwise hard-to-debug problems, because LBR snapshots are now
available based on application-defined conditions, not just
hardware-supported events.

retsnoop ([0]) is one such tool that takes huge advantage of this
functionality and has proven to be extremely useful in practice.

Now AMD Zen4 CPUs have support for similar LBR functionality, but the
necessary wiring inside the kernel is not yet set up. This patch seeks
to rectify that, following an approach similar to the original patch for
Intel CPUs. We implement an AMD-specific callback that is invoked
through the perf_snapshot_branch_stack static call.

Previous preparatory patches ensured that amd_pmu_core_disable_all() and
__amd_pmu_lbr_disable() will be completely inlined and will have no
branches, so LBR snapshot contamination will be minimized.
This was tested on an AMD Bergamo CPU and worked well when utilized from
the aforementioned retsnoop tool.

  [0] https://github.com/anakryiko/retsnoop

Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/core.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 9b15afda0326..1fc4ce44e743 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -907,6 +907,37 @@ static int amd_pmu_handle_irq(struct pt_regs *regs)
 	return amd_pmu_adjust_nmi_window(handled);
 }
 
+/*
+ * AMD-specific callback invoked through perf_snapshot_branch_stack static
+ * call, defined in include/linux/perf_event.h. See its definition for API
+ * details. It's up to caller to provide enough space in *entries* to fit all
+ * LBR records, otherwise returned result will be truncated to *cnt* entries.
+ */
+static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+	struct cpu_hw_events *cpuc;
+	unsigned long flags;
+
+	/*
+	 * The sequence of steps to freeze LBR should be completely inlined
+	 * and contain no branches to minimize contamination of LBR snapshot
+	 */
+	local_irq_save(flags);
+	amd_pmu_core_disable_all();
+	__amd_pmu_lbr_disable();
+
+	cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	amd_pmu_lbr_read();
+	cnt = min(cnt, x86_pmu.lbr_nr);
+	memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
+
+	amd_pmu_v2_enable_all(0);
+	local_irq_restore(flags);
+
+	return cnt;
+}
+
 static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1443,6 +1474,10 @@ static int __init amd_core_pmu_init(void)
 		static_call_update(amd_pmu_branch_reset, amd_pmu_lbr_reset);
 		static_call_update(amd_pmu_branch_add, amd_pmu_lbr_add);
 		static_call_update(amd_pmu_branch_del, amd_pmu_lbr_del);
+
+		/* Only support branch_stack snapshot on perfmon v2 */
+		if (x86_pmu.handle_irq == amd_pmu_v2_handle_irq)
+			static_call_update(perf_snapshot_branch_stack, amd_pmu_v2_snapshot_branch_stack);
 	} else if (!amd_brs_init()) {
 		/*
 		 * BRS requires special event constraints and flushing on ctxsw.

From patchwork Tue Apr 2 02:21:18 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13613252
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v5 4/4] perf/x86/amd: don't reject non-sampling events with configured LBR
Date: Mon, 1 Apr 2024 19:21:18 -0700
Message-ID: <20240402022118.1046049-5-andrii@kernel.org>
In-Reply-To: <20240402022118.1046049-1-andrii@kernel.org>
References: <20240402022118.1046049-1-andrii@kernel.org>

Now that it's possible to capture LBR on AMD CPUs from BPF at an
arbitrary point, there is no reason to artificially limit this feature
to just sampling events. So the corresponding check is removed. AFAIU,
there are no correctness implications of doing this (and it was possible
to bypass this check by just setting the perf_event's sample_period to 1
anyway, so it didn't guard all that much).
Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/lbr.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 33d0a45c0cd3..19c7b76e21bc 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -310,10 +310,6 @@ int amd_pmu_lbr_hw_config(struct perf_event *event)
 {
 	int ret = 0;
 
-	/* LBR is not recommended in counting mode */
-	if (!is_sampling_event(event))
-		return -EINVAL;
-
 	ret = amd_pmu_lbr_setup_filter(event);
 	if (!ret)
 		event->attach_state |= PERF_ATTACH_SCHED_CB;