From patchwork Sun Mar 31 04:18:27 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13611803
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v4 1/4] perf/x86/amd: ensure amd_pmu_core_disable_all() is always inlined
Date: Sat, 30 Mar 2024 21:18:27 -0700
Message-ID: <20240331041830.2806741-2-andrii@kernel.org>
In-Reply-To: <20240331041830.2806741-1-andrii@kernel.org>
References: <20240331041830.2806741-1-andrii@kernel.org>

In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we
need to take care to minimize the number of branches and function
calls/returns on the path to freezing LBR, altering the LBR snapshot as
little as possible.

amd_pmu_core_disable_all() is one of the functions on this path, and it
is already marked as __always_inline. But it calls
amd_pmu_set_global_ctl(), which is marked as just inline.
So, to guarantee that no function call is generated anywhere on this
path, mark amd_pmu_set_global_ctl() as __always_inline as well.

Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index aec16e581f5b..c5bcbc87d057 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -618,7 +618,7 @@ static void amd_pmu_cpu_dead(int cpu)
 	}
 }
 
-static inline void amd_pmu_set_global_ctl(u64 ctl)
+static __always_inline void amd_pmu_set_global_ctl(u64 ctl)
 {
 	wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, ctl);
 }

From patchwork Sun Mar 31 04:18:28 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13611804
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v4 2/4] perf/x86/amd: avoid taking branches before disabling LBR
Date: Sat, 30 Mar 2024 21:18:28 -0700
Message-ID: <20240331041830.2806741-3-andrii@kernel.org>
In-Reply-To: <20240331041830.2806741-1-andrii@kernel.org>
References: <20240331041830.2806741-1-andrii@kernel.org>
In the following patches we will enable LBR capture on AMD CPUs at an
arbitrary point in time, which means that LBR recording won't be frozen
by hardware automatically as part of a hardware overflow event. So we
need to take care to minimize the number of branches and function
calls/returns on the path to freezing LBR, altering the LBR snapshot as
little as possible.

As such, split the LBR disabling logic out of the sanity-checking logic
inside amd_pmu_lbr_disable_all(). This ensures that no branches are
taken before LBR is frozen in the functionality added in the next patch.
Use __always_inline to also eliminate any possible function calls.

Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/lbr.c    |  7 +------
 arch/x86/events/perf_event.h | 11 +++++++++++
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 4a1e600314d5..0e4de028590d 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -412,16 +412,11 @@ void amd_pmu_lbr_enable_all(void)
 void amd_pmu_lbr_disable_all(void)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-	u64 dbg_ctl, dbg_extn_cfg;
 
 	if (!cpuc->lbr_users || !x86_pmu.lbr_nr)
 		return;
 
-	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
-	rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
-
-	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
-	wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+	__amd_pmu_lbr_disable();
 }
 
 __init int amd_pmu_lbr_init(void)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fb56518356ec..4dddf0a7e81e 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1329,6 +1329,17 @@ void amd_pmu_lbr_enable_all(void);
 void amd_pmu_lbr_disable_all(void);
 int amd_pmu_lbr_hw_config(struct perf_event *event);
 
+static __always_inline void __amd_pmu_lbr_disable(void)
+{
+	u64 dbg_ctl, dbg_extn_cfg;
+
+	rdmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
+	rdmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
+
+	wrmsrl(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+	wrmsrl(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+}
+
 #ifdef CONFIG_PERF_EVENTS_AMD_BRS
 
 #define AMD_FAM19H_BRS_EVENT 0xc4 /* RETIRED_TAKEN_BRANCH_INSTRUCTIONS */

From patchwork Sun Mar 31 04:18:29 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13611805
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v4 3/4] perf/x86/amd: support capturing LBR from software events
Date: Sat, 30 Mar 2024 21:18:29 -0700
Message-ID: <20240331041830.2806741-4-andrii@kernel.org>
In-Reply-To: <20240331041830.2806741-1-andrii@kernel.org>
References: <20240331041830.2806741-1-andrii@kernel.org>

Upstream commit c22ac2a3d4bd ("perf: Enable branch record for software
events") added the ability to capture LBR (Last Branch Records) on Intel
CPUs from inside a BPF program at pretty much any arbitrary point. This
is an extremely useful capability that makes it possible to figure out
otherwise hard-to-debug problems, because LBR snapshots become available
based on application-defined conditions, not just hardware-supported
events. retsnoop ([0]) is one tool that takes full advantage of this
functionality and has proven extremely useful in practice.

AMD Zen4 CPUs now have support for similar LBR functionality, but the
necessary wiring inside the kernel is not yet set up. This patch
rectifies that, following a similar approach to the original patch for
Intel CPUs. We implement an AMD-specific callback that is invoked
through the perf_snapshot_branch_stack static call.

The previous preparatory patches ensured that amd_pmu_core_disable_all()
and __amd_pmu_lbr_disable() are completely inlined and contain no
branches, so contamination of the LBR snapshot is minimized.

This was tested on an AMD Bergamo CPU and worked well when used from the
aforementioned retsnoop tool.
[0] https://github.com/anakryiko/retsnoop

Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/core.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index c5bcbc87d057..ed53ce091664 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -878,6 +878,37 @@ static int amd_pmu_handle_irq(struct pt_regs *regs)
 	return amd_pmu_adjust_nmi_window(handled);
 }
 
+/*
+ * AMD-specific callback invoked through perf_snapshot_branch_stack static
+ * call, defined in include/linux/perf_event.h. See its definition for API
+ * details. It's up to caller to provide enough space in *entries* to fit all
+ * LBR records, otherwise returned result will be truncated to *cnt* entries.
+ */
+static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+	struct cpu_hw_events *cpuc;
+	unsigned long flags;
+
+	/*
+	 * The sequence of steps to freeze LBR should be completely inlined
+	 * and contain no branches to minimize contamination of LBR snapshot
+	 */
+	local_irq_save(flags);
+	amd_pmu_core_disable_all();
+	__amd_pmu_lbr_disable();
+
+	cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	amd_pmu_lbr_read();
+	cnt = min(cnt, x86_pmu.lbr_nr);
+	memcpy(entries, cpuc->lbr_entries, sizeof(struct perf_branch_entry) * cnt);
+
+	amd_pmu_v2_enable_all(0);
+	local_irq_restore(flags);
+
+	return cnt;
+}
+
 static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1414,6 +1445,10 @@ static int __init amd_core_pmu_init(void)
 		static_call_update(amd_pmu_branch_reset, amd_pmu_lbr_reset);
 		static_call_update(amd_pmu_branch_add, amd_pmu_lbr_add);
 		static_call_update(amd_pmu_branch_del, amd_pmu_lbr_del);
+
+		/* Only support branch_stack snapshot on perfmon v2 */
+		if (x86_pmu.handle_irq == amd_pmu_v2_handle_irq)
+			static_call_update(perf_snapshot_branch_stack, amd_pmu_v2_snapshot_branch_stack);
 	} else if (!amd_brs_init()) {
 		/*
 		 * BRS requires special event constraints and flushing on ctxsw.
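For context, the perf_snapshot_branch_stack static call wired up above is
consumed from BPF via the bpf_get_branch_snapshot() helper, which is how
tools like retsnoop use it. Below is a minimal, hypothetical BPF-side
sketch, not part of this series: the attach point, buffer size, and
program name are illustrative, and it assumes an LBR-configured perf
event is already active on the CPU (so cpuc->lbr_users is non-zero and
the LBR hardware is actually recording).

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical BPF-side consumer sketch; not part of this series. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define MAX_LBR_ENTRIES 32	/* illustrative; actual depth is x86_pmu.lbr_nr */

struct perf_branch_entry lbr[MAX_LBR_ENTRIES];

SEC("fentry/do_nanosleep")	/* any fentry/kprobe attach point would do */
int BPF_PROG(snapshot_lbr)
{
	/*
	 * bpf_get_branch_snapshot() returns the number of bytes written or a
	 * negative error; on AMD perfmon v2 it now ends up in
	 * amd_pmu_v2_snapshot_branch_stack() via the static call.
	 */
	long sz = bpf_get_branch_snapshot(lbr, sizeof(lbr), 0);

	if (sz > 0)
		bpf_printk("captured %ld LBR entries",
			   sz / (long)sizeof(struct perf_branch_entry));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

retsnoop does essentially this, pairing such a BPF program with per-CPU
perf events that have branch_sample_type set so LBR recording stays on.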
From patchwork Sun Mar 31 04:18:30 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13611806
From: Andrii Nakryiko
To: x86@kernel.org, peterz@infradead.org, mingo@redhat.com, tglx@linutronix.de
Cc: bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, song@kernel.org, kernel-team@meta.com, Andrii Nakryiko, Sandipan Das
Subject: [PATCH v4 4/4] perf/x86/amd: don't reject non-sampling events with configured LBR
Date: Sat, 30 Mar 2024 21:18:30 -0700
Message-ID: <20240331041830.2806741-5-andrii@kernel.org>
In-Reply-To: <20240331041830.2806741-1-andrii@kernel.org>
References: <20240331041830.2806741-1-andrii@kernel.org>

Now that it's possible to capture LBR on AMD CPUs from BPF at an
arbitrary point, there is no reason to artificially limit this feature
to just sampling events. So the corresponding check is removed. AFAIU,
there are no correctness implications of doing this (and it was possible
to bypass this check by just setting the perf_event's sample_period to 1
anyway, so it didn't guard all that much).
Reviewed-by: Sandipan Das
Signed-off-by: Andrii Nakryiko
---
 arch/x86/events/amd/lbr.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 0e4de028590d..75920f895d67 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -310,10 +310,6 @@ int amd_pmu_lbr_hw_config(struct perf_event *event)
 {
 	int ret = 0;
 
-	/* LBR is not recommended in counting mode */
-	if (!is_sampling_event(event))
-		return -EINVAL;
-
 	ret = amd_pmu_lbr_setup_filter(event);
 	if (!ret)
 		event->attach_state |= PERF_ATTACH_SCHED_CB;
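To illustrate what this change permits, here is a hypothetical userspace
sketch, not part of this series: a plain counting event (sample_period
of 0) that still carries an LBR configuration, which
amd_pmu_lbr_hw_config() would previously have rejected with -EINVAL. The
event type/config and branch filter flags are illustrative and assume a
Zen4-class CPU with LBRv2 plus sufficient privileges for a system-wide
event.

/* Hypothetical userspace sketch; not part of this series. */
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 0;		/* counting mode: previously rejected */
	attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL |
				  PERF_SAMPLE_BRANCH_ANY_RETURN;

	/* any pid (-1) on CPU 0, no group fd, no flags */
	fd = syscall(SYS_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	printf("LBR-configured counting event opened, fd=%d\n", fd);
	/* a BPF program can now snapshot LBR on CPU 0, as sketched earlier */
	return 0;
}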