perf/x86: fix wrong assumption that LBR is only useful for sampling events

Message ID 20240905180055.1221620-1-andrii@kernel.org (mailing list archive)
State Not Applicable
Series perf/x86: fix wrong assumption that LBR is only useful for sampling events

Checks

Context Check Description
netdev/tree_selection success Not a local patch

Commit Message

Andrii Nakryiko Sept. 5, 2024, 6 p.m. UTC
It's incorrect to assume that LBR can/should only be used with sampling
events. The BPF subsystem provides the bpf_get_branch_snapshot() BPF
helper, which expects a properly set up and activated perf event that
allows the kernel to capture LBR data.

For instance, the retsnoop tool ([0]) makes extensive use of this
functionality and sets up a perf event as follows:

	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;

The commit referenced in the Fixes tag broke this setup by making the
invalid assumption that LBR is useful only for sampling events. Remove
that assumption.

Note, we earlier removed a similar assumption on the AMD side of LBR
support; see [1] for details.

  [0] https://github.com/anakryiko/retsnoop
  [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")

Cc: stable@vger.kernel.org # 6.8+
Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 arch/x86/events/intel/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
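
For readers reconstructing this setup end to end, a hedged sketch of both
halves follows. First the user-space side, which opens one such counting
(non-sampling) event per CPU; open_lbr_event() and the enable-on-open
choice are hypothetical, not retsnoop's actual code:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open and enable a counting (non-sampling) event that configures LBR:
 * no sample_period/sample_freq is set, so in kernel terms
 * is_sampling_event() is false for this event. Keeping one such event
 * active per CPU keeps LBR recording so a BPF program running on that
 * CPU can later snapshot it. */
static int open_lbr_event(int cpu)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;

	fd = syscall(SYS_perf_event_open, &attr, -1 /* any pid */, cpu,
		     -1 /* no group */, 0);
	if (fd >= 0)
		ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	return fd;
}

And a minimal BPF-side consumer in the same spirit; the attach point and
variable names are hypothetical. bpf_get_branch_snapshot() fills the
buffer with struct perf_branch_entry records and returns the number of
bytes written (or a negative error):

/* SPDX-License-Identifier: GPL-2.0 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_LBR_ENTRIES 32

struct perf_branch_entry lbrs[MAX_LBR_ENTRIES];
long lbr_bytes;

SEC("kprobe/some_kernel_function") /* hypothetical attach point */
int snapshot_lbr(void *ctx)
{
	/* snapshot as early as possible, before more branches retire */
	lbr_bytes = bpf_get_branch_snapshot(lbrs, sizeof(lbrs), 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";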

Comments

Liang, Kan Sept. 5, 2024, 7:20 p.m. UTC | #1
On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> It's incorrect to assume that LBR can/should only be used with sampling
> events. [...]
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 9e519d8a810a..f82a342b8852 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
>  			x86_pmu.pebs_aliases(event);
>  	}
>  
> -	if (needs_branch_stack(event) && is_sampling_event(event))
> +	if (needs_branch_stack(event))
>  		event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;

To limit LBR to a sampling event is to avoid unnecessary branch stack
setup for a counting event in the sample-read case. The above change
would break the sample-read case.

How about the patch below (not tested)? Is it good enough for the BPF
usage?

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 0c9c2706d4ec..8d67cbda916b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		x86_pmu.pebs_aliases(event);
 	}

-	if (needs_branch_stack(event) && is_sampling_event(event))
-		event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+	if (needs_branch_stack(event)) {
+		/* Avoid branch stack setup for counting events in SAMPLE READ */
+		if (is_sampling_event(event) ||
+		    !(event->attr.sample_type & PERF_SAMPLE_READ))
+			event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+	}

 	if (branch_sample_counters(event)) {
 		struct perf_event *leader, *sibling;


Thanks,
Kan
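
For context on the sample-read case described above, here is a hedged
sketch of the kind of event group the extra PERF_SAMPLE_READ check is
meant to exempt; the attribute values are illustrative and assume the
perf tool's convention of replicating sample_type/branch_sample_type
across group members:

#include <linux/perf_event.h>
#include <string.h>

/* A sampling group leader whose overflow reads the whole group
 * (PERF_SAMPLE_READ + PERF_FORMAT_GROUP), plus a pure counting
 * sibling. Under the proposed condition, the sibling no longer gets
 * PERF_X86_EVENT_NEEDS_BRANCH_STACK: it is a counting event whose
 * sample_type includes PERF_SAMPLE_READ. */
static void setup_sample_read_group(struct perf_event_attr *leader,
				    struct perf_event_attr *sibling)
{
	memset(leader, 0, sizeof(*leader));
	leader->size = sizeof(*leader);
	leader->type = PERF_TYPE_HARDWARE;
	leader->config = PERF_COUNT_HW_CPU_CYCLES;
	leader->sample_period = 100000;	/* sampling event */
	leader->sample_type = PERF_SAMPLE_READ | PERF_SAMPLE_BRANCH_STACK;
	leader->read_format = PERF_FORMAT_GROUP;
	leader->branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;

	memset(sibling, 0, sizeof(*sibling));
	sibling->size = sizeof(*sibling);
	sibling->type = PERF_TYPE_HARDWARE;
	sibling->config = PERF_COUNT_HW_INSTRUCTIONS;
	/* no sample_period/sample_freq: a counting event */
	sibling->sample_type = PERF_SAMPLE_READ;
	sibling->branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
}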
Andrii Nakryiko Sept. 5, 2024, 8:22 p.m. UTC | #2
On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> > [...]
>
> To limit LBR to a sampling event is to avoid unnecessary branch stack
> setup for a counting event in the sample-read case. The above change
> would break the sample-read case.
>
> How about the patch below (not tested)? Is it good enough for the BPF
> usage?
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 0c9c2706d4ec..8d67cbda916b 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
>                 x86_pmu.pebs_aliases(event);
>         }
>
> -       if (needs_branch_stack(event) && is_sampling_event(event))
> -               event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> +       if (needs_branch_stack(event)) {
> +               /* Avoid branch stack setup for counting events in SAMPLE READ */
> +               if (is_sampling_event(event) ||
> +                   !(event->attr.sample_type & PERF_SAMPLE_READ))
> +                       event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> +       }
>

I'm sure it will be fine for my use case, as I set only
PERF_SAMPLE_BRANCH_STACK.

But I'll leave it up to perf subsystem experts to decide if this
condition makes sense, because looking at what PERF_SAMPLE_READ is:

          PERF_SAMPLE_READ
                 Record counter values for all events in a group,
                 not just the group leader.

It's not clear why this would disable LBR, if specified.

Liang, Kan Sept. 5, 2024, 8:29 p.m. UTC | #3
On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
> On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>> [...]
>
> I'm sure it will be fine for my use case, as I set only
> PERF_SAMPLE_BRANCH_STACK.
> 
> But I'll leave it up to perf subsystem experts to decide if this
> condition makes sense, because looking at what PERF_SAMPLE_READ is:
> 
>           PERF_SAMPLE_READ
>                  Record counter values for all events in a group,
>                  not just the group leader.
> 
> It's not clear why this would disable LBR, if specified.

It only disables branch stack setup for a counting event with
SAMPLE_READ, since LBR is only read on a sampling event's overflow.

Thanks,
Kan
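
To unpack that: with PERF_SAMPLE_READ, a counting sibling's value is
delivered inside the sampling leader's sample records rather than via an
overflow of its own, and the leader's overflow is also the only point at
which LBR is read. A hedged sketch of the group read payload embedded in
such a sample, assuming read_format is exactly PERF_FORMAT_GROUP with no
time/ID bits set (see perf_event_open(2) for the full layout):

#include <stdint.h>

/* Simplified layout of the PERF_SAMPLE_READ payload in the leader's
 * sample when read_format == PERF_FORMAT_GROUP and no
 * TIME_ENABLED/TIME_RUNNING/ID/LOST bits are set. */
struct group_read {
	uint64_t nr;		/* number of events in the group */
	struct {
		uint64_t value;	/* one counter value per group member */
	} cnt[];
};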
Andrii Nakryiko Sept. 5, 2024, 8:33 p.m. UTC | #4
On Thu, Sep 5, 2024 at 1:29 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
> On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
> > [...]
> >
> > It's not clear why this would disable LBR, if specified.
>
> It only disables branch stack setup for a counting event with
> SAMPLE_READ, since LBR is only read on a sampling event's overflow.
>

Ok, sounds good! Would you like to send a proper patch with your
proposed changes?

Liang, Kan Sept. 9, 2024, 4:02 p.m. UTC | #5
On 2024-09-05 4:33 p.m., Andrii Nakryiko wrote:
> On Thu, Sep 5, 2024 at 1:29 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>> [...]
>
> Ok, sounds good! Would you like to send a proper patch with your
> proposed changes?

The patch has been posted. Please give it a try.
https://lore.kernel.org/lkml/20240909155848.326640-1-kan.liang@linux.intel.com/

Thanks,
Kan

Patch

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9e519d8a810a..f82a342b8852 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
 			x86_pmu.pebs_aliases(event);
 	}
 
-	if (needs_branch_stack(event) && is_sampling_event(event))
+	if (needs_branch_stack(event))
 		event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
 
 	if (branch_sample_counters(event)) {
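
For orientation, the two predicates on the changed line reduce, roughly,
to simple attribute checks. The sketches below are paraphrased from
memory of the kernel's perf headers, not verbatim kernel code:

#include <stdbool.h>
#include <linux/perf_event.h>

/* Paraphrased: an event is "sampling" if a period (or, via the union,
 * a frequency) is configured. */
static inline bool is_sampling_event_sketch(const struct perf_event_attr *attr)
{
	return attr->sample_period != 0;
}

/* Paraphrased: an event "needs branch stack" if any branch sampling
 * type bits are set, as in the retsnoop setup above. */
static inline bool needs_branch_stack_sketch(const struct perf_event_attr *attr)
{
	return attr->branch_sample_type != 0;
}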