
[v1,2/2] perf/core: Add support for PMUs that can be read from any CPU

Message ID 1519431578-11995-2-git-send-email-skannan@codeaurora.org (mailing list archive)
State New, archived

Commit Message

Saravana Kannan Feb. 24, 2018, 12:19 a.m. UTC
Some PMU events can be read from any CPU. So allow the PMU to mark
events as such. For these events, we don't need to reject reads or
make smp calls to the event's CPU and cause unnecessary wakeups.

Good examples of such events would be events from caches shared across
all CPUs.

Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
---
 include/linux/perf_event.h |  3 +++
 kernel/events/core.c       | 10 ++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)
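The effect of the new capability bit can be sketched in a small stand-alone model of the read-CPU decision (names here are illustrative; the real check lives in __perf_event_read_cpu() in kernel/events/core.c):

```c
/* Capability bits mirror include/linux/perf_event.h. */
#define PERF_EV_CAP_SOFTWARE        (1 << 0)
#define PERF_EV_CAP_READ_ACTIVE_PKG (1 << 1)
#define PERF_EV_CAP_READ_ANY_CPU    (1 << 2)   /* added by this patch */

/* Pick the CPU a read should run on: if the event is readable from any
 * CPU, stay on the local CPU and skip the cross-CPU smp call entirely;
 * otherwise fall back to the event's own CPU. */
static int model_read_cpu(int group_caps, int event_cpu, int local_cpu)
{
    if (group_caps & PERF_EV_CAP_READ_ANY_CPU)
        return local_cpu;
    return event_cpu;
}
```

A driver whose counters are globally readable would then set the bit once at event init (e.g. event->event_caps |= PERF_EV_CAP_READ_ANY_CPU), and reads issued from any CPU stay local instead of IPI'ing event->cpu.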

Comments

Saravana Kannan Feb. 24, 2018, 12:56 a.m. UTC | #1
On 02/23/2018 04:19 PM, Saravana Kannan wrote:
> Some PMU events can be read from any CPU. So allow the PMU to mark
> events as such. For these events, we don't need to reject reads or
> make smp calls to the event's CPU and cause unnecessary wake ups.
>
> Good examples of such events would be events from caches shared across
> all CPUs.
>
> Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
> ---
>   include/linux/perf_event.h |  3 +++
>   kernel/events/core.c       | 10 ++++++++--
>   2 files changed, 11 insertions(+), 2 deletions(-)
>
>

Ugh! Didn't mean to chain these two emails. This one is independent of 
the other email.

-Saravana
Peter Zijlstra Feb. 24, 2018, 8:41 a.m. UTC | #2
On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
> Some PMU events can be read from any CPU. So allow the PMU to mark
> events as such. For these events, we don't need to reject reads or
> make smp calls to the event's CPU and cause unnecessary wake ups.
> 
> Good examples of such events would be events from caches shared across
> all CPUs.

So why would the existing ACTIVE_PKG not work for you? Because clearly
your example does not cross a package.

> Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
> ---
>  include/linux/perf_event.h |  3 +++
>  kernel/events/core.c       | 10 ++++++++--
>  2 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 7546822..ee8978f 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -510,9 +510,12 @@ typedef void (*perf_overflow_handler_t)(struct perf_event *,
>   * PERF_EV_CAP_SOFTWARE: Is a software event.
>   * PERF_EV_CAP_READ_ACTIVE_PKG: A CPU event (or cgroup event) that can be read
>   * from any CPU in the package where it is active.
> + * PERF_EV_CAP_READ_ANY_CPU: A CPU event (or cgroup event) that can be read
> + * from any CPU.
>   */
>  #define PERF_EV_CAP_SOFTWARE		BIT(0)
>  #define PERF_EV_CAP_READ_ACTIVE_PKG	BIT(1)
> +#define PERF_EV_CAP_READ_ANY_CPU	BIT(2)
>  
>  #define SWEVENT_HLIST_BITS		8
>  #define SWEVENT_HLIST_SIZE		(1 << SWEVENT_HLIST_BITS)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 5d3df58..570187b 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3484,6 +3484,10 @@ static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
>  {
>  	u16 local_pkg, event_pkg;
>  
> +	if (event->group_caps & PERF_EV_CAP_READ_ANY_CPU) {
> +		return smp_processor_id();
> +	}
> +
>  	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
>  		int local_cpu = smp_processor_id();
>  

> @@ -3575,6 +3579,7 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
>  {
>  	unsigned long flags;
>  	int ret = 0;
> +	bool is_any_cpu = !!(event->group_caps & PERF_EV_CAP_READ_ANY_CPU);
>  
>  	/*
>  	 * Disabling interrupts avoids all counter scheduling (context
> @@ -3600,7 +3605,8 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
>  
>  	/* If this is a per-CPU event, it must be for this CPU */
>  	if (!(event->attach_state & PERF_ATTACH_TASK) &&
> -	    event->cpu != smp_processor_id()) {
> +	    event->cpu != smp_processor_id() &&
> +	    !is_any_cpu) {
>  		ret = -EINVAL;
>  		goto out;
>  	}
> @@ -3610,7 +3616,7 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
>  	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
>  	 * oncpu == -1).
>  	 */
> -	if (event->oncpu == smp_processor_id())
> +	if (event->oncpu == smp_processor_id() || is_any_cpu)
>  		event->pmu->read(event);
>  
>  	*value = local64_read(&event->count);

And why are you modifying read_local for this? That didn't support
ACTIVE_PKG, so why should it support this?

And again, where are the users?
Mark Rutland Feb. 25, 2018, 2:38 p.m. UTC | #3
On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
> Some PMU events can be read from any CPU. So allow the PMU to mark
> events as such. For these events, we don't need to reject reads or
> make smp calls to the event's CPU and cause unnecessary wake ups.
> 
> Good examples of such events would be events from caches shared across
> all CPUs.

I think that if we need to generalize PERF_EV_CAP_READ_ACTIVE_PKG, it would be
better to give events a pointer to a cpumask. That could then cover all cases
quite trivially:

static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
{
	int local_cpu = smp_processor_id();

	if (event->read_mask &&
	    cpumask_test_cpu(local_cpu, event->read_mask))
		event_cpu = local_cpu;
	
	return event_cpu;
}

... in the PERF_EV_CAP_READ_ACTIVE_PKG case, we can use the existing package
masks, and more generally we can re-use the PMU's affinity mask if it has one.

That said, I see that many pmu::read() implementations have side-effects on
hwc->prev_count and event->count, so I worry that this won't be safe in general
(e.g. if we race with the IRQ handler on another CPU).

Thanks,
Mark.
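Mark's read_mask idea can be exercised in a self-contained userspace sketch, where a uint64_t stands in for a kernel cpumask and all names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* A uint64_t models a cpumask (up to 64 CPUs); read_mask == 0 models a
 * NULL mask pointer, i.e. "no relaxation, read on event->cpu". */
struct masked_event {
    uint64_t read_mask;
    int cpu;
};

static bool mask_test_cpu(uint64_t mask, int cpu)
{
    return (mask >> cpu) & 1u;
}

/* Mirrors the suggested __perf_event_read_cpu(): read locally whenever
 * the local CPU is covered by the event's read mask. */
static int masked_read_cpu(const struct masked_event *ev, int local_cpu)
{
    if (ev->read_mask && mask_test_cpu(ev->read_mask, local_cpu))
        return local_cpu;
    return ev->cpu;
}
```

Under this scheme READ_ACTIVE_PKG becomes "mask = CPUs in the package", and a system PMU would supply whatever affinity mask its firmware description provides.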
Saravana Kannan Feb. 27, 2018, 1:53 a.m. UTC | #4
On 2018-02-24 00:41, Peter Zijlstra wrote:
> On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
>> Some PMU events can be read from any CPU. So allow the PMU to mark
>> events as such. For these events, we don't need to reject reads or
>> make smp calls to the event's CPU and cause unnecessary wake ups.
>> 
>> Good examples of such events would be events from caches shared across
>> all CPUs.
> 
> So why would the existing ACTIVE_PKG not work for you? Because clearly
> your example does not cross a package.

Because, based on testing on hardware, it looks like the two clusters 
in an ARM DynamIQ design are not considered part of the same "package". 
When I say clusters, I'm using the more common interpretation of 
"homogeneous CPUs running on the same clock"/CPUs in a cpufreq policy, 
and not ARM's new redefinition of cluster. So, on a SoC with 4 little 
and 4 big cores, it'll still trigger a lot of unnecessary smp calls/IPIs 
that cause needless wakeups.

That said, I like Mark's suggestion of just giving a cpumask for every 
event and using that instead, because the meaning of "active package" is 
very ambiguous. For example, if a SoC has 2 DynamIQ blocks (not sure if 
that's possible), what's considered a package? CPUs that are sitting on 
one L3 can't read the PMU counters of a different L3. In that case, 
neither "any CPU" nor "active package" is correct/usable for reducing 
IPIs.

> 
>> Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
>> ---
>>  include/linux/perf_event.h |  3 +++
>>  kernel/events/core.c       | 10 ++++++++--
>>  2 files changed, 11 insertions(+), 2 deletions(-)
>> 
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 7546822..ee8978f 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -510,9 +510,12 @@ typedef void (*perf_overflow_handler_t)(struct 
>> perf_event *,
>>   * PERF_EV_CAP_SOFTWARE: Is a software event.
>>   * PERF_EV_CAP_READ_ACTIVE_PKG: A CPU event (or cgroup event) that 
>> can be read
>>   * from any CPU in the package where it is active.
>> + * PERF_EV_CAP_READ_ANY_CPU: A CPU event (or cgroup event) that can 
>> be read
>> + * from any CPU.
>>   */
>>  #define PERF_EV_CAP_SOFTWARE		BIT(0)
>>  #define PERF_EV_CAP_READ_ACTIVE_PKG	BIT(1)
>> +#define PERF_EV_CAP_READ_ANY_CPU	BIT(2)
>> 
>>  #define SWEVENT_HLIST_BITS		8
>>  #define SWEVENT_HLIST_SIZE		(1 << SWEVENT_HLIST_BITS)
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 5d3df58..570187b 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3484,6 +3484,10 @@ static int __perf_event_read_cpu(struct 
>> perf_event *event, int event_cpu)
>>  {
>>  	u16 local_pkg, event_pkg;
>> 
>> +	if (event->group_caps & PERF_EV_CAP_READ_ANY_CPU) {
>> +		return smp_processor_id();
>> +	}
>> +
>>  	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
>>  		int local_cpu = smp_processor_id();
>> 
> 
>> @@ -3575,6 +3579,7 @@ int perf_event_read_local(struct perf_event 
>> *event, u64 *value,
>>  {
>>  	unsigned long flags;
>>  	int ret = 0;
>> +	bool is_any_cpu = !!(event->group_caps & PERF_EV_CAP_READ_ANY_CPU);
>> 
>>  	/*
>>  	 * Disabling interrupts avoids all counter scheduling (context
>> @@ -3600,7 +3605,8 @@ int perf_event_read_local(struct perf_event 
>> *event, u64 *value,
>> 
>>  	/* If this is a per-CPU event, it must be for this CPU */
>>  	if (!(event->attach_state & PERF_ATTACH_TASK) &&
>> -	    event->cpu != smp_processor_id()) {
>> +	    event->cpu != smp_processor_id() &&
>> +	    !is_any_cpu) {
>>  		ret = -EINVAL;
>>  		goto out;
>>  	}
>> @@ -3610,7 +3616,7 @@ int perf_event_read_local(struct perf_event 
>> *event, u64 *value,
>>  	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
>>  	 * oncpu == -1).
>>  	 */
>> -	if (event->oncpu == smp_processor_id())
>> +	if (event->oncpu == smp_processor_id() || is_any_cpu)
>>  		event->pmu->read(event);
>> 
>>  	*value = local64_read(&event->count);
> 
> And why are you modifying read_local for this? That didn't support
> ACTIVE_PKG, so why should it support this?

Maybe I'll make a separate patch to first have perf_event_read_local() 
also handle ACTIVE_PKG, because in those cases the smp call is 
unnecessary too.

> 
> And again, where are the users?

The DynamIQ PMU driver would be the user.

-Saravana
Saravana Kannan Feb. 27, 2018, 2:11 a.m. UTC | #5
On 2018-02-25 06:38, Mark Rutland wrote:
> On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
>> Some PMU events can be read from any CPU. So allow the PMU to mark
>> events as such. For these events, we don't need to reject reads or
>> make smp calls to the event's CPU and cause unnecessary wake ups.
>> 
>> Good examples of such events would be events from caches shared across
>> all CPUs.
> 
> I think that if we need to generalize PERF_EV_CAP_READ_ACTIVE_PKG, it 
> would be
> better to give events a pointer to a cpumask. That could then cover all 
> cases
> quite trivially:
> 
> static int __perf_event_read_cpu(struct perf_event *event, int 
> event_cpu)
> {
> 	int local_cpu = smp_processor_id();
> 
> 	if (event->read_mask &&
> 	    cpumask_test_cpu(local_cpu, event->read_mask))
> 		event_cpu = local_cpu;
> 
> 	return event_cpu;
> }
> 

This is a good improvement on my attempt. If I send a patch for this, is 
that something you'd be willing to incorporate into your patch set and 
make sure the DSU pmu driver handles it correctly?

> ... in the PERF_EV_CAP_READ_ACTIVE_PKG case, we can use the existing 
> package
> masks, and more generally we can re-use the PMU's affinity mask if it 
> has one.
> 
> That said, I see that many pmu::read() implementations have 
> side-effects on
> hwc->prev_count and event->count, so I worry that this won't be safe in 
> general
> (e.g. if we race with the IRQ handler on another CPU).
> 

Yeah, this doesn't have to be mandatory. It can be an optional mask the 
PMU can set up during perf event init.

Peter,

Is this something that's acceptable to you?

Thanks,
Saravana
Mark Rutland Feb. 27, 2018, 11:43 a.m. UTC | #6
On Mon, Feb 26, 2018 at 06:11:45PM -0800, skannan@codeaurora.org wrote:
> On 2018-02-25 06:38, Mark Rutland wrote:
> > On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
> > > Some PMU events can be read from any CPU. So allow the PMU to mark
> > > events as such. For these events, we don't need to reject reads or
> > > make smp calls to the event's CPU and cause unnecessary wake ups.
> > > 
> > > Good examples of such events would be events from caches shared across
> > > all CPUs.
> > 
> > I think that if we need to generalize PERF_EV_CAP_READ_ACTIVE_PKG, it
> > would be
> > better to give events a pointer to a cpumask. That could then cover all
> > cases
> > quite trivially:
> > 
> > static int __perf_event_read_cpu(struct perf_event *event, int
> > event_cpu)
> > {
> > 	int local_cpu = smp_processor_id();
> > 
> > 	if (event->read_mask &&
> > 	    cpumask_test_cpu(local_cpu, event->read_mask))
> > 		event_cpu = local_cpu;
> > 
> > 	return event_cpu;
> > }
> 
> This is a good improvement on my attempt. If I send a patch for this, is
> that something you'd be willing to incorporate into your patch set and make
> sure the DSU pmu driver handles it correctly?

As I commented, I don't think that will work without more invasive
changes, as the DSU PMU's pmu::read() function has side effects on
hwc->prev_count and event->count, and could race with an IRQ handler on
another CPU.

Is the IPI really a problem in practice?

Thanks,
Mark.
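The side effects Mark is worried about come from the usual delta-accumulation idiom in pmu::read() implementations. A hedged userspace sketch of that idiom, using C11 atomics in place of the kernel's local64 ops (illustrative only, not the DSU driver's actual code): the compare-exchange loop is what keeps a reader and the IRQ handler from folding the same window in twice, and a plain store to prev_count would not.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative model of a PMU counter's software state. */
struct model_counter {
    _Atomic uint64_t prev_count;  /* last raw hardware value folded in */
    _Atomic uint64_t count;       /* accumulated event count */
};

/* Fold a newly read hardware value into the event count. The CAS loop
 * claims the window [prev, hw_now) atomically, so two concurrent
 * callers (e.g. pmu::read() and the IRQ handler) each account a
 * disjoint delta instead of double-counting. */
static void model_update(struct model_counter *c, uint64_t hw_now)
{
    uint64_t prev = atomic_load(&c->prev_count);

    /* On failure, prev is reloaded with the current value; retry. */
    while (!atomic_compare_exchange_weak(&c->prev_count, &prev, hw_now))
        ;
    atomic_fetch_add(&c->count, hw_now - prev);
}
```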
Mark Rutland Feb. 27, 2018, 11:52 a.m. UTC | #7
On Mon, Feb 26, 2018 at 05:53:57PM -0800, skannan@codeaurora.org wrote:
> On 2018-02-24 00:41, Peter Zijlstra wrote:
> > On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
> > > Some PMU events can be read from any CPU. So allow the PMU to mark
> > > events as such. For these events, we don't need to reject reads or
> > > make smp calls to the event's CPU and cause unnecessary wake ups.
> > > 
> > > Good examples of such events would be events from caches shared across
> > > all CPUs.
> > 
> > So why would the existing ACTIVE_PKG not work for you? Because clearly
> > your example does not cross a package.
> 
> Because based on testing it on hardware, it looks like the two clusters in
> an ARM DynamIQ design are not considered part of the same "package". 

I don't think we should consider the topology masks at all for system
PMU affinity, due to the number of ways these can be integrated and the
lack of a standard(ish) topology across arm platforms.

IIUC, there's ongoing work to try to clean that up, but that won't give
us anything meaningful for PMU affinity.

If we need a mask, that should be something the FW description of the
PMU provides, and the PMU driver provides to the core code.

Thanks,
Mark.
Saravana Kannan Feb. 27, 2018, 11:15 p.m. UTC | #8
On 2018-02-27 03:43, Mark Rutland wrote:
> On Mon, Feb 26, 2018 at 06:11:45PM -0800, skannan@codeaurora.org wrote:
>> On 2018-02-25 06:38, Mark Rutland wrote:
>> > On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
>> > > Some PMU events can be read from any CPU. So allow the PMU to mark
>> > > events as such. For these events, we don't need to reject reads or
>> > > make smp calls to the event's CPU and cause unnecessary wake ups.
>> > >
>> > > Good examples of such events would be events from caches shared across
>> > > all CPUs.
>> >
>> > I think that if we need to generalize PERF_EV_CAP_READ_ACTIVE_PKG, it
>> > would be
>> > better to give events a pointer to a cpumask. That could then cover all
>> > cases
>> > quite trivially:
>> >
>> > static int __perf_event_read_cpu(struct perf_event *event, int
>> > event_cpu)
>> > {
>> > 	int local_cpu = smp_processor_id();
>> >
>> > 	if (event->read_mask &&
>> > 	    cpumask_test_cpu(local_cpu, event->read_mask))
>> > 		event_cpu = local_cpu;
>> >
>> > 	return event_cpu;
>> > }
>> 
>> This is a good improvement on my attempt. If I send a patch for this, 
>> is
>> that something you'd be willing to incorporate into your patch set and 
>> make
>> sure the DSU pmu driver handles it correctly?
> 
> As I commented, I don't think that will work without more invasive
> changes, as the DSU PMU's pmu::read() function has side effects on
> hwc->prev_count and event->count, and could race with an IRQ handler on
> another CPU.
> 
> Is the IPI really a problem in practice?
> 

There are a bunch of cases, but the simplest one is collecting DSU 
stats (for analysis) while measuring power: the IPI wakeups completely 
mess up the power measurements.

Thanks,
Saravana
Peter Zijlstra March 3, 2018, 3:41 p.m. UTC | #9
On Mon, Feb 26, 2018 at 05:53:57PM -0800, skannan@codeaurora.org wrote:
> On 2018-02-24 00:41, Peter Zijlstra wrote:
> > On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
> > > Some PMU events can be read from any CPU. So allow the PMU to mark
> > > events as such. For these events, we don't need to reject reads or
> > > make smp calls to the event's CPU and cause unnecessary wake ups.
> > > 
> > > Good examples of such events would be events from caches shared across
> > > all CPUs.
> > 
> > So why would the existing ACTIVE_PKG not work for you? Because clearly
> > your example does not cross a package.
> 
> Because, based on testing on hardware, it looks like the two clusters in
> an ARM DynamIQ design are not considered part of the same "package". When I
> say clusters, I'm using the more common interpretation of "homogeneous CPUs
> running on the same clock"/CPUs in a cpufreq policy, and not ARM's new
> redefinition of cluster. So, on a SoC with 4 little and 4 big cores, it'll
> still trigger a lot of unnecessary smp calls/IPIs that cause needless
> wakeups.

arch/arm64/include/asm/topology.h:#define topology_physical_package_id(cpu)     (cpu_topology[cpu].cluster_id)

*sigh*... that's just broken...
Jeremy Linton March 7, 2018, 4:39 p.m. UTC | #10
Hi,

On 03/03/2018 09:41 AM, Peter Zijlstra wrote:
> On Mon, Feb 26, 2018 at 05:53:57PM -0800, skannan@codeaurora.org wrote:
>> On 2018-02-24 00:41, Peter Zijlstra wrote:
>>> On Fri, Feb 23, 2018 at 04:19:38PM -0800, Saravana Kannan wrote:
>>>> Some PMU events can be read from any CPU. So allow the PMU to mark
>>>> events as such. For these events, we don't need to reject reads or
>>>> make smp calls to the event's CPU and cause unnecessary wake ups.
>>>>
>>>> Good examples of such events would be events from caches shared across
>>>> all CPUs.
>>>
>>> So why would the existing ACTIVE_PKG not work for you? Because clearly
>>> your example does not cross a package.
>>
>> Because, based on testing on hardware, it looks like the two clusters in
>> an ARM DynamIQ design are not considered part of the same "package". When I
>> say clusters, I'm using the more common interpretation of "homogeneous CPUs
>> running on the same clock"/CPUs in a cpufreq policy, and not ARM's new
>> redefinition of cluster. So, on a SoC with 4 little and 4 big cores, it'll
>> still trigger a lot of unnecessary smp calls/IPIs that cause needless
>> wakeups.
> 
> arch/arm64/include/asm/topology.h:#define topology_physical_package_id(cpu)     (cpu_topology[cpu].cluster_id)
> 
> *sigh*... that's just broken...
> 

It's being reworked in the PPTT (currently v7) patch set. For ACPI 
systems (and hopefully DT machines with the package property set), 
topology_physical_package_id() and the core-sibling masks represent the 
socket as one would expect.

Thanks,

Patch

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 7546822..ee8978f 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -510,9 +510,12 @@  typedef void (*perf_overflow_handler_t)(struct perf_event *,
  * PERF_EV_CAP_SOFTWARE: Is a software event.
  * PERF_EV_CAP_READ_ACTIVE_PKG: A CPU event (or cgroup event) that can be read
  * from any CPU in the package where it is active.
+ * PERF_EV_CAP_READ_ANY_CPU: A CPU event (or cgroup event) that can be read
+ * from any CPU.
  */
 #define PERF_EV_CAP_SOFTWARE		BIT(0)
 #define PERF_EV_CAP_READ_ACTIVE_PKG	BIT(1)
+#define PERF_EV_CAP_READ_ANY_CPU	BIT(2)
 
 #define SWEVENT_HLIST_BITS		8
 #define SWEVENT_HLIST_SIZE		(1 << SWEVENT_HLIST_BITS)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5d3df58..570187b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3484,6 +3484,10 @@  static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
 {
 	u16 local_pkg, event_pkg;
 
+	if (event->group_caps & PERF_EV_CAP_READ_ANY_CPU) {
+		return smp_processor_id();
+	}
+
 	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
 		int local_cpu = smp_processor_id();
 
@@ -3575,6 +3579,7 @@  int perf_event_read_local(struct perf_event *event, u64 *value,
 {
 	unsigned long flags;
 	int ret = 0;
+	bool is_any_cpu = !!(event->group_caps & PERF_EV_CAP_READ_ANY_CPU);
 
 	/*
 	 * Disabling interrupts avoids all counter scheduling (context
@@ -3600,7 +3605,8 @@  int perf_event_read_local(struct perf_event *event, u64 *value,
 
 	/* If this is a per-CPU event, it must be for this CPU */
 	if (!(event->attach_state & PERF_ATTACH_TASK) &&
-	    event->cpu != smp_processor_id()) {
+	    event->cpu != smp_processor_id() &&
+	    !is_any_cpu) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -3610,7 +3616,7 @@  int perf_event_read_local(struct perf_event *event, u64 *value,
 	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
 	 * oncpu == -1).
 	 */
-	if (event->oncpu == smp_processor_id())
+	if (event->oncpu == smp_processor_id() || is_any_cpu)
 		event->pmu->read(event);
 
 	*value = local64_read(&event->count);