Message ID: 20191013091533.12971-3-like.xu@linux.intel.com (mailing list archive)
State: New, archived
Series: KVM: x86/vPMU: Efficiency optimization by reusing last created perf_event
On Sun, Oct 13, 2019 at 05:15:31PM +0800, Like Xu wrote:
> Exporting perf_event_pause() as an external accessor for kernel users (such
> as KVM) who may do both disable perf_event and read count with just one
> time to hold perf_event_ctx_lock. Also the value could be reset optionally.

> +u64 perf_event_pause(struct perf_event *event, bool reset)
> +{
> +	struct perf_event_context *ctx;
> +	u64 count, enabled, running;
> +
> +	ctx = perf_event_ctx_lock(event);
> +	_perf_event_disable(event);
> +	count = __perf_event_read_value(event, &enabled, &running);
> +	if (reset)
> +		local64_set(&event->count, 0);

This local64_set() already assumes there are no child events, so maybe
write the thing like:

	WARN_ON_ONCE(event->attr.inherit);
	_perf_event_disable(event);
	count = local64_read(&event->count);
	local64_set(&event->count, 0);

> +	perf_event_ctx_unlock(event, ctx);
> +
> +	return count;
> +}
> +EXPORT_SYMBOL_GPL(perf_event_pause);
On 2019/10/14 19:51, Peter Zijlstra wrote:
> On Sun, Oct 13, 2019 at 05:15:31PM +0800, Like Xu wrote:
>> Exporting perf_event_pause() as an external accessor for kernel users (such
>> as KVM) who may do both disable perf_event and read count with just one
>> time to hold perf_event_ctx_lock. Also the value could be reset optionally.
>
>> +u64 perf_event_pause(struct perf_event *event, bool reset)
>> +{
>> +	struct perf_event_context *ctx;
>> +	u64 count, enabled, running;
>> +
>> +	ctx = perf_event_ctx_lock(event);
>
>> +	_perf_event_disable(event);
>> +	count = __perf_event_read_value(event, &enabled, &running);
>> +	if (reset)
>> +		local64_set(&event->count, 0);
>
> This local64_set() already assumes there are no child events, so maybe
> write the thing like:
>
> 	WARN_ON_ONCE(event->attr.inherit);
> 	_perf_event_disable(event);
> 	count = local64_read(&event->count);
> 	local64_set(&event->count, 0);
>

Thanks. It looks good to me and I will apply this.

>> +	perf_event_ctx_unlock(event, ctx);
>> +
>> +	return count;
>> +}
>> +EXPORT_SYMBOL_GPL(perf_event_pause);
>
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d601df36e671..e9768bfc76f6 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1337,6 +1337,7 @@ extern void perf_event_disable_inatomic(struct perf_event *event);
 extern void perf_event_task_tick(void);
 extern int perf_event_account_interrupt(struct perf_event *event);
 extern int perf_event_period(struct perf_event *event, u64 value);
+extern u64 perf_event_pause(struct perf_event *event, bool reset);
 #else /* !CONFIG_PERF_EVENTS: */
 static inline void *
 perf_aux_output_begin(struct perf_output_handle *handle,
@@ -1420,6 +1421,10 @@ static inline int perf_event_period(struct perf_event *event, u64 value)
 {
 	return -EINVAL;
 }
+static inline u64 perf_event_pause(struct perf_event *event, bool reset)
+{
+	return 0;
+}
 #endif
 
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e1b83d2731da..e29038984cf4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5029,6 +5029,22 @@ static void _perf_event_reset(struct perf_event *event)
 	perf_event_update_userpage(event);
 }
 
+u64 perf_event_pause(struct perf_event *event, bool reset)
+{
+	struct perf_event_context *ctx;
+	u64 count, enabled, running;
+
+	ctx = perf_event_ctx_lock(event);
+	_perf_event_disable(event);
+	count = __perf_event_read_value(event, &enabled, &running);
+	if (reset)
+		local64_set(&event->count, 0);
+	perf_event_ctx_unlock(event, ctx);
+
+	return count;
+}
+EXPORT_SYMBOL_GPL(perf_event_pause);
+
 /*
  * Holding the top-level event's child_mutex means that any
  * descendant process that has inherited this event will block
Exporting perf_event_pause() as an external accessor for kernel users (such
as KVM) who may do both disable perf_event and read count with just one
time to hold perf_event_ctx_lock. Also the value could be reset optionally.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 include/linux/perf_event.h |  5 +++++
 kernel/events/core.c       | 16 ++++++++++++++++
 2 files changed, 21 insertions(+)