KVM: x86/pmu: Prevent zero period event from being repeatedly released

Message ID: 20221207071506.15733-2-likexu@tencent.com
State: New, archived
Series: KVM: x86/pmu: Prevent zero period event from being repeatedly released

Commit Message

Like Xu Dec. 7, 2022, 7:15 a.m. UTC
From: Like Xu <likexu@tencent.com>

The current vPMU can reuse the same pmc->perf_event for the same
hardware event via pmc_pause/resume_counter(), but this optimization
does not apply to some TSX events (e.g., "event=0x3c,in_tx=1,
in_tx_cp=1"), for which event->attr.sample_period is legitimately zero
at creation. In that case the perf call to perf_event_period() is
meaningless (there is no sample period to adjust), and its failure
instead causes such reusable perf_events to be repeatedly released
and recreated.

Avoid releasing zero-sample_period events by checking is_sampling_event(),
so that such events follow the existing enable/disable optimization.

Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/pmu.c | 3 ++-
 arch/x86/kvm/pmu.h | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)
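
For context, the helper the fix relies on is trivial: at the time of this
patch, is_sampling_event() in include/linux/perf_event.h is just a
non-zero-period check, and perf_event_period() returns -EINVAL for events
that fail it:

	/* include/linux/perf_event.h */
	static inline bool is_sampling_event(struct perf_event *event)
	{
		return event->attr.sample_period != 0;
	}

So for a counting-only event such as "event=0x3c,in_tx=1,in_tx_cp=1", the
perf_event_period() call in pmc_resume_counter() fails, and the caller tears
down and recreates a perf_event that could have been reused as-is.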

Comments

Sean Christopherson Dec. 7, 2022, 4:52 p.m. UTC | #1
Please don't mix kernel and KVM-unit-tests patches in the same "series";
for those of us who have become dependent on b4, mixing patches for two
separate repos makes life miserable.

The best alternative I have come up with is to post the KVM patch(es) first,
and then provide a lore link in the KUT patch(es).  It means waiting a few
minutes before sending the KUT patches if you want to double-check that you
got the lore link right, but I find it fairly easy to account for that in
my workflow.
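
For example, a lore link built from this patch's Message-ID (the
lore.kernel.org "r/" redirector resolves any Message-ID posted to the lists):

	https://lore.kernel.org/r/20221207071506.15733-2-likexu@tencent.com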

Paolo Bonzini Dec. 23, 2022, 4:58 p.m. UTC | #2
Queued, thanks.  Please resubmit the test though.

Paolo
Patch

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 684393c22105..eb594620dd75 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -238,7 +238,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 		return false;
 
 	/* recalibrate sample period and check if it's accepted by perf core */
-	if (perf_event_period(pmc->perf_event,
+	if (is_sampling_event(pmc->perf_event) &&
+	    perf_event_period(pmc->perf_event,
 			      get_sample_period(pmc, pmc->counter)))
 		return false;
 
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 85ff3c0588ba..cdb91009701d 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -140,7 +140,8 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 
 static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
 {
-	if (!pmc->perf_event || pmc->is_paused)
+	if (!pmc->perf_event || pmc->is_paused ||
+	    !is_sampling_event(pmc->perf_event))
 		return;
 
 	perf_event_period(pmc->perf_event,
 			  get_sample_period(pmc, pmc->counter));
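
As background rather than part of the patch, the perf-core behavior that
motivates both hunks can be observed from userspace: PERF_EVENT_IOC_PERIOD
(the ioctl path into perf_event_period()) fails with EINVAL on a
non-sampling event. A minimal sketch, assuming a Linux host whose
perf_event_paranoid setting permits opening the event; the event choice and
file name are illustrative:

	/* demo.c - a counting event (sample_period == 0) rejects period updates. */
	#include <errno.h>
	#include <linux/perf_event.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct perf_event_attr attr = {
			.size = sizeof(attr),
			.type = PERF_TYPE_HARDWARE,
			.config = PERF_COUNT_HW_CPU_CYCLES,
			.sample_period = 0,	/* counting, i.e. !is_sampling_event() */
		};
		uint64_t period = 1000000;
		int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);

		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		/* Expected to fail with EINVAL because the event is not sampling. */
		if (ioctl(fd, PERF_EVENT_IOC_PERIOD, &period) < 0)
			printf("PERF_EVENT_IOC_PERIOD: %s\n", strerror(errno));

		close(fd);
		return 0;
	}

This is the same failure pmc_resume_counter() sees, which before this patch
caused the zero-period perf_event to be released and recreated on every
pause/resume cycle.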