
[v1] KVM: arm64: PMU: Restore the guest's EL0 event counting after migration

Message ID 20230328034725.2051499-1-reijiw@google.com (mailing list archive)
State New, archived
Series [v1] KVM: arm64: PMU: Restore the guest's EL0 event counting after migration

Commit Message

Reiji Watanabe March 28, 2023, 3:47 a.m. UTC
Currently, with VHE, KVM enables EL0 event counting for the guest
on vcpu_load(), or as part of the PMU register emulation process
when needed.  However, in the migration case (with VHE), this
handling is missing.  So, enable it on the first KVM_RUN (with
VHE) after migration, when needed.

Fixes: d0c94c49792c ("KVM: arm64: Restore PMU configuration on first run")
Cc: stable@vger.kernel.org
Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/pmu-emul.c | 1 +
 arch/arm64/kvm/sys_regs.c | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)
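
For context, kvm_vcpu_pmu_restore_guest() is the VHE-only helper that
reapplies the host/guest EL0 filtering to the hardware counters backing
the guest's PMU. A simplified sketch, paraphrased from
arch/arm64/kvm/pmu.c of that era (helper names approximate, not
verbatim kernel code):

void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu_events *pmu;
	u32 events_guest, events_host;

	/* EL0 filtering in hardware is only needed on VHE with PMUv3 */
	if (!kvm_arm_support_pmu_v3() || !has_vhe())
		return;

	preempt_disable();
	pmu = kvm_get_pmu_events();
	events_guest = pmu->events_guest;
	events_host = pmu->events_host;

	/* Count guest-attributed events at EL0, suppress host-only ones */
	kvm_vcpu_pmu_enable_el0(events_guest);
	kvm_vcpu_pmu_disable_el0(events_host);
	preempt_enable();
}

The events_guest/events_host bitmaps are only populated once the perf
events backing the guest's counters exist, which is why the call
ordering discussed below matters.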

Comments

Marc Zyngier March 28, 2023, 11:08 a.m. UTC | #1
On Tue, 28 Mar 2023 04:47:25 +0100,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Currently, with VHE, KVM enables EL0 event counting for the guest
> on vcpu_load(), or as part of the PMU register emulation process
> when needed.  However, in the migration case (with VHE), this
> handling is missing.  So, enable it on the first KVM_RUN (with
> VHE) after migration, when needed.

It wasn't completely clear to me how the migration case was affected
by this until I started digging into the call stack:

At vcpu_load() time, the PMCR_EL0 effects haven't been propagated yet (the
events haven't been created, as this is what kvm_pmu_handle_pmcr()
does on first run). So there is an ordering inversion between
kvm_pmu_handle_pmcr() and kvm_vcpu_pmu_restore_guest().

Moving the latter call into the former fixes the issue, effectively
emulating an extra write to PMCR_EL0.
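
To make the inversion concrete, a rough sketch of the two orderings on
a freshly migrated vCPU (VHE assumed; paraphrased flow, not a verbatim
call trace):

	/* Before this patch, on the first KVM_RUN after migration: */
	vcpu_load()
	  -> kvm_vcpu_pmu_restore_guest()  /* no perf events yet: nothing to filter */
	KVM_RUN (first run)
	  -> kvm_pmu_handle_pmcr()         /* events created, but the EL0 filter is never reapplied */

	/* After this patch: */
	KVM_RUN (first run)
	  -> kvm_pmu_handle_pmcr()
	       -> kvm_vcpu_pmu_restore_guest()  /* events exist: EL0 counting restored */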

I think it would be worth capturing some of the above in the commit
message so that it doesn't get lost...

> 
> Fixes: d0c94c49792c ("KVM: arm64: Restore PMU configuration on first run")
> Cc: stable@vger.kernel.org
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/kvm/pmu-emul.c | 1 +
>  arch/arm64/kvm/sys_regs.c | 1 -
>  2 files changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index c243b10f3e15..5eca0cdd961d 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -558,6 +558,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  		for_each_set_bit(i, &mask, 32)
>  			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
>  	}
> +	kvm_vcpu_pmu_restore_guest(vcpu);
>  }
>  
>  static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1b2c161120be..34688918c811 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -794,7 +794,6 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  		if (!kvm_supports_32bit_el0())
>  			val |= ARMV8_PMU_PMCR_LC;
>  		kvm_pmu_handle_pmcr(vcpu, val);
> -		kvm_vcpu_pmu_restore_guest(vcpu);
>  	} else {
>  		/* PMCR.P & PMCR.C are RAZ */
>  		val = __vcpu_sys_reg(vcpu, PMCR_EL0)

With the nitpicking above addressed, and should this go into 6.3 as a
fix:

Reviewed-by: Marc Zyngier <maz@kernel.org>

I can otherwise take it into 6.4, depending on what Oliver decides to
do.

Thanks,

	M.
Reiji Watanabe March 28, 2023, 10:37 p.m. UTC | #2
Hi Marc,

On Tue, Mar 28, 2023 at 12:08:31PM +0100, Marc Zyngier wrote:
> On Tue, 28 Mar 2023 04:47:25 +0100,
> Reiji Watanabe <reijiw@google.com> wrote:
> > 
> > Currently, with VHE, KVM enables EL0 event counting for the guest
> > on vcpu_load(), or as part of the PMU register emulation process
> > when needed.  However, in the migration case (with VHE), this
> > handling is missing.  So, enable it on the first KVM_RUN (with
> > VHE) after migration, when needed.
> 
> It wasn't completely clear to me how the migration case was affected
> by this until I started digging into the call stack:
> 
> At vcpu_load() time, the PMCR_EL0 effects haven't been propagated yet (the
> events haven't been created, as this is what kvm_pmu_handle_pmcr()
> does on first run). So there is an ordering inversion between
> kvm_pmu_handle_pmcr() and kvm_vcpu_pmu_restore_guest().
> 
> Moving the latter call into the former fixes the issue, effectively
> emulating an extra write to PMCR_EL0.
> 
> I think it would be worth capturing some of the above in the commit
> message so that it doesn't get lost...

I agree with that. I will add the explanation to the commit message,
and will post v2.

> 
> > 
> > Fixes: d0c94c49792c ("KVM: arm64: Restore PMU configuration on first run")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/kvm/pmu-emul.c | 1 +
> >  arch/arm64/kvm/sys_regs.c | 1 -
> >  2 files changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > index c243b10f3e15..5eca0cdd961d 100644
> > --- a/arch/arm64/kvm/pmu-emul.c
> > +++ b/arch/arm64/kvm/pmu-emul.c
> > @@ -558,6 +558,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
> >  		for_each_set_bit(i, &mask, 32)
> >  			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
> >  	}
> > +	kvm_vcpu_pmu_restore_guest(vcpu);
> >  }
> >  
> >  static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 1b2c161120be..34688918c811 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -794,7 +794,6 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >  		if (!kvm_supports_32bit_el0())
> >  			val |= ARMV8_PMU_PMCR_LC;
> >  		kvm_pmu_handle_pmcr(vcpu, val);
> > -		kvm_vcpu_pmu_restore_guest(vcpu);
> >  	} else {
> >  		/* PMCR.P & PMCR.C are RAZ */
> >  		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
> 
> With the nitpicking above addressed, and should this go into 6.3 as a
> fix:
> 
> Reviewed-by: Marc Zyngier <maz@kernel.org>

Thank you!
Reiji


> 
> I can otherwise take it into 6.4, depending on what Oliver decides to
> do.
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.

Patch

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index c243b10f3e15..5eca0cdd961d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -558,6 +558,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		for_each_set_bit(i, &mask, 32)
 			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
 	}
+	kvm_vcpu_pmu_restore_guest(vcpu);
 }
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1b2c161120be..34688918c811 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -794,7 +794,6 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		if (!kvm_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
 		kvm_pmu_handle_pmcr(vcpu, val);
-		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
 		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
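
Once the hunk above is applied, the tail of kvm_pmu_handle_pmcr() ends
up as below (the comments are added here for explanation and are not
part of the patch):

		for_each_set_bit(i, &mask, 32)
			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
	}

	/*
	 * The PMCR_EL0 side effects (event creation/reset) have now taken
	 * place, so reapplying the guest's EL0 event filter here also
	 * covers the first KVM_RUN after a migration.
	 */
	kvm_vcpu_pmu_restore_guest(vcpu);
}

Since access_pmcr() still calls kvm_pmu_handle_pmcr(), a trapped guest
write to PMCR_EL0 gets the same treatment, which is why the separate
kvm_vcpu_pmu_restore_guest() call in sys_regs.c can be dropped.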