
[v6,02/10] KVM: x86/pmu: Return #GP if user sets the GLOBAL_STATUS reserved bits

Message ID 20230530060423.32361-3-likexu@tencent.com (mailing list archive)
State New, archived
Series KVM: x86: Add AMD Guest PerfMonV2 PMU support

Commit Message

Like Xu May 30, 2023, 6:04 a.m. UTC
From: Like Xu <likexu@tencent.com>

Return #GP if KVM userspace attempts to set a reserved bit for the guest.
If userspace sets reserved bits when restoring the
MSR_CORE_PERF_GLOBAL_STATUS register, those bits are unexpectedly
returned when the guest reads the register and cannot be cleared from
inside the guest, leaving the guest's PMI handler very confused.
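
For illustration only (not part of this patch), a rough sketch of the
guest-side sequence that ends up confused; this is roughly what the
Linux PMI handler (intel_pmu_handle_irq()) does:

	u64 status;

	/* Read the overflow status on PMI entry. */
	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);

	/* ... service the overflowed counters indicated by 'status' ... */

	/*
	 * Ack by writing the handled bits to GLOBAL_OVF_CTRL.  A reserved
	 * bit planted by userspace has no architecturally defined way to
	 * be cleared here, so the guest keeps seeing it on every PMI.
	 */
	wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, status);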

Note, reusing global_ovf_ctrl_mask as global_status_mask would break if
KVM adds support for newer versions of the Intel architectural PMU.

Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

Sean Christopherson June 2, 2023, 9:59 p.m. UTC | #1
On Tue, May 30, 2023, Like Xu wrote:
> From: Like Xu <likexu@tencent.com>
> 
> Return #GP if KVM userspace attempts to set a reserved bit for the guest.

It's not a #GP; it's simply an error.

> If userspace sets reserved bits when restoring the
> MSR_CORE_PERF_GLOBAL_STATUS register, those bits are unexpectedly
> returned when the guest reads the register and cannot be cleared from
> inside the guest, leaving the guest's PMI handler very confused.
> 
> Note, reusing global_ovf_ctrl_mask as global_status_mask would break if
> KVM adds support for newer versions of the Intel architectural PMU.
> 
> Signed-off-by: Like Xu <likexu@tencent.com>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 1f9c3e916a21..343b3182b7f4 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -399,7 +399,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  			reprogram_fixed_counters(pmu, data);
>  		break;
>  	case MSR_CORE_PERF_GLOBAL_STATUS:
> -		if (!msr_info->host_initiated)
> +		/*
> +		 * Caution, the assumption here is that some of the bits (such as
> +		 * ASCI, CTR_FREEZE, and LBR_FREEZE) are not yet supported by KVM.
> +		 */

A comment wasn't what I had in mind when I objected to "good enough"[*].  Luckily,
there's no need to add another mask.  After rereading the SDM, there isn't actually
a divergence between GLOBAL_STATUS and GLOBAL_OVF_CTRL; Intel just renamed GLOBAL_OVF_CTRL
to GLOBAL_STATUS_RESET and, for some asinine reason, added separate entries in the
architectural MSRs table instead of adding a redirect.

I'll post and slot in a patch to do s/global_ovf_ctrl_mask/global_status_mask/, along
with a comment explaining the rename and how the "reset" MSR is always tied to the
status MSR, regardless of what it's named.

[*] https://lore.kernel.org/all/37a18b89-c0c3-4c88-7f07-072573ac0c92@gmail.com
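
For reference, a rough sketch (not Sean's actual follow-up patch) of what
reusing a single renamed mask could look like in intel_pmu_set_msr(); the
field name global_status_mask is assumed from the discussion above, and the
GLOBAL_OVF_CTRL case mirrors the existing upstream logic:

	case MSR_CORE_PERF_GLOBAL_STATUS:
		if (!msr_info->host_initiated ||
		    (data & pmu->global_status_mask))
			return 1; /* RO MSR from the guest's perspective */

		pmu->global_status = data;
		break;
	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
		/*
		 * GLOBAL_OVF_CTRL, a.k.a. GLOBAL_STATUS_RESET, clears bits in
		 * GLOBAL_STATUS, so it shares GLOBAL_STATUS's reserved bits.
		 */
		if (data & pmu->global_status_mask)
			return 1;

		if (!msr_info->host_initiated)
			pmu->global_status &= ~data;
		break;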

Patch

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1f9c3e916a21..343b3182b7f4 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -399,7 +399,11 @@  static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			reprogram_fixed_counters(pmu, data);
 		break;
 	case MSR_CORE_PERF_GLOBAL_STATUS:
-		if (!msr_info->host_initiated)
+		/*
+		 * Caution, the assumption here is that some of the bits (such as
+		 * ASCI, CTR_FREEZE, and LBR_FREEZE) are not yet supported by KVM.
+		 */
+		if (!msr_info->host_initiated || (data & pmu->global_ovf_ctrl_mask))
 			return 1; /* RO MSR */
 
 		pmu->global_status = data;