
[v3,1/3] KVM: SEV-ES: Prevent MSR access post VMSA encryption

Message ID 20240523121828.808-2-ravi.bangoria@amd.com (mailing list archive)
State New
Series KVM: SEV-ES: Fix KVM_{GET|SET}_MSRS and LBRV handling

Commit Message

Ravi Bangoria May 23, 2024, 12:18 p.m. UTC
From: Nikunj A Dadhania <nikunj@amd.com>

KVM currently allows userspace to read/write MSRs even after the VMSA is
encrypted. This can cause unintended issues if the MSR access has side
effects. For example, while migrating a guest, userspace could attempt to
migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
the target. Fix this by preventing access to MSRs that are context-switched
via the VMSA once the VMSA is encrypted.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
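
For context, the userspace path that trips over this: during migration, a VMM
reads guest MSRs through the KVM_GET_MSRS vCPU ioctl, which lands in
svm_get_msr(). A minimal sketch of such a caller follows (the vCPU fd setup is
elided and the helper is illustrative, not part of the patch):

#include <linux/kvm.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

#define MSR_IA32_DEBUGCTLMSR	0x000001d9

/* Read one MSR from a vCPU, as a migration source would. */
static int get_debugctl(int vcpu_fd, __u64 *data)
{
	struct kvm_msrs *msrs;
	int ret;

	/* kvm_msrs ends in a flexible array, so allocate header + 1 entry. */
	msrs = calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	if (!msrs)
		return -1;

	msrs->nmsrs = 1;
	msrs->entries[0].index = MSR_IA32_DEBUGCTLMSR;

	/* KVM_GET_MSRS returns the number of MSRs successfully read. */
	ret = ioctl(vcpu_fd, KVM_GET_MSRS, msrs);
	if (ret == 1)
		*data = msrs->entries[0].data;

	free(msrs);
	return ret == 1 ? 0 : -1;
}

On the target side the same entry goes through KVM_SET_MSRS into
svm_set_msr(), which is where the unintended LBRV toggle would happen.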

Comments

Paolo Bonzini May 28, 2024, 4:31 p.m. UTC | #1
On 5/23/24 14:18, Ravi Bangoria wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> KVM currently allows userspace to read/write MSRs even after the VMSA is
> encrypted. This can cause unintended issues if the MSR access has side
> effects. For example, while migrating a guest, userspace could attempt to
> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
> the target. Fix this by preventing access to MSRs that are context-switched
> via the VMSA once the VMSA is encrypted.
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> ---
>   arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
>   1 file changed, 18 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 3d0549ca246f..489b0183f37d 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
>   	return 0;
>   }
>   
> +static bool
> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> +{
> +	return sev_es_guest(vcpu->kvm) &&
> +	       vcpu->arch.guest_state_protected &&
> +	       svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
> +	       !msr_write_intercepted(vcpu, msr_info->index);
> +}
> +
>   static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>   {
>   	struct vcpu_svm *svm = to_svm(vcpu);
>   
> +	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
> +		msr_info->data = 0;
> +		return 0;

This should return -EINVAL, not 0.  Likewise below in svm_set_msr().

Paolo

> +	}
> +
>   	switch (msr_info->index) {
>   	case MSR_AMD64_TSC_RATIO:
>   		if (!msr_info->host_initiated &&
> @@ -2988,6 +3002,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>   
>   	u32 ecx = msr->index;
>   	u64 data = msr->data;
> +
> +	if (sev_es_prevent_msr_access(vcpu, msr))
> +		return 0;
> +
>   	switch (ecx) {
>   	case MSR_AMD64_TSC_RATIO:
>
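
With the suggested change applied, the prologue of svm_get_msr() would
plausibly become (a sketch of Paolo's suggestion, not the committed code):

	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
		msr_info->data = 0;
		return -EINVAL;
	}

and the corresponding check in svm_set_msr() would likewise return -EINVAL
instead of 0.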
Ravi Bangoria May 29, 2024, 10:44 a.m. UTC | #2
On 5/28/2024 10:01 PM, Paolo Bonzini wrote:
> On 5/23/24 14:18, Ravi Bangoria wrote:
>> From: Nikunj A Dadhania <nikunj@amd.com>
>>
>> KVM currently allows userspace to read/write MSRs even after the VMSA is
>> encrypted. This can cause unintended issues if the MSR access has side
>> effects. For example, while migrating a guest, userspace could attempt to
>> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
>> the target. Fix this by preventing access to MSRs that are context-switched
>> via the VMSA once the VMSA is encrypted.
>>
>> Suggested-by: Sean Christopherson <seanjc@google.com>
>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
>> ---
>>   arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
>>   1 file changed, 18 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 3d0549ca246f..489b0183f37d 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
>>       return 0;
>>   }
>>
>> +static bool
>> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>> +{
>> +    return sev_es_guest(vcpu->kvm) &&
>> +           vcpu->arch.guest_state_protected &&
>> +           svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
>> +           !msr_write_intercepted(vcpu, msr_info->index);
>> +}
>> +
>>   static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>   {
>>       struct vcpu_svm *svm = to_svm(vcpu);
>>
>> +    if (sev_es_prevent_msr_access(vcpu, msr_info)) {
>> +        msr_info->data = 0;
>> +        return 0;
> 
> This should return -EINVAL, not 0.  Likewise below in svm_set_msr().

Sure.

Thanks,
Ravi
Michael Roth June 4, 2024, 11:10 p.m. UTC | #3
On Wed, May 29, 2024 at 04:14:10PM +0530, Ravi Bangoria wrote:
> On 5/28/2024 10:01 PM, Paolo Bonzini wrote:
> > On 5/23/24 14:18, Ravi Bangoria wrote:
> >> From: Nikunj A Dadhania <nikunj@amd.com>
> >>
> >> KVM currently allows userspace to read/write MSRs even after the VMSA is
> >> encrypted. This can cause unintended issues if the MSR access has side
> >> effects. For example, while migrating a guest, userspace could attempt to
> >> migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
> >> the target. Fix this by preventing access to MSRs that are context-switched
> >> via the VMSA once the VMSA is encrypted.
> >>
> >> Suggested-by: Sean Christopherson <seanjc@google.com>
> >> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> >> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> >> ---
> >>   arch/x86/kvm/svm/svm.c | 18 ++++++++++++++++++
> >>   1 file changed, 18 insertions(+)
> >>
> >> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> >> index 3d0549ca246f..489b0183f37d 100644
> >> --- a/arch/x86/kvm/svm/svm.c
> >> +++ b/arch/x86/kvm/svm/svm.c
> >> @@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
> >>       return 0;
> >>   }
> >>
> >> +static bool
> >> +sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >> +{
> >> +    return sev_es_guest(vcpu->kvm) &&
> >> +           vcpu->arch.guest_state_protected &&
> >> +           svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
> >> +           !msr_write_intercepted(vcpu, msr_info->index);
> >> +}
> >> +
> >>   static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >>   {
> >>       struct vcpu_svm *svm = to_svm(vcpu);
> >>
> >> +    if (sev_es_prevent_msr_access(vcpu, msr_info)) {
> >> +        msr_info->data = 0;
> >> +        return 0;
> > 
> > This should return -EINVAL, not 0.  Likewise below in svm_set_msr().
> 
> Sure.

One consequence of this change is that older VMMs might still call
svm_get_msr()/svm_set_msr() for SEV-ES guests and won't be prepared to
handle the -EINVAL.

Newer VMMs that support KVM_SEV_INIT2, however, are already aware of the
stricter limitations on what vCPU state can be synced during guest
run-time, so newer QEMU, for instance, works with both the legacy
KVM_SEV_ES_INIT interface and KVM_SEV_INIT2.

So when using KVM_SEV_INIT2 it's okay to assume userspace can deal with
-EINVAL, whereas for legacy KVM_SEV_ES_INIT we have to assume the VMM
lacks the necessary changes to deal with -EINVAL. In that case it's
probably more appropriate to return 0 and silently no-op.

We had a similar situation with stricter limitations on fpstate syncing
for KVM_SEV_INIT2, and that was the approach taken there:

  https://lore.kernel.org/kvm/ZfRhu0GVjWeAAJMB@google.com/

so I'll submit a patch that takes the same approach.

-Mike

> 
> Thanks,
> Ravi
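
A sketch of the compromise Michael describes, assuming it keys off the same
kvm->arch.has_protected_state flag that the KVM_SEV_INIT2 path sets (the
eventual follow-up patch may differ):

	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
		msr_info->data = 0;
		/*
		 * VMMs that use KVM_SEV_INIT2 know VMSA-resident MSRs are
		 * off limits once guest state is protected and can handle
		 * the error; legacy KVM_SEV_ES_INIT VMMs cannot, so
		 * silently no-op for them.
		 */
		return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
	}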

Patch

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3d0549ca246f..489b0183f37d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2834,10 +2834,24 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
 	return 0;
 }
 
+static bool
+sev_es_prevent_msr_access(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+{
+	return sev_es_guest(vcpu->kvm) &&
+	       vcpu->arch.guest_state_protected &&
+	       svm_msrpm_offset(msr_info->index) != MSR_INVALID &&
+	       !msr_write_intercepted(vcpu, msr_info->index);
+}
+
 static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	if (sev_es_prevent_msr_access(vcpu, msr_info)) {
+		msr_info->data = 0;
+		return 0;
+	}
+
 	switch (msr_info->index) {
 	case MSR_AMD64_TSC_RATIO:
 		if (!msr_info->host_initiated &&
@@ -2988,6 +3002,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 
 	u32 ecx = msr->index;
 	u64 data = msr->data;
+
+	if (sev_es_prevent_msr_access(vcpu, msr))
+		return 0;
+
 	switch (ecx) {
 	case MSR_AMD64_TSC_RATIO: