diff mbox series

[3/3] KVM: x86: move kvm_inject_gp up from kvm_set_dr to callers

Message ID 20210202165141.88275-4-pbonzini@redhat.com (mailing list archive)
State New, archived
Series: use kvm_complete_insn_gp more

Commit Message

Paolo Bonzini Feb. 2, 2021, 4:51 p.m. UTC
Push the injection of #GP up to the callers, so that they can just use
kvm_complete_insn_gp. __kvm_set_dr is pretty much what the callers can use
together with kvm_complete_insn_gp, so rename it to kvm_set_dr and drop
the old kvm_set_dr wrapper.

This allows nested VMX code, which really wanted to use __kvm_set_dr, to
use the right function.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 14 +++++++-------
 arch/x86/kvm/vmx/vmx.c | 19 ++++++++++---------
 arch/x86/kvm/x86.c     | 19 +++++--------------
 3 files changed, 22 insertions(+), 30 deletions(-)

Comments

Sean Christopherson Feb. 2, 2021, 6:17 p.m. UTC | #1
On Tue, Feb 02, 2021, Paolo Bonzini wrote:
> Push the injection of #GP up to the callers, so that they can just use
> kvm_complete_insn_gp. __kvm_set_dr is pretty much what the callers can use
> together with kvm_complete_insn_gp, so rename it to kvm_set_dr and drop
> the old kvm_set_dr wrapper.
> 
> This allows nested VMX code, which really wanted to use __kvm_set_dr, to
> use the right function.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm/svm.c | 14 +++++++-------
>  arch/x86/kvm/vmx/vmx.c | 19 ++++++++++---------
>  arch/x86/kvm/x86.c     | 19 +++++--------------
>  3 files changed, 22 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index c0d41a6920f0..818cf3babef2 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2599,6 +2599,7 @@ static int dr_interception(struct vcpu_svm *svm)
>  {
>  	int reg, dr;
>  	unsigned long val;
> +	int err;
>  
>  	if (svm->vcpu.guest_debug == 0) {
>  		/*
> @@ -2617,19 +2618,18 @@ static int dr_interception(struct vcpu_svm *svm)
>  	reg = svm->vmcb->control.exit_info_1 & SVM_EXITINFO_REG_MASK;
>  	dr = svm->vmcb->control.exit_code - SVM_EXIT_READ_DR0;
>  
> +	if (!kvm_require_dr(&svm->vcpu, dr & 15))

Purely because I suck at reading base-10 bitwise operations, can we do "dr & 0xf"?
This also creates separate logic for writes (AND versus SUB); can you also
convert the other 'dr - 16'?

Alternatively, grab the "mov to DR" flag early on, but that feels less
readable, e.g.

	mov_to_dr = dr & 0x10;
	dr &= 0xf;

> +		return 1;
> +
>  	if (dr >= 16) { /* mov to DRn */
> -		if (!kvm_require_dr(&svm->vcpu, dr - 16))
> -			return 1;
>  		val = kvm_register_read(&svm->vcpu, reg);
> -		kvm_set_dr(&svm->vcpu, dr - 16, val);
> +		err = kvm_set_dr(&svm->vcpu, dr - 16, val);
>  	} else {
> -		if (!kvm_require_dr(&svm->vcpu, dr))
> -			return 1;
> -		kvm_get_dr(&svm->vcpu, dr, &val);
> +		err = kvm_get_dr(&svm->vcpu, dr, &val);

Technically, 'err' needs to be checked, else 'reg' will theoretically be
clobbered with garbage.  I say "theoretically", because kvm_get_dr() always
returns '0'; the CR4.DE=1 behavior is handled by kvm_require_dr(), presumably
due to it being a #UD instead of #GP.  AFAICT, you can simply add a prep patch
to change the return type to void.

Side topic, is the kvm_require_dr() check needed on SVM interception?  The APM
states:

  All normal exception checks take precedence over the implicit DR6/DR7 writes.

I can't find anything that would suggest the CR4.DE=1 #UD isn't a "normal"
exception.

>  		kvm_register_write(&svm->vcpu, reg, val);
>  	}
>  
> -	return kvm_skip_emulated_instruction(&svm->vcpu);
> +	return kvm_complete_insn_gp(&svm->vcpu, err);
>  }
>  
>  static int cr8_write_interception(struct vcpu_svm *svm)
Paolo Bonzini Feb. 3, 2021, 9:24 a.m. UTC | #2
On 02/02/21 19:17, Sean Christopherson wrote:
>> @@ -2617,19 +2618,18 @@ static int dr_interception(struct vcpu_svm *svm)
>>   	reg = svm->vmcb->control.exit_info_1 & SVM_EXITINFO_REG_MASK;
>>   	dr = svm->vmcb->control.exit_code - SVM_EXIT_READ_DR0;
>>   
>> +	if (!kvm_require_dr(&svm->vcpu, dr & 15))
> 
> Purely because I suck at reading base-10 bitwise operations, can we do "dr & 0xf"?

I would never have guessed that having this => 
https://www.youtube.com/watch?v=nfUY3_XVKHI as a kid would give me a 
competitive advantage as a KVM maintainer. :)

(Aside: that game was incredibly popular in the 80s in Italy and, as you 
can see from the advertisement at 
https://www.youtube.com/watch?v=o6L9cegnCrw, it even had "the binary 
teacher" in it, yes in English, even though no one spoke English fluently 
at the time.  The guy who invented it was an absolute genius.  Also, the 
name means "branches".)

But seriously: I think the usage of "-" was intentional because the AMD 
exit codes have READ first and WRITE second---but it's (almost) a 
coincidence that CR/DR intercepts are naturally aligned and bit 4 means 
read vs. write.

So v2 will remove the kvm_require_dr (I tested your hypothesis with 
debug.flat and KVM_DEBUGREG_WONT_EXIT disabled, and you're right) and have:

         dr = svm->vmcb->control.exit_code - SVM_EXIT_READ_DR0;
         if (dr >= 16) { /* Move to dr.  */
                 dr -= 16;
                 val = kvm_register_read(&svm->vcpu, reg);
                 err = kvm_set_dr(&svm->vcpu, dr, val);
         } else {
                 kvm_get_dr(&svm->vcpu, dr, &val);
                 kvm_register_write(&svm->vcpu, reg, val);
         }

Paolo

> Technically, 'err' needs to be checked, else 'reg' will theoretically be
> clobbered with garbage.  I say "theoretically", because kvm_get_dr() always
> returns '0'; the CR4.DE=1 behavior is handled by kvm_require_dr(), presumably
> due to it being a #UD instead of #GP.  AFAICT, you can simply add a prep patch
> to change the return type to void.
> 
> Side topic, is the kvm_require_dr() check needed on SVM interception?  The APM
> states:
> 
>    All normal exception checks take precedence over the implicit DR6/DR7 writes.
> 
> I can't find anything that would suggest the CR4.DE=1 #UD isn't a "normal"
> exception.
> 
>>   		kvm_register_write(&svm->vcpu, reg, val);
>>   	}
>>   
>> -	return kvm_skip_emulated_instruction(&svm->vcpu);
>> +	return kvm_complete_insn_gp(&svm->vcpu, err);
>>   }
>>   
>>   static int cr8_write_interception(struct vcpu_svm *svm)
>

Patch

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c0d41a6920f0..818cf3babef2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2599,6 +2599,7 @@  static int dr_interception(struct vcpu_svm *svm)
 {
 	int reg, dr;
 	unsigned long val;
+	int err;
 
 	if (svm->vcpu.guest_debug == 0) {
 		/*
@@ -2617,19 +2618,18 @@  static int dr_interception(struct vcpu_svm *svm)
 	reg = svm->vmcb->control.exit_info_1 & SVM_EXITINFO_REG_MASK;
 	dr = svm->vmcb->control.exit_code - SVM_EXIT_READ_DR0;
 
+	if (!kvm_require_dr(&svm->vcpu, dr & 15))
+		return 1;
+
 	if (dr >= 16) { /* mov to DRn */
-		if (!kvm_require_dr(&svm->vcpu, dr - 16))
-			return 1;
 		val = kvm_register_read(&svm->vcpu, reg);
-		kvm_set_dr(&svm->vcpu, dr - 16, val);
+		err = kvm_set_dr(&svm->vcpu, dr - 16, val);
 	} else {
-		if (!kvm_require_dr(&svm->vcpu, dr))
-			return 1;
-		kvm_get_dr(&svm->vcpu, dr, &val);
+		err = kvm_get_dr(&svm->vcpu, dr, &val);
 		kvm_register_write(&svm->vcpu, reg, val);
 	}
 
-	return kvm_skip_emulated_instruction(&svm->vcpu);
+	return kvm_complete_insn_gp(&svm->vcpu, err);
 }
 
 static int cr8_write_interception(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a07fce6d0bbb..41a26d98fb95 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5099,6 +5099,7 @@  static int handle_dr(struct kvm_vcpu *vcpu)
 {
 	unsigned long exit_qualification;
 	int dr, dr7, reg;
+	int err = 1;
 
 	exit_qualification = vmx_get_exit_qual(vcpu);
 	dr = exit_qualification & DEBUG_REG_ACCESS_NUM;
@@ -5107,9 +5108,9 @@  static int handle_dr(struct kvm_vcpu *vcpu)
 	if (!kvm_require_dr(vcpu, dr))
 		return 1;
 
-	/* Do not handle if the CPL > 0, will trigger GP on re-entry */
-	if (!kvm_require_cpl(vcpu, 0))
-		return 1;
+	if (kvm_x86_ops.get_cpl(vcpu) > 0)
+		goto out;
+
 	dr7 = vmcs_readl(GUEST_DR7);
 	if (dr7 & DR7_GD) {
 		/*
@@ -5146,14 +5147,14 @@  static int handle_dr(struct kvm_vcpu *vcpu)
 	if (exit_qualification & TYPE_MOV_FROM_DR) {
 		unsigned long val;
 
-		if (kvm_get_dr(vcpu, dr, &val))
-			return 1;
+		err = kvm_get_dr(vcpu, dr, &val);
 		kvm_register_write(vcpu, reg, val);
-	} else
-		if (kvm_set_dr(vcpu, dr, kvm_register_readl(vcpu, reg)))
-			return 1;
+	} else {
+		err = kvm_set_dr(vcpu, dr, kvm_register_readl(vcpu, reg));
+	}
 
-	return kvm_skip_emulated_instruction(vcpu);
+out:
+	return kvm_complete_insn_gp(vcpu, err);
 }
 
 static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index edbeb162012b..b748bf0d6d33 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1147,7 +1147,7 @@  static u64 kvm_dr6_fixed(struct kvm_vcpu *vcpu)
 	return fixed;
 }
 
-static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
+int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 {
 	size_t size = ARRAY_SIZE(vcpu->arch.db);
 
@@ -1160,13 +1160,13 @@  static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 	case 4:
 	case 6:
 		if (!kvm_dr6_valid(val))
-			return -1; /* #GP */
+			return 1; /* #GP */
 		vcpu->arch.dr6 = (val & DR6_VOLATILE) | kvm_dr6_fixed(vcpu);
 		break;
 	case 5:
 	default: /* 7 */
 		if (!kvm_dr7_valid(val))
-			return -1; /* #GP */
+			return 1; /* #GP */
 		vcpu->arch.dr7 = (val & DR7_VOLATILE) | DR7_FIXED_1;
 		kvm_update_dr7(vcpu);
 		break;
@@ -1174,15 +1174,6 @@  static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 
 	return 0;
 }
-
-int kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
-{
-	if (__kvm_set_dr(vcpu, dr, val)) {
-		kvm_inject_gp(vcpu, 0);
-		return 1;
-	}
-	return 0;
-}
 EXPORT_SYMBOL_GPL(kvm_set_dr);
 
 int kvm_get_dr(struct kvm_vcpu *vcpu, int dr, unsigned long *val)
@@ -6595,7 +6586,7 @@  static int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr,
 			   unsigned long value)
 {
 
-	return __kvm_set_dr(emul_to_vcpu(ctxt), dr, value);
+	return kvm_set_dr(emul_to_vcpu(ctxt), dr, value);
 }
 
 static u64 mk_cr_64(u64 curr_cr, u32 new_val)
@@ -8636,7 +8627,7 @@  static void enter_smm(struct kvm_vcpu *vcpu)
 	dt.address = dt.size = 0;
 	static_call(kvm_x86_set_idt)(vcpu, &dt);
 
-	__kvm_set_dr(vcpu, 7, DR7_FIXED_1);
+	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
 
 	cs.selector = (vcpu->arch.smbase >> 4) & 0xffff;
 	cs.base = vcpu->arch.smbase;