
KVM: VMX: Handle single-step #DB for EMULTYPE_SKIP on EPT misconfig

Message ID 20190823213115.31908-1-sean.j.christopherson@intel.com (mailing list archive)
State New, archived

Commit Message

Sean Christopherson Aug. 23, 2019, 9:31 p.m. UTC
VMX's EPT misconfig flow for the fast-MMIO path falls back to decoding
the instruction to determine its length when running as a guest
(Hyper-V doesn't fill VMCS.VM_EXIT_INSTRUCTION_LEN because it's
technically not defined for EPT misconfigs).  Rather than implement the
slow skip in VMX's generic skip_emulated_instruction(),
handle_ept_misconfig() calls kvm_emulate_instruction() directly with
EMULTYPE_SKIP, which intentionally doesn't do single-step detection,
and so handle_ept_misconfig() misses a single-step #DB.
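
For context, the single-step detection lives in the common
kvm_skip_emulated_instruction() helper, not in the emulator's
EMULTYPE_SKIP path.  A simplified sketch of that helper (paraphrased,
not verbatim kernel code; the singlestep helper's exact name and
signature are assumed):

  int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
  {
          unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);

          /* Vendor callback advances RIP (or falls back to the emulator). */
          if (!kvm_x86_ops->skip_emulated_instruction(vcpu))
                  return 0;

          /*
           * This is what a bare kvm_emulate_instruction(EMULTYPE_SKIP)
           * call bypasses: if TF was set before the skipped instruction,
           * the guest is owed a single-step #DB (or userspace a
           * KVM_EXIT_DEBUG).
           */
          if (unlikely(rflags & X86_EFLAGS_TF))
                  return kvm_vcpu_do_singlestep(vcpu);

          return 1;
  }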

Rework the EPT misconfig fallback case to route it through
kvm_skip_emulated_instruction() so that single-step #DBs and interrupt
shadow updates are handled automatically, i.e. make VMX's slow skip
logic match SVM's, and stop having the SVM flow intentionally avoid the
shadow update.
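
Put differently, the intended call chain on a fast-MMIO EPT misconfig
becomes roughly the following (illustrative sketch only, assuming the
vendor callback already returns an int courtesy of the prerequisite
cleanup series):

  /*
   * handle_ept_misconfig()                       fast-MMIO hit, vmx.c
   *   -> kvm_skip_emulated_instruction()         common code, x86.c
   *        -> kvm_x86_ops->skip_emulated_instruction()
   *             - uses VM_EXIT_INSTRUCTION_LEN on bare metal, or
   *             - falls back to kvm_emulate_instruction(EMULTYPE_SKIP)
   *               when running under another hypervisor (the slow skip)
   *             - and clears the interrupt shadow in either case
   *        -> single-step #DB / KVM_EXIT_DEBUG if RFLAGS.TF was set
   */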

Alternatively, handle_ept_misconfig() could manually handle single-step
detection, but that would leave EMULTYPE_SKIP with split logic for the
interrupt shadow vs. single-step #DBs, and split emulator logic is
largely what led to this mess in the first place.

Modifying SVM to mirror the VMX flow isn't really an option as SVM's
case isn't limited to a specific exit reason, i.e. handling the slow
skip in skip_emulated_instruction() is mandatory for all intents and
purposes.
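
(Background not spelled out above: SVM only gets a hardware-provided
next RIP when the NextRIP-save feature is present and the intercept in
question populates it, so the fallback can be needed for practically
any exit.  Simplified sketch of that decision, paraphrasing the svm.c
context visible in the hunk below:)

  /*
   * next_rip may legitimately be zero for any exit reason, e.g. on
   * hardware without NextRIP save, hence the emulator fallback lives
   * in the generic skip helper rather than in a single exit handler.
   */
  if (nrips && svm->vmcb->control.next_rip != 0)
          svm->next_rip = svm->vmcb->control.next_rip;

  if (!svm->next_rip)     /* no hardware-provided instruction length */
          return kvm_emulate_instruction(vcpu, EMULTYPE_SKIP);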

Drop VMX's skip_emulated_instruction() wrapper now that the underlying
skip can fail, and instead WARN at call sites where a failure is
unexpected, e.g. if exit_reason somehow becomes corrupted.

Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Fixes: d391f12070672 ("x86/kvm/vmx: do not use vm-exit instruction length for fast MMIO when running nested")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---

*** LOOK HERE ***

This patch applies on top of my recent emulation cleanup[1][2] because
it has non-trivial conflicts with that series; dealing with those
seemed like a waste of time, and this doesn't seem like a candidate for
stable.  Let me know if you'd prefer it to be respun without the
dependency.

Sadly/ironically, this unwinds some of the logic that was recently
added by Vitaly at my suggestion.  Hindsight is 20/20 and all that...

[1] https://lkml.kernel.org/r/20190823010709.24879-1-sean.j.christopherson@intel.com
[2] https://patchwork.kernel.org/cover/11110331/

 arch/x86/kvm/svm.c     | 17 +++++++-------
 arch/x86/kvm/vmx/vmx.c | 52 ++++++++++++++++++------------------------
 arch/x86/kvm/x86.c     |  6 ++++-
 3 files changed, 36 insertions(+), 39 deletions(-)

Comments

Sean Christopherson Aug. 27, 2019, 8:43 p.m. UTC | #1
On Fri, Aug 23, 2019 at 02:31:15PM -0700, Sean Christopherson wrote:

Paolo and/or Radim,

Please ignore this patch; I'll fold it into the aforementioned emulation
cleanup since I need to spin v2 of that series.

Patch

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f374f11358b7..93137d5f71f8 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -777,14 +777,15 @@  static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 		svm->next_rip = svm->vmcb->control.next_rip;
 	}
 
-	if (!svm->next_rip)
-		return kvm_emulate_instruction(vcpu, EMULTYPE_SKIP);
-
-	if (svm->next_rip - kvm_rip_read(vcpu) > MAX_INST_SIZE)
-		printk(KERN_ERR "%s: ip 0x%lx next 0x%llx\n",
-		       __func__, kvm_rip_read(vcpu), svm->next_rip);
-
-	kvm_rip_write(vcpu, svm->next_rip);
+	if (!svm->next_rip) {
+		if (!kvm_emulate_instruction(vcpu, EMULTYPE_SKIP))
+			return 0;
+	} else {
+		if (svm->next_rip - kvm_rip_read(vcpu) > MAX_INST_SIZE)
+			pr_err("%s: ip 0x%lx next 0x%llx\n",
+			       __func__, kvm_rip_read(vcpu), svm->next_rip);
+		kvm_rip_write(vcpu, svm->next_rip);
+	}
 	svm_set_interrupt_shadow(vcpu, 0);
 
 	return 1;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 44d868c49301..4ee1572e1a7e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1472,17 +1472,27 @@  static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-/*
- * Returns an int to be compatible with SVM implementation (which can fail).
- * Do not use directly, use skip_emulated_instruction() instead.
- */
-static int __skip_emulated_instruction(struct kvm_vcpu *vcpu)
+static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	unsigned long rip;
 
-	rip = kvm_rip_read(vcpu);
-	rip += vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
-	kvm_rip_write(vcpu, rip);
+	/*
+	 * Using VMCS.VM_EXIT_INSTRUCTION_LEN on EPT misconfig depends on
+	 * undefined behavior: Intel's SDM doesn't mandate the VMCS field be
+	 * set when EPT misconfig occurs.  In practice, real hardware updates
+	 * VM_EXIT_INSTRUCTION_LEN on EPT misconfig, but other hypervisors
+	 * (namely Hyper-V) don't set it due to it being undefined behavior,
+	 * i.e. we end up advancing IP with some random value.
+	 */
+	if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
+	    to_vmx(vcpu)->exit_reason != EXIT_REASON_EPT_MISCONFIG) {
+		rip = kvm_rip_read(vcpu);
+		rip += vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+		kvm_rip_write(vcpu, rip);
+	} else {
+		if (!kvm_emulate_instruction(vcpu, EMULTYPE_SKIP))
+			return 0;
+	}
 
 	/* skipping an emulated instruction also counts */
 	vmx_set_interrupt_shadow(vcpu, 0);
@@ -1490,11 +1500,6 @@  static int __skip_emulated_instruction(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static inline void skip_emulated_instruction(struct kvm_vcpu *vcpu)
-{
-	(void)__skip_emulated_instruction(vcpu);
-}
-
 static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -4552,7 +4557,7 @@  static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 			vcpu->arch.dr6 &= ~DR_TRAP_BITS;
 			vcpu->arch.dr6 |= dr6 | DR6_RTM;
 			if (is_icebp(intr_info))
-				skip_emulated_instruction(vcpu);
+				WARN_ON(!skip_emulated_instruction(vcpu));
 
 			kvm_queue_exception(vcpu, DB_VECTOR);
 			return 1;
@@ -5062,7 +5067,7 @@  static int handle_task_switch(struct kvm_vcpu *vcpu)
 	if (!idt_v || (type != INTR_TYPE_HARD_EXCEPTION &&
 		       type != INTR_TYPE_EXT_INTR &&
 		       type != INTR_TYPE_NMI_INTR))
-		skip_emulated_instruction(vcpu);
+		WARN_ON(!skip_emulated_instruction(vcpu));
 
 	/*
 	 * TODO: What about debug traps on tss switch?
@@ -5129,20 +5134,7 @@  static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
 	if (!is_guest_mode(vcpu) &&
 	    !kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
 		trace_kvm_fast_mmio(gpa);
-		/*
-		 * Doing kvm_skip_emulated_instruction() depends on undefined
-		 * behavior: Intel's manual doesn't mandate
-		 * VM_EXIT_INSTRUCTION_LEN to be set in VMCS when EPT MISCONFIG
-		 * occurs and while on real hardware it was observed to be set,
-		 * other hypervisors (namely Hyper-V) don't set it, we end up
-		 * advancing IP with some random value. Disable fast mmio when
-		 * running nested and keep it for real hardware in hope that
-		 * VM_EXIT_INSTRUCTION_LEN will always be set correctly.
-		 */
-		if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
-			return kvm_skip_emulated_instruction(vcpu);
-		else
-			return kvm_emulate_instruction(vcpu, EMULTYPE_SKIP);
+		return kvm_skip_emulated_instruction(vcpu);
 	}
 
 	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
@@ -7696,7 +7688,7 @@  static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.run = vmx_vcpu_run,
 	.handle_exit = vmx_handle_exit,
-	.skip_emulated_instruction = __skip_emulated_instruction,
+	.skip_emulated_instruction = skip_emulated_instruction,
 	.set_interrupt_shadow = vmx_set_interrupt_shadow,
 	.get_interrupt_shadow = vmx_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 53a16eb2aba8..9d5a2b77473b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6552,11 +6552,15 @@  int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		return 1;
 	}
 
+	/*
+	 * Note, EMULTYPE_SKIP is intended for use *only* by vendor callbacks
+	 * for kvm_skip_emulated_instruction().  The caller is responsible for
+	 * updating interruptibility state and injecting single-step #DBs.
+	 */
 	if (emulation_type & EMULTYPE_SKIP) {
 		kvm_rip_write(vcpu, ctxt->_eip);
 		if (ctxt->eflags & X86_EFLAGS_RF)
 			kvm_set_rflags(vcpu, ctxt->eflags & ~X86_EFLAGS_RF);
-		kvm_x86_ops->set_interrupt_shadow(vcpu, 0);
 		return 1;
 	}