
[v2,13/18] KVM: nVMX: do not skip VMEnter instruction that succeeds

Message ID 20180828160459.14093-14-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: nVMX: add option to perform early consistency checks via H/W

Commit Message

Sean Christopherson Aug. 28, 2018, 4:04 p.m. UTC
A successful VMEnter is essentially a fancy indirect branch that
pulls the target RIP from the VMCS.  Skipping the instruction is
unnecessary (RIP will get overwritten by the VMExit handler) and
is problematic because it can incorrectly suppress a #DB due to
EFLAGS.TF when a VMFail is detected by hardware, which happens
after we have already skipped the instruction.
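
For reference, the distinction that matters here is between the VMX-local
skip_emulated_instruction() and the common kvm_skip_emulated_instruction().
The sketch below is a simplified paraphrase, not the verbatim vmx.c/x86.c
code; in particular the #DB injection is condensed (the real helper goes
through kvm_vcpu_do_singlestep() and the guest-debug machinery):

/* VMX-local helper: advance RIP past the instruction and drop any
 * interrupt shadow.  Note there is no EFLAGS.TF handling here. */
static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
	unsigned long rip = kvm_rip_read(vcpu);

	rip += vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
	kvm_rip_write(vcpu, rip);

	/* Skipping the instruction also ends any interrupt shadow. */
	vmx_set_interrupt_shadow(vcpu, 0);
}

/* Common helper: additionally synthesizes the single-step #DB the guest
 * expects when it is single-stepping with EFLAGS.TF.  This is the trap
 * that is lost if the VMEnter instruction is skipped before we know
 * whether it will VMFail. */
int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
	unsigned long rflags = kvm_get_rflags(vcpu);

	kvm_x86_ops->skip_emulated_instruction(vcpu);

	if (unlikely(rflags & X86_EFLAGS_TF))
		kvm_queue_exception(vcpu, DB_VECTOR);	/* condensed */

	return 1;	/* real code reports whether emulation completed */
}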

Now that nested_vmx_run() no longer prematurely skips the
instruction, use the full kvm_skip_emulated_instruction() in the
VMFail path of nested_vmx_vmexit().  We also need to explicitly
update the GUEST_INTERRUPTIBILITY_INFO when loading vmcs12 host
state.
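
Condensed from the hunks below, the net result looks roughly like this
(not a verbatim copy of the resulting code; the interrupt shadow note
assumes the old implicit clear came from the removed
skip_emulated_instruction() call in nested_vmx_run()):

/* nested_vmx_vmexit(), hardware-detected VMFail: */
	nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);

	/* RIP still points at VMLAUNCH/VMRESUME; skip it now so a pending
	 * EFLAGS.TF single-step #DB is delivered instead of dropped. */
	kvm_skip_emulated_instruction(vcpu);

	load_vmcs12_mmu_host_state(vcpu, vmcs12);
	vmx->fail = 0;

/* load_vmcs12_host_state(): no early skip means no implicit clear of
 * vmcs01's interrupt shadow, so clear GUEST_INTERRUPTIBILITY_INFO here. */
	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
	vmx_set_interrupt_shadow(vcpu, 0);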

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

Comments

Jim Mattson Sept. 20, 2018, 8:08 p.m. UTC | #1
On Tue, Aug 28, 2018 at 9:04 AM, Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
> A successful VMEnter is essentially a fancy indirect branch that
> pulls the target RIP from the VMCS.  Skipping the instruction is
> unnecessary (RIP will get overwritten by the VMExit handler) and
> is problematic because it can incorrectly suppress a #DB due to
> EFLAGS.TF when a VMFail is detected by hardware, which happens
> after we have already skipped the instruction.
>
> Now that nested_vmx_run() no longer prematurely skips the
> instruction, use the full kvm_skip_emulated_instruction() in the
> VMFail path of nested_vmx_vmexit().  We also need to explicitly
> update the GUEST_INTERRUPTIBILITY_INFO when loading vmcs12 host
> state.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index db4751c626ce..c94d600baefe 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -12735,15 +12735,6 @@  static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 		goto out;
 	}
 
-	/*
-	 * After this point, the trap flag no longer triggers a singlestep trap
-	 * on the vm entry instructions; don't call kvm_skip_emulated_instruction.
-	 * This is not 100% correct; for performance reasons, we delegate most
-	 * of the checks on host state to the processor.  If those fail,
-	 * the singlestep trap is missed.
-	 */
-	skip_emulated_instruction(vcpu);
-
 	/*
 	 * We're finally done with prerequisite checking, and can start with
 	 * the nested entry.
@@ -13135,6 +13126,8 @@  static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
 	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
 	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
+	vmx_set_interrupt_shadow(vcpu, 0);
+
 	/*
 	 * Note that calling vmx_set_cr0 is important, even if cr0 hasn't
 	 * actually changed, because vmx_set_cr0 refers to efer set above.
@@ -13390,18 +13383,14 @@  static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	 * in L1 which thinks it just finished a VMLAUNCH or
 	 * VMRESUME instruction, so we need to set the failure
 	 * flag and the VM-instruction error field of the VMCS
-	 * accordingly.
+	 * accordingly, and skip the emulated instruction.
 	 */
 	nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
 
+	kvm_skip_emulated_instruction(vcpu);
+
 	load_vmcs12_mmu_host_state(vcpu, vmcs12);
 
-	/*
-	 * The emulated instruction was already skipped in
-	 * nested_vmx_run, but the updated RIP was never
-	 * written back to the vmcs01.
-	 */
-	skip_emulated_instruction(vcpu);
 	vmx->fail = 0;
 }