[09/11] KVM: nVMX: Add eVMCS support to nested_vmx_check_vmentry_hw()
diff mbox series

Message ID 20181220203049.23213-1-sean.j.christopherson@intel.com
State New
Series
  • KVM: VMX: Clean up VM-Enter/VM-Exit asm code

Commit Message

Sean Christopherson Dec. 20, 2018, 8:30 p.m. UTC
Adding eVMCS support to nested early checks makes the RSP shenanigans
in vmx_vcpu_run() and nested_vmx_check_vmentry_hw() more or less
identical.  This will allow encapsulating the shenanigans in a set of
helper macros to reduce the maintenance burden and prettify the code.

Note that while this technically "fixes" eVMCS support, there isn't a
known use case for nested early checks when running on Hyper-V, i.e.
the motivation for the change is purely to allow code consolidation.

Fixes: 773e8a0425c9 ("x86/kvm: use Enlightened VMCS when running on Hyper-V")
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

Patch

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9be3156f972d..d6d88dfad39b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2705,7 +2705,7 @@  static int nested_vmx_check_vmentry_postreqs(struct kvm_vcpu *vcpu,
 static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long cr3, cr4;
+	unsigned long cr3, cr4, evmcs_rsp;
 
 	if (!nested_early_check)
 		return 0;
@@ -2741,12 +2741,21 @@  static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 
 	vmx->__launched = vmx->loaded_vmcs->launched;
 
+	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+		(unsigned long)&current_evmcs->host_rsp : 0;
+
 	asm(
 		/* Set HOST_RSP */
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		"cmp %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
 		"je 1f \n\t"
 		"mov %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
+		/* Avoid VMWRITE when Enlightened VMCS is in use */
+		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
+		"jz 2f \n\t"
+		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
+		"jmp 1f \n\t"
+		"2: \n\t"
 		"mov $%c[HOST_RSP], %%" _ASM_DI " \n\t"
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DI) "\n\t"
 		"1: \n\t"
@@ -2759,8 +2768,8 @@  static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"
-	      : ASM_CALL_CONSTRAINT, "=D"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp),
+	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
+	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
 		[HOST_RSP]"i"(HOST_RSP),
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),