Message ID | 20190125154120.19385-14-sean.j.christopherson@intel.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: VMX: Move vCPU-run to proper asm sub-routine |
On Fri, Jan 25, 2019 at 7:41 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Temporarily propagating vmx->loaded_vmcs->launched to vmx->__launched
> is not functionally necessary, but rather was done historically to
> avoid passing both 'vmx' and 'loaded_vmcs' to the vCPU-run asm blob.
> Nested early checks inherited this behavior by virtue of copy+paste.
>
> A future patch will move HOST_RSP caching to be per-VMCS, i.e. store
> 'host_rsp' in the loaded VMCS.  Now that the reference to 'vmx->fail'
> is also gone from nested early checks, referencing 'loaded_vmcs'
> directly means we can drop the 'vmx' reference when introducing
> per-VMCS RSP caching.  And it means __launched can be dropped from
> struct vcpu_vmx if/when vCPU-run receives similar treatment.
>
> Note the use of a named register constraint for 'loaded_vmcs'.  Using
> RCX to hold 'vmx' was inherited from vCPU-run.  In the vCPU-run case,
> the scratch register needs to be explicitly defined as it is crushed
> when loading guest state, i.e. deferring to the compiler would corrupt
> the pointer.  Since nested early checks never loads guest state, it's
> a-ok to let the compiler pick any register.  Naming the constraint
> avoids the fragility of referencing constraints via %1, %2, etc.,
> which breaks horribly when constraints are modified, and generally
> makes the asm blob more readable.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Jim Mattson <jmattson@google.com>
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 15a8ae7bb247..d42134f836a6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2751,8 +2751,6 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		vmx->loaded_vmcs->host_state.cr4 = cr4;
 	}
 
-	vmx->__launched = vmx->loaded_vmcs->launched;
-
 	asm(
 		/* Set HOST_RSP */
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
@@ -2761,7 +2759,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
 
 		/* Check if vmlaunch or vmresume is needed */
-		"cmpb $0, %c[launched](%% " _ASM_CX")\n\t"
+		"cmpb $0, %c[launched](%[loaded_vmcs])\n\t"
 
 		/*
 		 * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set
@@ -2774,7 +2772,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	      CC_SET(be)
 	    : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail)
 	    : "c"(vmx), "d"((unsigned long)HOST_RSP),
-	      [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
+	      [loaded_vmcs]"r"(vmx->loaded_vmcs),
+	      [launched]"i"(offsetof(struct loaded_vmcs, launched)),
 	      [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
 	      [wordsize]"i"(sizeof(ulong))
 	    : "cc", "memory"
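For readers less familiar with the layout the diff relies on, here is a minimal userspace sketch (x86-64, GCC/Clang extended asm) of how a compiler-chosen register holding 'loaded_vmcs', combined with offsetof(struct loaded_vmcs, launched), reproduces the old vmx->__launched check without the staging copy. The struct definitions and the helper below are simplified stand-ins, not the real KVM code.

```c
/*
 * Minimal userspace sketch -- NOT the real KVM structures.  It only shows
 * the addressing shape used in the patch: a named "r" constraint for the
 * loaded_vmcs pointer plus an "i" constraint for the field offset.
 * Assumes x86-64 and GCC/Clang extended asm with flag-output support.
 */
#include <stddef.h>
#include <stdio.h>

struct loaded_vmcs {
	int cpu;                /* placeholder field */
	unsigned char launched; /* has this VMCS already been launched? */
};

struct vcpu_vmx {
	struct loaded_vmcs *loaded_vmcs;
	/* no __launched copy needed once the asm reads loaded_vmcs directly */
};

static _Bool vmcs_launched(const struct vcpu_vmx *vmx)
{
	_Bool launched;

	/*
	 * The compiler picks any free register for %[loaded_vmcs], and
	 * %c[launched] folds in the constant field offset.  ZF is captured
	 * with a flag-output constraint, analogous to the kernel's
	 * CC_SET()/CC_OUT() helpers.
	 */
	asm("cmpb $0, %c[launched](%[loaded_vmcs])"
	    : "=@ccne" (launched)
	    : [loaded_vmcs] "r" (vmx->loaded_vmcs),
	      [launched] "i" (offsetof(struct loaded_vmcs, launched))
	    : "memory");

	return launched;
}

int main(void)
{
	struct loaded_vmcs vmcs = { .cpu = 0, .launched = 1 };
	struct vcpu_vmx vmx = { .loaded_vmcs = &vmcs };

	printf("%s\n", vmcs_launched(&vmx) ? "VMRESUME" : "VMLAUNCH");
	return 0;
}
```

Letting the compiler pick the register is safe here precisely because, unlike the vCPU-run blob, nothing in the nested early-check asm overwrites general-purpose registers before the compare.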
Temporarily propagating vmx->loaded_vmcs->launched to vmx->__launched
is not functionally necessary, but rather was done historically to
avoid passing both 'vmx' and 'loaded_vmcs' to the vCPU-run asm blob.
Nested early checks inherited this behavior by virtue of copy+paste.

A future patch will move HOST_RSP caching to be per-VMCS, i.e. store
'host_rsp' in the loaded VMCS.  Now that the reference to 'vmx->fail'
is also gone from nested early checks, referencing 'loaded_vmcs'
directly means we can drop the 'vmx' reference when introducing
per-VMCS RSP caching.  And it means __launched can be dropped from
struct vcpu_vmx if/when vCPU-run receives similar treatment.

Note the use of a named register constraint for 'loaded_vmcs'.  Using
RCX to hold 'vmx' was inherited from vCPU-run.  In the vCPU-run case,
the scratch register needs to be explicitly defined as it is crushed
when loading guest state, i.e. deferring to the compiler would corrupt
the pointer.  Since nested early checks never loads guest state, it's
a-ok to let the compiler pick any register.  Naming the constraint
avoids the fragility of referencing constraints via %1, %2, etc.,
which breaks horribly when constraints are modified, and generally
makes the asm blob more readable.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
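The fragility argument about %1, %2, etc. is easy to see in isolation. The toy below is a hedged, standalone illustration (again not kernel code) contrasting positional operand references with named ones in GCC-style extended asm; the function names and operands are made up for the example.

```c
/*
 * Positional vs. named asm operand constraints (GCC extended asm, x86-64).
 * Inserting or reordering operands silently renumbers %0/%1/%2, while
 * %[name] references stay valid and self-documenting.
 */
#include <stdio.h>

static long add_positional(long a, long b)
{
	long out;

	/* %0/%1/%2 are assigned by position; adding an operand in the
	 * middle of the list would shift every later reference. */
	asm("lea (%1,%2), %0" : "=r" (out) : "r" (a), "r" (b));
	return out;
}

static long add_named(long a, long b)
{
	long out;

	/* Named operands survive constraint-list edits untouched. */
	asm("lea (%[x],%[y]), %[sum]"
	    : [sum] "=r" (out)
	    : [x] "r" (a), [y] "r" (b));
	return out;
}

int main(void)
{
	printf("%ld %ld\n", add_positional(2, 3), add_named(2, 3));
	return 0;
}
```

If a new input were inserted ahead of 'a' in add_positional(), every %N in the template would have to be renumbered by hand; add_named() needs no such edit, which is the readability and robustness point the commit message makes.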