From patchwork Thu Dec 20 20:30:51 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739569
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: 
kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 10/11] KVM: VMX: Add macros to handle HOST_RSP updates at VM-Enter
Date: Thu, 20 Dec 2018 12:30:51 -0800
Message-Id: <20181220203051.23256-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>
X-Mailing-List: kvm@vger.kernel.org

...now that nested_vmx_check_vmentry_hw() conditionally synchronizes RSP
with the {e,}VMCS, i.e. duplicates vmx_vcpu_run()'s esoteric RSP assembly
blob.

Note that VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS "incorrectly" marks RDI as
being clobbered (by sending it to a dummy output param).  RDI needs to be
marked as clobbered in the vmx_vcpu_run() case, but trying to do so by
adding RDI to the clobber list would generate a compiler error due to it
being an input parameter.  Alternatively, vmx_vcpu_run() could manually
specify '"=D"((int){0}),', but creating a subtle dependency on the macro's
internals is more likely to cause problems than clobbering RDI
unnecessarily in nested_vmx_check_vmentry_hw().
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 24 ++++--------------------
 arch/x86/kvm/vmx/vmx.c    | 21 ++++-----------------
 arch/x86/kvm/vmx/vmx.h    | 28 ++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d6d88dfad39b..99a972fac7e3 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2745,21 +2745,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		(unsigned long)&current_evmcs->host_rsp : 0;
 
 	asm(
-		/* Set HOST_RSP */
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		"je 1f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		/* Avoid VMWRITE when Enlightened VMCS is in use */
-		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
-		"jz 2f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
-		"jmp 1f \n\t"
-		"2: \n\t"
-		"mov $%c[HOST_RSP], %%" _ASM_DI " \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DI) "\n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+		VMX_UPDATE_VMCS_HOST_RSP
 
 		/* Check if vmlaunch or vmresume is needed */
 		"cmpl $0, %c[launched](%%" _ASM_CX ")\n\t"
@@ -2768,12 +2754,10 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%%" _ASM_CX ")\n\t"
-	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
-		[HOST_RSP]"i"(HOST_RSP),
+	      : ASM_CALL_CONSTRAINT, VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS
+	      : "c"(vmx), VMX_UPDATE_VMCS_HOST_RSP_INPUTS,
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
-		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		[wordsize]"i"(sizeof(ulong))
+		[fail]"i"(offsetof(struct vcpu_vmx, fail))
 	      : "rax", "cc", "memory"
 	);

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3ecb4c86a240..de709769f2ed 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6126,20 +6126,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"push %%" _ASM_BP " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		"je 1f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		/* Avoid VMWRITE when Enlightened VMCS is in use */
-		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
-		"jz 2f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
-		"jmp 1f \n\t"
-		"2: \n\t"
-		"mov $%c[HOST_RSP], %%" _ASM_DX " \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+
+		VMX_UPDATE_VMCS_HOST_RSP
 
 		/* Reload cr2 if changed */
 		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
@@ -6221,11 +6209,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop %%" _ASM_BP " \n\t"
-	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
+	      : ASM_CALL_CONSTRAINT, VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS
+	      : "c"(vmx), VMX_UPDATE_VMCS_HOST_RSP_INPUTS,
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		[HOST_RSP]"i"(HOST_RSP),
 		[rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])),
 		[rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])),
 		[rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])),

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 2138ddffb1cf..4fa17a7180ed 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -265,6 +265,34 @@ struct kvm_vmx {
 	spinlock_t ept_pointer_lock;
 };
 
+#define VMX_UPDATE_VMCS_HOST_RSP					\
+	/* Temporarily adjust RSP for CALL */				\
+	"sub $%c[stacksize], %%" _ASM_SP "\n\t"				\
+	"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"			\
+	"je 2f \n\t"							\
+	"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"			\
+	/* Avoid VMWRITE when Enlightened VMCS is in use */		\
+	"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"			\
+	"jz 1f \n\t"							\
+	"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"			\
+	"jmp 2f \n\t"							\
+	"1: \n\t"							\
+	"mov $%c[HOST_RSP], %%" _ASM_SI " \n\t"				\
+	__ex("vmwrite %%" _ASM_SP ", %%" _ASM_SI) "\n\t"		\
+	"2: \n\t"							\
+	/* un-adjust RSP */						\
+	"add $%c[stacksize], %%" _ASM_SP "\n\t"
+
+#define VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS				\
+	"=D"((int){0}),							\
+	"=S"((int){0})
+
+#define VMX_UPDATE_VMCS_HOST_RSP_INPUTS					\
+	"D"(&vmx->loaded_vmcs->host_state.rsp),				\
+	"S"(evmcs_rsp),							\
+	[HOST_RSP]"i"(HOST_RSP),					\
+	[stacksize]"i"(sizeof(ulong))
+
 bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_put(struct kvm_vcpu *vcpu);
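For readers less fluent in AT&T-syntax asm, the control flow that
VMX_UPDATE_VMCS_HOST_RSP implements can be sketched in plain C.  This is a
hedged sketch with hypothetical names (the real code must stay in asm,
since the value written has to be the actual RSP at VM-Enter): skip every
write when the cached host RSP already matches, and when an enlightened
VMCS is in use, update its host_rsp field with a plain store instead of a
VMWRITE.

```c
#include <stdint.h>
#include <stdio.h>

#define HOST_RSP 0x6c14	/* VMCS field encoding of HOST_RSP (Intel SDM) */

/* Stub standing in for the real VMWRITE instruction. */
static void vmwrite(uint64_t field, uint64_t value)
{
	printf("vmwrite field=0x%llx value=0x%llx\n",
	       (unsigned long long)field, (unsigned long long)value);
}

/*
 * Hypothetical C-level sketch (not KVM code) of VMX_UPDATE_VMCS_HOST_RSP.
 * cached_host_rsp plays the role of loaded_vmcs->host_state.rsp (RDI) and
 * evmcs_host_rsp plays the role of evmcs_rsp (RSI, NULL when no eVMCS).
 */
static void update_vmcs_host_rsp(uint64_t rsp, uint64_t *cached_host_rsp,
				 uint64_t *evmcs_host_rsp)
{
	if (*cached_host_rsp == rsp)	/* "cmp ... je 2f": RSP unchanged */
		return;
	*cached_host_rsp = rsp;		/* refresh the cached copy */
	if (evmcs_host_rsp)
		*evmcs_host_rsp = rsp;	/* eVMCS path: plain memory store */
	else
		vmwrite(HOST_RSP, rsp);	/* legacy path: VMWRITE HOST_RSP */
}
```

The fast path (cached value already matches) is the common case, which is
why the asm bothers with the cmp/je before touching the VMCS at all.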