From patchwork Thu Dec 20 20:25:16 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739555
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář, Miguel Ojeda
Cc: kvm@vger.kernel.org, Andi Kleen, Martin Jambor, Nadav Amit,
    Josh Poimboeuf, Arnd Bergmann, Steven Rostedt, Miroslav Benes
Subject: [PATCH 01/11] KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs
Date: Thu, 20 Dec 2018 12:25:16 -0800
Message-Id: <20181220202518.21442-2-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

Use '%% " _ASM_CX"' instead of '%0' to dereference RCX, i.e. the
'struct vcpu_vmx' pointer, in the VM-Enter asm blobs of vmx_vcpu_run()
and nested_vmx_check_vmentry_hw().  Using the positional parameter '%0'
means that adding or removing an output parameter requires "rewriting"
almost all of the asm blob, which makes it nearly impossible to see
what's actually being changed in even the most minor patches.

Opportunistically improve the code comments.
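For illustration only, here is a minimal sketch of the two referencing
styles using a hypothetical struct (not the actual KVM code):

	#include <stddef.h>

	struct demo_vcpu {
		unsigned long launched;		/* hypothetical field */
	};

	/*
	 * Positional form: the pointer is operand %0, so inserting or
	 * deleting an operand renumbers every "%N" in the blob.
	 */
	static void check_launched_positional(struct demo_vcpu *v)
	{
		asm volatile("cmpl $0, %c[launched](%0)"
			     : : "c"(v),
			       [launched]"i"(offsetof(struct demo_vcpu, launched))
			     : "cc");
	}

	/*
	 * Explicit-register form: the template names RCX directly and
	 * stays stable no matter how the operand list changes.
	 */
	static void check_launched_explicit(struct demo_vcpu *v)
	{
		asm volatile("cmpl $0, %c[launched](%%rcx)"
			     : : "c"(v),
			       [launched]"i"(offsetof(struct demo_vcpu, launched))
			     : "cc");
	}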
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c |  6 +--
 arch/x86/kvm/vmx/vmx.c    | 86 +++++++++++++++++++++------------------
 2 files changed, 50 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 3f019aa63341..43b33cd23ac5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2759,17 +2759,17 @@ static int __noclone nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	asm(
 		/* Set HOST_RSP */
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%0)\n\t"
+		"mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX")\n\t"

 		/* Check if vmlaunch or vmresume is needed */
-		"cmpl $0, %c[launched](%0)\n\t"
+		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
 		"jne 1f\n\t"
 		__ex("vmlaunch") "\n\t"
 		"jmp 2f\n\t"
 		"1: " __ex("vmresume") "\n\t"
 		"2: "
 		/* Set vmx->fail accordingly */
-		"setbe %c[fail](%0)\n\t"
+		"setbe %c[fail](%% " _ASM_CX")\n\t"

 		".pushsection .rodata\n\t"
 		".global vmx_early_consistency_check_return\n\t"
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b8fa74ce3af2..42bfcd28c27b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6124,9 +6124,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
 		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
-		"cmp %%" _ASM_SP ", %c[host_rsp](%0) \n\t"
+		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		"je 1f \n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%0) \n\t"
+		"mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		/* Avoid VMWRITE when Enlightened VMCS is in use */
 		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
 		"jz 2f \n\t"
@@ -6136,32 +6136,33 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
 		"1: \n\t"
 		/* Reload cr2 if changed */
-		"mov %c[cr2](%0), %%" _ASM_AX " \n\t"
+		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
 		"mov %%cr2, %%" _ASM_DX " \n\t"
 		"cmp %%" _ASM_AX ", %%" _ASM_DX " \n\t"
 		"je 3f \n\t"
 		"mov %%" _ASM_AX", %%cr2 \n\t"
 		"3: \n\t"
 		/* Check if vmlaunch or vmresume is needed */
-		"cmpl $0, %c[launched](%0) \n\t"
+		"cmpl $0, %c[launched](%%" _ASM_CX ") \n\t"
 		/* Load guest registers.  Don't clobber flags. */
-		"mov %c[rax](%0), %%" _ASM_AX " \n\t"
-		"mov %c[rbx](%0), %%" _ASM_BX " \n\t"
-		"mov %c[rdx](%0), %%" _ASM_DX " \n\t"
-		"mov %c[rsi](%0), %%" _ASM_SI " \n\t"
-		"mov %c[rdi](%0), %%" _ASM_DI " \n\t"
-		"mov %c[rbp](%0), %%" _ASM_BP " \n\t"
+		"mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
+		"mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
+		"mov %c[rdx](%%" _ASM_CX "), %%" _ASM_DX " \n\t"
+		"mov %c[rsi](%%" _ASM_CX "), %%" _ASM_SI " \n\t"
+		"mov %c[rdi](%%" _ASM_CX "), %%" _ASM_DI " \n\t"
+		"mov %c[rbp](%%" _ASM_CX "), %%" _ASM_BP " \n\t"
 #ifdef CONFIG_X86_64
-		"mov %c[r8](%0),  %%r8  \n\t"
-		"mov %c[r9](%0),  %%r9  \n\t"
-		"mov %c[r10](%0), %%r10 \n\t"
-		"mov %c[r11](%0), %%r11 \n\t"
-		"mov %c[r12](%0), %%r12 \n\t"
-		"mov %c[r13](%0), %%r13 \n\t"
-		"mov %c[r14](%0), %%r14 \n\t"
-		"mov %c[r15](%0), %%r15 \n\t"
+		"mov %c[r8](%%" _ASM_CX "),  %%r8  \n\t"
+		"mov %c[r9](%%" _ASM_CX "),  %%r9  \n\t"
+		"mov %c[r10](%%" _ASM_CX "), %%r10 \n\t"
+		"mov %c[r11](%%" _ASM_CX "), %%r11 \n\t"
+		"mov %c[r12](%%" _ASM_CX "), %%r12 \n\t"
+		"mov %c[r13](%%" _ASM_CX "), %%r13 \n\t"
+		"mov %c[r14](%%" _ASM_CX "), %%r14 \n\t"
+		"mov %c[r15](%%" _ASM_CX "), %%r15 \n\t"
 #endif
-		"mov %c[rcx](%0), %%" _ASM_CX " \n\t" /* kills %0 (ecx) */
+		/* Load guest RCX.  This kills the vmx_vcpu pointer! */
+		"mov %c[rcx](%%" _ASM_CX "), %%" _ASM_CX " \n\t"

 		/* Enter guest mode */
 		"jne 1f \n\t"
@@ -6169,26 +6170,33 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"jmp 2f \n\t"
 		"1: " __ex("vmresume") "\n\t"
 		"2: "
-		/* Save guest registers, load host registers, keep flags */
-		"mov %0, %c[wordsize](%%" _ASM_SP ") \n\t"
-		"pop %0 \n\t"
-		"setbe %c[fail](%0)\n\t"
-		"mov %%" _ASM_AX ", %c[rax](%0) \n\t"
-		"mov %%" _ASM_BX ", %c[rbx](%0) \n\t"
-		__ASM_SIZE(pop) " %c[rcx](%0) \n\t"
-		"mov %%" _ASM_DX ", %c[rdx](%0) \n\t"
-		"mov %%" _ASM_SI ", %c[rsi](%0) \n\t"
-		"mov %%" _ASM_DI ", %c[rdi](%0) \n\t"
-		"mov %%" _ASM_BP ", %c[rbp](%0) \n\t"
+
+		/* Save guest's RCX to the stack placeholder (see above) */
+		"mov %%" _ASM_CX ", %c[wordsize](%%" _ASM_SP ") \n\t"
+
+		/* Load host's RCX, i.e. the vmx_vcpu pointer */
+		"pop %%" _ASM_CX " \n\t"
+
+		/* Set vmx->fail based on EFLAGS.{CF,ZF} */
+		"setbe %c[fail](%%" _ASM_CX ")\n\t"
+
+		/* Save all guest registers, including RCX from the stack */
+		"mov %%" _ASM_AX ", %c[rax](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_BX ", %c[rbx](%%" _ASM_CX ") \n\t"
+		__ASM_SIZE(pop) " %c[rcx](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_DX ", %c[rdx](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_SI ", %c[rsi](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_DI ", %c[rdi](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_BP ", %c[rbp](%%" _ASM_CX ") \n\t"
 #ifdef CONFIG_X86_64
-		"mov %%r8,  %c[r8](%0) \n\t"
-		"mov %%r9,  %c[r9](%0) \n\t"
-		"mov %%r10, %c[r10](%0) \n\t"
-		"mov %%r11, %c[r11](%0) \n\t"
-		"mov %%r12, %c[r12](%0) \n\t"
-		"mov %%r13, %c[r13](%0) \n\t"
-		"mov %%r14, %c[r14](%0) \n\t"
-		"mov %%r15, %c[r15](%0) \n\t"
+		"mov %%r8,  %c[r8](%%" _ASM_CX ") \n\t"
+		"mov %%r9,  %c[r9](%%" _ASM_CX ") \n\t"
+		"mov %%r10, %c[r10](%%" _ASM_CX ") \n\t"
+		"mov %%r11, %c[r11](%%" _ASM_CX ") \n\t"
+		"mov %%r12, %c[r12](%%" _ASM_CX ") \n\t"
+		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
+		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
+		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
 	/*
 	 * Clear host registers marked as clobbered to prevent
 	 * speculative use.
 	 */
@@ -6203,7 +6211,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%r15d, %%r15d \n\t"
 #endif
 		"mov %%cr2, %%" _ASM_AX "   \n\t"
-		"mov %%" _ASM_AX ", %c[cr2](%0) \n\t"
+		"mov %%" _ASM_AX ", %c[cr2](%%" _ASM_CX ") \n\t"

 		"xor %%eax, %%eax \n\t"
 		"xor %%ebx, %%ebx \n\t"

From patchwork Thu Dec 20 20:25:17 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739553
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář, Miguel Ojeda
Cc: kvm@vger.kernel.org, Andi Kleen, Martin Jambor, Nadav Amit,
    Josh Poimboeuf, Arnd Bergmann, Steven Rostedt, Miroslav Benes
Subject: [PATCH 02/11] KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines
Date: Thu, 20 Dec 2018 12:25:17 -0800
Message-Id: <20181220202518.21442-3-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

Transitioning to/from a VMX guest requires KVM to manually save/load
the bulk of CPU state that the guest is allowed to directly access,
e.g. XSAVE state, CR2, GPRs, etc...  For obvious reasons, loading the
guest's GPR snapshot prior to VM-Enter and saving the snapshot after
VM-Exit is done via handcoded assembly.  The assembly blob is written
as inline asm so that it can easily access KVM-defined structs that
are used to hold guest state, e.g. moving the blob to a standalone
assembly file would require generating defines for struct offsets.
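For reference, generating such defines would go through the kernel's
asm-offsets machinery, roughly like this (the symbol names below are
hypothetical; nothing in this series adds them):

	/* In an asm-offsets.c-style generator: */
	#include <linux/kbuild.h>

	void common(void)
	{
		OFFSET(VMX_VCPU_FAIL, vcpu_vmx, fail);
		OFFSET(VMX_VCPU_RAX, vcpu_vmx,
		       vcpu.arch.regs[VCPU_REGS_RAX]);
	}

A standalone .S file could then reference the fields as, e.g.,
"mov VMX_VCPU_RAX(%rcx), %rax".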
The other relevant aspect of VMX transitions in KVM is the handling of
VM-Exits.  KVM doesn't employ a separate VM-Exit handler per se, but
rather treats the VMX transition as a mega instruction (with many side
effects), i.e. sets VMCS.HOST_RIP to a label immediately following
VMLAUNCH/VMRESUME.  The label is then exposed to C code via a global
variable definition in the inline assembly.

Because of the global variable, KVM takes steps to (attempt to) ensure
only a single instance of the owning C function, e.g. vmx_vcpu_run, is
generated by the compiler.  The earliest approach placed the inline
assembly in a separate noinline function[1].  Later, the assembly was
folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
is still used today.

After moving to __noclone, an edge case was encountered where GCC's
-ftracer optimization resulted in the inline assembly blob being
duplicated.  This was "fixed" by explicitly disabling -ftracer in the
__noclone definition[4].  Recently, it was found that disabling
-ftracer causes build warnings for unsuspecting users of __noclone[5],
and more importantly for KVM, prevents the compiler from properly
optimizing vmx_vcpu_run()[6].  And perhaps most importantly of all, it
was pointed out that there is no way to prevent duplication of a
function with 100% reliability[7], i.e. more edge cases may be
encountered in the future.

So to summarize, the only way to prevent the compiler from duplicating
the global variable definition is to move the variable out of inline
assembly, which has been suggested several times over[1][7][8].

Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
VM-Exit "handler" to standalone assembly sub-routines.  Moving only
the core VMX transition code allows the struct indexing to remain as
inline assembly and also allows the sub-routines to be used by
nested_vmx_check_vmentry_hw().  Reusing the sub-routines has a happy
side-effect of eliminating two VMWRITEs in the nested_early_check path
as there is no longer a need to dynamically change VMCS.HOST_RIP.

Note that callers of vmx_vmenter() must account for the CALL modifying
RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
VMCS.HOST_RSP and "restore" RSP prior to the CALL.  There are no great
alternatives to fudging RSP.  Saving RSP in vmx_vmenter() is difficult
because doing so requires a second register (VMWRITE does not provide
an immediate encoding for the VMCS field, and KVM supports Hyper-V's
memory-based eVMCS ABI).  The other, more drastic, alternative would
be to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
variable (which can be encoded as e.g. gs:[imm]).  But because a valid
stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
could theoretically insert INT3/INT1 (ICEBP) at the VM-Exit handler),
a dedicated per-cpu VM-Exit stack would be required.  A dedicated
stack isn't difficult to implement, but it would require at least one
page per CPU and knowledge of the stack in the dumpstack routines.
And in most cases there is essentially zero overhead in dynamically
updating VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but
the first VMLAUNCH unless nested_early_check=1, which is not a fast
path.  In other words, avoiding the VMCS.HOST_RSP VMWRITE by using a
dedicated stack would only make the code marginally less ugly while
requiring at least one page per CPU and forcing the kernel to be aware
(and approve) of the VM-Exit stack shenanigans.
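To make the RSP fudging concrete, an annotated sketch of the call
sequence (64-bit only, simplified from the actual blob):

	/*
	 * CALL pushes a return address, so the RSP value in effect at
	 * VM-Exit is one word lower than it is here.  Synchronize
	 * VMCS.HOST_RSP with that adjusted value so that, at VM-Exit,
	 * the CALL's return address sits on top of the stack and
	 * vmx_vmexit's RET lands back in this asm blob.
	 */
	sub	$8, %rsp	/* account for CALL's push */
	vmwrite	%rsp, %rdx	/* %rdx holds the HOST_RSP field encoding */
	add	$8, %rsp	/* restore RSP for the CALL itself */
	call	vmx_vmenter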
[1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
[2] a3b5ba49a8c5 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
[3] 104f226bfd0a ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
[4] 95272c29378e ("compiler-gcc: disable -ftracer for __noclone functions")
[5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
[6] https://patchwork.kernel.org/patch/8707981/#21817015
[7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
[8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com

Suggested-by: Andi Kleen
Suggested-by: Martin Jambor
Cc: Paolo Bonzini
Cc: Nadav Amit
Cc: Andi Kleen
Cc: Josh Poimboeuf
Cc: Martin Jambor
Cc: Arnd Bergmann
Cc: Steven Rostedt
Cc: Miroslav Benes
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/Makefile      |  2 +-
 arch/x86/kvm/vmx/nested.c  | 30 +++++++-------------
 arch/x86/kvm/vmx/vmenter.S | 57 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c     | 22 +++++++--------
 arch/x86/kvm/vmx/vmx.h     |  1 -
 5 files changed, 78 insertions(+), 34 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/vmenter.S

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 83dc7d6a0294..69b3a7c30013 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-y += x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
 	   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 	   hyperv.o page_track.o debugfs.o

-kvm-intel-y += vmx/vmx.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o
+kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o vmx/evmcs.o vmx/nested.o
 kvm-amd-y += svm.o pmu_amd.o

 obj-$(CONFIG_KVM) += kvm.o

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 43b33cd23ac5..33235fc0a8fc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -19,8 +19,6 @@ module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
 static bool __read_mostly nested_early_check = 0;
 module_param(nested_early_check, bool, S_IRUGO);

-extern const ulong vmx_early_consistency_check_return;
-
 /*
  * Hyper-V requires all of these, so mark them as supported even though
  * they are just treated the same as all-context.
@@ -2715,7 +2713,7 @@ static int nested_vmx_check_vmentry_postreqs(struct kvm_vcpu *vcpu,
 	return 0;
 }

-static int __noclone nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
@@ -2740,8 +2738,6 @@ static int __noclone nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	 */
 	vmcs_writel(GUEST_RFLAGS, 0);

-	vmcs_writel(HOST_RIP, vmx_early_consistency_check_return);
-
 	cr3 = __get_current_cr3_fast();
 	if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) {
 		vmcs_writel(HOST_CR3, cr3);
@@ -2758,33 +2754,27 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	asm(
 		/* Set HOST_RSP */
+		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX")\n\t"
+		"mov %%" _ASM_SP ", %c[host_rsp](%1)\n\t"
+		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */

 		/* Check if vmlaunch or vmresume is needed */
 		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
-		"jne 1f\n\t"
-		__ex("vmlaunch") "\n\t"
-		"jmp 2f\n\t"
-		"1: " __ex("vmresume") "\n\t"
-		"2: "
+
+		"call vmx_vmenter\n\t"
+
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"
-
-		".pushsection .rodata\n\t"
-		".global vmx_early_consistency_check_return\n\t"
-		"vmx_early_consistency_check_return: " _ASM_PTR " 2b\n\t"
-		".popsection"
-		:
+		: ASM_CALL_CONSTRAINT
 		: "c"(vmx), "d"((unsigned long)HOST_RSP),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp))
+		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
+		  [wordsize]"i"(sizeof(ulong))
 		: "rax", "cc", "memory"
 	);

-	vmcs_writel(HOST_RIP, vmx_return);
-
 	preempt_enable();

 	if (vmx->msr_autoload.host.nr)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
new file mode 100644
index 000000000000..bcef2c7e9bc4
--- /dev/null
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/asm.h>
+
+	.text
+
+/**
+ * vmx_vmenter - VM-Enter the current loaded VMCS
+ *
+ * %RFLAGS.ZF:	!VMCS.LAUNCHED, i.e. controls VMLAUNCH vs. VMRESUME
+ *
+ * Returns:
+ *	%RFLAGS.CF is set on VM-Fail Invalid
+ *	%RFLAGS.ZF is set on VM-Fail Valid
+ *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+ *
+ * Note that VMRESUME/VMLAUNCH fall-through and return directly if
+ * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump
+ * to vmx_vmexit.
+ */
+ENTRY(vmx_vmenter)
+	/* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */
+	je 2f
+
+1:	vmresume
+	ret
+
+2:	vmlaunch
+	ret
+
+3:	cmpb $0, kvm_rebooting
+	jne 4f
+	call kvm_spurious_fault
+4:	ret
+
+	.pushsection .fixup, "ax"
+5:	jmp 3b
+	.popsection
+
+	_ASM_EXTABLE(1b, 5b)
+	_ASM_EXTABLE(2b, 5b)
+
+ENDPROC(vmx_vmenter)
+
+/**
+ * vmx_vmexit - Handle a VMX VM-Exit
+ *
+ * Returns:
+ *	%RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit
+ *
+ * This is vmx_vmenter's partner in crime.  On a VM-Exit, control will jump
+ * here after hardware loads the host's state, i.e. this is the destination
+ * referred to by VMCS.HOST_RIP.
+ */
+ENTRY(vmx_vmexit)
+	ret
+ENDPROC(vmx_vmexit)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 42bfcd28c27b..bd7f45fafab6 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -325,6 +325,8 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var);
 static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
 							  u32 msr, int type);

+void vmx_vmexit(void);
+
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 DEFINE_PER_CPU(struct vmcs *, current_vmcs);
 /*
@@ -3473,7 +3475,7 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
 	vmcs_writel(HOST_IDTR_BASE, dt.address);   /* 22.2.4 */
 	vmx->host_idt_base = dt.address;

-	vmcs_writel(HOST_RIP, vmx_return); /* 22.2.5 */
+	vmcs_writel(HOST_RIP, (unsigned long)vmx_vmexit); /* 22.2.5 */

 	rdmsr(MSR_IA32_SYSENTER_CS, low32, high32);
 	vmcs_write32(HOST_IA32_SYSENTER_CS, low32);
@@ -6046,7 +6048,7 @@ static void vmx_update_hv_timer(struct kvm_vcpu *vcpu)
 	vmx->loaded_vmcs->hv_timer_armed = false;
 }

-static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4, evmcs_rsp;
@@ -6124,6 +6126,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
 		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
+		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		"je 1f \n\t"
 		"mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
@@ -6135,6 +6138,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"2: \n\t"
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
 		"1: \n\t"
+		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+
 		/* Reload cr2 if changed */
 		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
 		"mov %%cr2, %%" _ASM_DX " \n\t"
@@ -6165,11 +6170,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"mov %c[rcx](%%" _ASM_CX "), %%" _ASM_CX " \n\t"

 		/* Enter guest mode */
-		"jne 1f \n\t"
-		__ex("vmlaunch") "\n\t"
-		"jmp 2f \n\t"
-		"1: " __ex("vmresume") "\n\t"
-		"2: "
+		"call vmx_vmenter\n\t"

 		/* Save guest's RCX to the stack placeholder (see above) */
 		"mov %%" _ASM_CX ", %c[wordsize](%%" _ASM_SP ") \n\t"
@@ -6218,11 +6219,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
-		".pushsection .rodata \n\t"
-		".global vmx_return \n\t"
-		"vmx_return: " _ASM_PTR " 2b \n\t"
-		".popsection"
-		: : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+		: ASM_CALL_CONSTRAINT
+		: "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
 		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index f932d7c971e9..ab15a905b71b 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -11,7 +11,6 @@
 #include "vmcs.h"

 extern const u32 vmx_msr_index[];
-extern const ulong vmx_return;
 extern u64 host_efer;

 #define MSR_TYPE_R	1
From patchwork Thu Dec 20 20:25:18 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739551
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář, Miguel Ojeda
Cc: kvm@vger.kernel.org, Andi Kleen, Martin Jambor, Nadav Amit,
    Josh Poimboeuf, Arnd Bergmann, Steven Rostedt, Miroslav Benes
Subject: [PATCH 03/11] Revert "compiler-gcc: disable -ftracer for __noclone functions"
Date: Thu, 20 Dec 2018 12:25:18 -0800
Message-Id: <20181220202518.21442-4-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

The -ftracer optimization was disabled in __noclone as a workaround to
GCC duplicating a blob of inline assembly that happened to define a
global variable.  It has been pointed out that no amount of workarounds
can guarantee the compiler won't duplicate inline assembly[1], and that
disabling the -ftracer optimization has several unintended and nasty
side effects[2][3].

Now that the offending KVM code which required the workaround has been
properly fixed and no longer uses __noclone, remove the -ftracer
optimization tweak from __noclone.

[1] https://lore.kernel.org/lkml/ri6y38lo23g.fsf@suse.cz/T/#u
[2] https://lore.kernel.org/lkml/20181218140105.ajuiglkpvstt3qxs@treble/T/#u
[3] https://patchwork.kernel.org/patch/8707981/#21817015

This reverts commit 95272c29378ee7dc15f43fa2758cb28a5913a06d.
Suggested-by: Andi Kleen
Cc: Paolo Bonzini
Cc: Nadav Amit
Cc: Andi Kleen
Cc: Josh Poimboeuf
Cc: Martin Jambor
Cc: Arnd Bergmann
Cc: Steven Rostedt
Cc: Miroslav Benes
Signed-off-by: Sean Christopherson
Reviewed-by: Miguel Ojeda
Reviewed-by: Miroslav Benes
---
 include/linux/compiler_attributes.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index f8c400ba1929..f3e16fc9a5df 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -163,17 +163,11 @@
 /*
  * Optional: not supported by clang
- * Note: icc does not recognize gcc's no-tracer
  *
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-noclone-function-attribute
- * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-optimize-function-attribute
  */
 #if __has_attribute(__noclone__)
-# if __has_attribute(__optimize__)
-#  define __noclone __attribute__((__noclone__, __optimize__("no-tracer")))
-# else
-#  define __noclone __attribute__((__noclone__))
-# endif
+# define __noclone __attribute__((__noclone__))
 #else
 # define __noclone
 #endif

From patchwork Thu Dec 20 20:30:26 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739557
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 04/11] KVM: VMX: Modify only RSP when creating a placeholder for guest's RCX
Date: Thu, 20 Dec 2018 12:30:26 -0800
Message-Id: <20181220203026.22998-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

In vmx_vcpu_run(), the guest's RCX is temporarily saved onto the stack
after VM-Exit, as the host's RCX needs to be reloaded before guest
registers can be saved to struct vcpu_vmx (host RCX points at said
struct).  Since the stack usage is (1) save host, (2) save guest,
(3) load host and (4) load guest, the code can't conform to the
stack's natural LIFO semantics, i.e. it can't simply use PUSH/POP.
Regardless of whether it is done for the host's RCX or the guest's
RCX, at some point the code needs to manually adjust RSP and save/load
to/from the stack using e.g. MOV.

vmx_vcpu_run() opts to create a placeholder on the stack for guest's
RCX (adjust RSP) and save RCX to its place immediately after VM-Exit.
In other words, the purpose of the first 'PUSH RCX' at the start of
vmx_vcpu_run()'s assembly blob is to adjust RSP down, i.e. there's no
need to actually access memory.

Use 'SUB $wordsize, RSP' instead of 'PUSH RCX' to make it more obvious
that the intent is simply to create a gap on the stack for the guest's
RCX.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bd7f45fafab6..5d07d385b637 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6124,7 +6124,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	asm(
 		/* Store host registers */
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
-		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
+		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
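For reference, a simplified sketch of the resulting stack usage in
vmx_vcpu_run() (64-bit, not the literal blob):

	sub	$8, %rsp	/* placeholder for guest RCX, no data yet */
	push	%rcx		/* save host RCX (the vcpu_vmx pointer) */
	...
	call	vmx_vmenter	/* VM-Enter ... VM-Exit */
	mov	%rcx, 8(%rsp)	/* stash guest RCX in the placeholder */
	pop	%rcx		/* reload host RCX (vcpu_vmx pointer) */
	popq	n(%rcx)		/* pop guest RCX into vcpu_vmx; 'n' is the
				   struct offset, elided here */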
From patchwork Thu Dec 20 20:30:35 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739559
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson, Vitaly Kuznetsov
Subject: [PATCH 05/11] KVM: VMX: Save RSI to an unused output in vmx_vcpu_run() asm blob
Date: Thu, 20 Dec 2018 12:30:35 -0800
Message-Id: <20181220203035.23041-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

RSI is clobbered by the VM-Enter asm blob in vmx_vcpu_run(), but it's
not marked as such, probably because GCC doesn't allow marking input
operands as clobbered.  "Save" RSI to a dummy output so that GCC
recognizes it as being clobbered.

Note that this has a funky dependency on removing the 'vmx_return'
global variable definition from vmx_vcpu_run()'s asm blob (done in a
prior patch), as attempting to declare RSI as an output would generate
a cryptic compiler error that magically disappeared when vmx_return
was eliminated.

Fixes: 773e8a0425c9 ("x86/kvm: use Enlightened VMCS when running on Hyper-V")
Cc: Vitaly Kuznetsov
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5d07d385b637..1821fb9ac009 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6219,7 +6219,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
-		: ASM_CALL_CONSTRAINT
+		: ASM_CALL_CONSTRAINT, "=S"((int){0})
 		: "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
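The dummy-output idiom in isolation (a standalone sketch, not KVM
code):

	static void clobber_rsi_demo(void)
	{
		unsigned long val = 42;

		/*
		 * 'val' is pinned to RSI as an input and the asm
		 * overwrites RSI.  Inputs can't appear in the clobber
		 * list, so tie a throwaway output to the same register
		 * to tell GCC that RSI is dead after the asm runs.
		 */
		asm("xor %%rsi, %%rsi"		/* clobbers RSI */
		    : "=S"((int){0})		/* dummy output */
		    : "S"(val)
		    : "cc");
	}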
From patchwork Thu Dec 20 20:30:41 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739561
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 06/11] KVM: VMX: Manually load RDX in vmx_vcpu_run() asm blob
Date: Thu, 20 Dec 2018 12:30:41 -0800
Message-Id: <20181220203041.23084-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

Load RDX with the HOST_RSP field enum on demand instead of having the
compiler load it as an input.  In addition to saving one whole MOV
instruction, this allows RDX to be properly clobbered (in a future
patch) instead of being saved/loaded to/from the stack.

Despite nested_vmx_check_vmentry_hw() having similar code, leave it
alone.  In that case, RDX is unconditionally used and isn't clobbered,
i.e. sending in HOST_RSP as an input is simpler.

Note that because HOST_RSP is an enum and not a define, it must be
passed in as an immediate ("i") operand instead of being stringified
via __stringify(HOST_RSP).  The naming "conflict" between host_rsp and
HOST_RSP is slightly confusing, but the former will (hopefully) be
going away soon, at which point HOST_RSP is absolutely what is
desired.
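The %c-immediate idiom in isolation (a standalone sketch; DEMO_FIELD
is a made-up stand-in for the HOST_RSP encoding):

	enum { DEMO_FIELD = 0x6c14 };	/* hypothetical field encoding */

	static void load_field_enum(void)
	{
		/*
		 * __stringify() only works on macros.  An enum value
		 * must be fed in as an "i" (immediate) operand and
		 * printed with the %c modifier, which omits the '$'
		 * prefix so it can be written explicitly.
		 */
		asm("mov $%c[field], %%rdx"
		    : : [field]"i"(DEMO_FIELD)
		    : "rdx");
	}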
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1821fb9ac009..07c7fc8e5ddb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6136,6 +6136,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
 		"jmp 1f \n\t"
 		"2: \n\t"
+		"mov $%c[HOST_RSP], %%" _ASM_DX " \n\t"
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
 		"1: \n\t"
 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
@@ -6220,10 +6221,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%edi, %%edi \n\t"
 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
 		: ASM_CALL_CONSTRAINT, "=S"((int){0})
-		: "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+		: "c"(vmx), "S"(evmcs_rsp),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
 		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
+		  [HOST_RSP]"i"(HOST_RSP),
 		  [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])),
 		  [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])),
 		  [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])),

From patchwork Thu Dec 20 20:30:43 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739563
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 07/11] KVM: VMX: Let the compiler save/load RDX around VM-Enter
Date: Thu, 20 Dec 2018 12:30:43 -0800
Message-Id: <20181220203043.23127-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>
Per commit c20363006af6 ("KVM: VMX: Let gcc to choose which registers
to save (x86_64)"), the only reason RDX is saved/loaded to/from the
stack is because it was specified as an input, i.e. couldn't be marked
as clobbered (ignoring the fact that "saving" it to a dummy output
would indirectly mark it as clobbered).

Now that RDX is no longer an input, mark it as clobbered and zero it
out to prevent speculative use.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 07c7fc8e5ddb..3f144a7fcfdb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6123,7 +6123,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	asm(
 		/* Store host registers */
-		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
+		"push %%" _ASM_BP " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
@@ -6217,9 +6217,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)

 		"xor %%eax, %%eax \n\t"
 		"xor %%ebx, %%ebx \n\t"
+		"xor %%edx, %%edx \n\t"
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
-		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
+		"pop  %%" _ASM_BP " \n\t"
 		: ASM_CALL_CONSTRAINT, "=S"((int){0})
 		: "c"(vmx), "S"(evmcs_rsp),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
@@ -6247,10 +6248,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		  [wordsize]"i"(sizeof(ulong))
 		: "cc", "memory"
 #ifdef CONFIG_X86_64
-		, "rax", "rbx", "rdi"
+		, "rax", "rbx", "rdx", "rdi"
 		, "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15"
 #else
-		, "eax", "ebx", "edi"
+		, "eax", "ebx", "edx", "edi"
 #endif
 	      );
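The clobber-plus-zeroing pattern in isolation (a standalone sketch,
not KVM code):

	static void scratch_rdx_demo(void)
	{
		/*
		 * RDX is pure scratch inside the asm: zero it before
		 * the asm ends so stale data can't be consumed
		 * speculatively, and list it as a clobber so the
		 * compiler saves/restores it only if it is live.
		 */
		asm("mov $1, %%rdx \n\t"	/* scratch use */
		    "xor %%edx, %%edx"		/* zero before exiting asm */
		    : : : "rdx", "cc");
	}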
From patchwork Thu Dec 20 20:30:47 2018
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10739565
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 08/11] KVM: nVMX: Cache host_rsp on a per-VMCS basis
Date: Thu, 20 Dec 2018 12:30:47 -0800
Message-Id: <20181220203047.23170-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

Currently, host_rsp is cached on a per-vCPU basis, i.e. it's stored in
struct vcpu_vmx.  In non-nested usage the caching is for all intents
and purposes 100% effective, e.g. only the first VMLAUNCH needs to
synchronize VMCS.HOST_RSP since the call stack to vmx_vcpu_run() is
identical each and every time.  But when running a nested guest, KVM
must invalidate the cache when switching the current VMCS as it can't
guarantee the new VMCS has the same HOST_RSP as the previous VMCS.  In
other words, the cache loses almost all of its efficacy when running a
nested VM.

Move host_rsp to struct vmcs_host_state, which is per-VMCS, so that it
is cached on a per-VMCS basis and restores its 100% hit rate when
nested VMs are in play.

Note that the host_rsp cache for vmcs02 essentially "breaks" when
nested early checks are enabled as nested_vmx_check_vmentry_hw() will
see a different RSP at the time of its VM-Enter.  While it's possible
to avoid even that VMCS.HOST_RSP synchronization, e.g. by employing a
dedicated VM-Exit stack, there is little motivation for doing so as
the overhead of two VMWRITEs (~55 cycles) is dwarfed by the overhead
of the extra VMX transition (600+ cycles) and is a proverbial drop in
the ocean relative to the total cost of a nested transition (10s of
thousands of cycles).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 28 +++++++++-------------------
 arch/x86/kvm/vmx/vmcs.h   |  1 +
 arch/x86/kvm/vmx/vmx.c    | 13 ++++++-------
 arch/x86/kvm/vmx/vmx.h    |  1 -
 4 files changed, 16 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 33235fc0a8fc..9be3156f972d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1978,17 +1978,6 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 	if (vmx->nested.dirty_vmcs12 || vmx->nested.hv_evmcs)
 		prepare_vmcs02_early_full(vmx, vmcs12);

-	/*
-	 * HOST_RSP is normally set correctly in vmx_vcpu_run() just before
-	 * entry, but only if the current (host) sp changed from the value
-	 * we wrote last (vmx->host_rsp).  This cache is no longer relevant
-	 * if we switch vmcs, and rather than hold a separate cache per vmcs,
-	 * here we just force the write to happen on entry.  host_rsp will
-	 * also be written unconditionally by nested_vmx_check_vmentry_hw()
-	 * if we are doing early consistency checks via hardware.
-	 */
-	vmx->host_rsp = 0;
-
 	/*
 	 * PIN CONTROLS
 	 */
@@ -2755,8 +2744,12 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	asm(
 		/* Set HOST_RSP */
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%1)\n\t"
+		"cmp %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
+		"je 1f \n\t"
+		"mov %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
+		"mov $%c[HOST_RSP], %%" _ASM_DI " \n\t"
+		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DI) "\n\t"
+		"1: \n\t"
 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */

 		/* Check if vmlaunch or vmresume is needed */
@@ -2766,11 +2759,11 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)

 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"
-		: ASM_CALL_CONSTRAINT
-		: "c"(vmx), "d"((unsigned long)HOST_RSP),
+		: ASM_CALL_CONSTRAINT, "=D"((int){0})
+		: "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp),
+		  [HOST_RSP]"i"(HOST_RSP),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
 		  [wordsize]"i"(sizeof(ulong))
 		: "rax", "cc", "memory"
 	);
@@ -3904,9 +3897,6 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		vmx_flush_tlb(vcpu, true);
 	}

-	/* This is needed for same reason as it was needed in prepare_vmcs02 */
-	vmx->host_rsp = 0;
-
 	/* Unpin physical memory we referred to in vmcs02 */
 	if (vmx->nested.apic_access_page) {
 		kvm_release_page_dirty(vmx->nested.apic_access_page);

diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
index 6def3ba88e3b..cb6079f8a227 100644
--- a/arch/x86/kvm/vmx/vmcs.h
+++ b/arch/x86/kvm/vmx/vmcs.h
@@ -34,6 +34,7 @@ struct vmcs_host_state {
 	unsigned long cr4;	/* May not match real cr4 */
 	unsigned long gs_base;
 	unsigned long fs_base;
+	unsigned long rsp;

 	u16           fs_sel, gs_sel, ldt_sel;
 #ifdef CONFIG_X86_64

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3f144a7fcfdb..3ecb4c86a240 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6127,9 +6127,9 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
+		"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
 		"je 1f \n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
 		/* Avoid VMWRITE when Enlightened VMCS is in use */
 		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
 		"jz 2f \n\t"
@@ -6221,11 +6221,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop  %%" _ASM_BP " \n\t"
-		: ASM_CALL_CONSTRAINT, "=S"((int){0})
-		: "c"(vmx), "S"(evmcs_rsp),
+		: ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
+		: "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
 		  [launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		  [fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		  [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
 		  [HOST_RSP]"i"(HOST_RSP),
 		  [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])),
 		  [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])),
@@ -6248,10 +6247,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		  [wordsize]"i"(sizeof(ulong))
 		: "cc", "memory"
 #ifdef CONFIG_X86_64
-		, "rax", "rbx", "rdx", "rdi"
+		, "rax", "rbx", "rdx"
 		, "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15"
 #else
"eax", "ebx", "edx", "edi" + , "eax", "ebx", "edx" #endif ); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index ab15a905b71b..2138ddffb1cf 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -155,7 +155,6 @@ struct nested_vmx { struct vcpu_vmx { struct kvm_vcpu vcpu; - unsigned long host_rsp; u8 fail; u8 msr_bitmap_mode; u32 exit_intr_info; From patchwork Thu Dec 20 20:30:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10739567 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A05706C5 for ; Thu, 20 Dec 2018 20:30:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9184D28CF8 for ; Thu, 20 Dec 2018 20:30:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 85BFB28CFD; Thu, 20 Dec 2018 20:30:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3C46728CF8 for ; Thu, 20 Dec 2018 20:30:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389512AbeLTUaw (ORCPT ); Thu, 20 Dec 2018 15:30:52 -0500 Received: from mga14.intel.com ([192.55.52.115]:1126 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732286AbeLTUav (ORCPT ); Thu, 20 Dec 2018 15:30:51 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 20 Dec 2018 12:30:51 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,378,1539673200"; d="scan'208";a="303880185" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.154]) by fmsmga006.fm.intel.com with ESMTP; 20 Dec 2018 12:30:51 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Sean Christopherson , Vitaly Kuznetsov Subject: [PATCH 09/11] KVM: nVMX: Add eVMCS support to nested_vmx_check_vmentry_hw() Date: Thu, 20 Dec 2018 12:30:49 -0800 Message-Id: <20181220203049.23213-1-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com> References: <20181220202518.21442-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Adding eVMCS support to nested early checks makes the RSP shenanigans in vmx_vcpu_run() and nested_vmx_check_vmentry_hw() more or less identical. This will allow encapsulating the shenanigans in a set of helper macros to reduce the maintenance burden and prettify the code. Note that while this technically "fixes" eVMCS support, there isn't a known use case for nested early checks when running on Hyper-V, i.e. the motivation for the change is purely to allow code consolidation. 
Fixes: 773e8a0425c9 ("x86/kvm: use Enlightened VMCS when running on Hyper-V")
Cc: Vitaly Kuznetsov
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9be3156f972d..d6d88dfad39b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2705,7 +2705,7 @@ static int nested_vmx_check_vmentry_postreqs(struct kvm_vcpu *vcpu,
 static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long cr3, cr4;
+	unsigned long cr3, cr4, evmcs_rsp;

 	if (!nested_early_check)
 		return 0;
@@ -2741,12 +2741,21 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)

 	vmx->__launched = vmx->loaded_vmcs->launched;

+	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+		(unsigned long)&current_evmcs->host_rsp : 0;
+
 	asm(
 		/* Set HOST_RSP */
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
 		"cmp %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
 		"je 1f \n\t"
 		"mov %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
+		/* Avoid VMWRITE when Enlightened VMCS is in use */
+		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
+		"jz 2f \n\t"
+		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
+		"jmp 1f \n\t"
+		"2: \n\t"
 		"mov $%c[HOST_RSP], %%" _ASM_DI " \n\t"
 		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DI) "\n\t"
 		"1: \n\t"
@@ -2759,8 +2768,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"

-	      : ASM_CALL_CONSTRAINT, "=D"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp),
+	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
+	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
 		[HOST_RSP]"i"(HOST_RSP),
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
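
[With the eVMCS leg added, the HOST_RSP update is easier to follow in C than in the asm that has to implement it (the kernel cannot do this from C because RSP itself is the value being synchronized). A rough, hypothetical C rendering: evmcs_host_rsp stands in for &current_evmcs->host_rsp and is NULL when no enlightened VMCS is in use, in which case the update falls back to VMWRITE:

#include <stdio.h>

/* Hypothetical stand-in for the real VMWRITE instruction. */
static void vmwrite_host_rsp(unsigned long rsp)
{
	printf("VMWRITE HOST_RSP = %#lx\n", rsp);
}

/*
 * cached_rsp mirrors loaded_vmcs->host_state.rsp; when an enlightened
 * VMCS is active, HOST_RSP lives in ordinary memory and a plain store
 * replaces the VMWRITE.
 */
static void sync_host_rsp(unsigned long *cached_rsp,
			  unsigned long *evmcs_host_rsp,
			  unsigned long current_rsp)
{
	if (*cached_rsp == current_rsp)		/* "cmp ... je 1f" */
		return;

	*cached_rsp = current_rsp;		/* "mov %rsp, (%rdi)" */
	if (evmcs_host_rsp)			/* "test %rsi, %rsi" */
		*evmcs_host_rsp = current_rsp;	/* "mov %rsp, (%rsi)" */
	else
		vmwrite_host_rsp(current_rsp);	/* "vmwrite %rsp, %rdi" */
}

int main(void)
{
	unsigned long cached = 0, evmcs = 0;

	sync_host_rsp(&cached, NULL, 0x1000);	/* VMCS path: VMWRITE */
	sync_host_rsp(&cached, &evmcs, 0x2000);	/* eVMCS path: plain store */
	return 0;
}
]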
From patchwork Thu Dec 20 20:30:51 2018
X-Patchwork-Id: 10739569
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 10/11] KVM: VMX: Add macros to handle HOST_RSP updates at VM-Enter
Date: Thu, 20 Dec 2018 12:30:51 -0800
Message-Id: <20181220203051.23256-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

...now that nested_vmx_check_vmentry_hw() conditionally synchronizes RSP with the {e,}VMCS, i.e. duplicates vmx_vcpu_run()'s esoteric RSP assembly blob.

Note that VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS "incorrectly" marks RDI as being clobbered (by sending it to a dummy output param).  RDI needs to be marked as clobbered in the vmx_vcpu_run() case, but trying to do so by adding RDI to the clobber list would generate a compiler error due to it being an input parameter.  Alternatively, vmx_vcpu_run() could manually specify '"=D"((int){0}),', but creating a subtle dependency on the macro's internals is more likely to cause problems than clobbering RDI unnecessarily in nested_vmx_check_vmentry_hw().

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 24 ++++--------------------
 arch/x86/kvm/vmx/vmx.c    | 21 ++++-----------------
 arch/x86/kvm/vmx/vmx.h    | 28 ++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d6d88dfad39b..99a972fac7e3 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2745,21 +2745,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		(unsigned long)&current_evmcs->host_rsp : 0;

 	asm(
-		/* Set HOST_RSP */
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
-		"je 1f \n\t"
-		"mov %%" _ASM_SP ", (%% " _ASM_DI") \n\t"
-		/* Avoid VMWRITE when Enlightened VMCS is in use */
-		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
-		"jz 2f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
-		"jmp 1f \n\t"
-		"2: \n\t"
-		"mov $%c[HOST_RSP], %%" _ASM_DI " \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DI) "\n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+		VMX_UPDATE_VMCS_HOST_RSP

 		/* Check if vmlaunch or vmresume is needed */
 		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
@@ -2768,12 +2754,10 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		/* Set vmx->fail accordingly */
 		"setbe %c[fail](%% " _ASM_CX")\n\t"

-	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
-		[HOST_RSP]"i"(HOST_RSP),
+	      : ASM_CALL_CONSTRAINT, VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS
+	      : "c"(vmx), VMX_UPDATE_VMCS_HOST_RSP_INPUTS,
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
-		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		[wordsize]"i"(sizeof(ulong))
+		[fail]"i"(offsetof(struct vcpu_vmx, fail))
 	      : "rax", "cc", "memory"
 	);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3ecb4c86a240..de709769f2ed 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6126,20 +6126,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"push %%" _ASM_BP " \n\t"
 		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		"je 1f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"
-		/* Avoid VMWRITE when Enlightened VMCS is in use */
-		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
-		"jz 2f \n\t"
-		"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"
-		"jmp 1f \n\t"
-		"2: \n\t"
-		"mov $%c[HOST_RSP], %%" _ASM_DX " \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+
+		VMX_UPDATE_VMCS_HOST_RSP

 		/* Reload cr2 if changed */
 		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
@@ -6221,11 +6209,10 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
 		"pop %%" _ASM_BP " \n\t"
-	      : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0})
-	      : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp),
+	      : ASM_CALL_CONSTRAINT, VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS
+	      : "c"(vmx), VMX_UPDATE_VMCS_HOST_RSP_INPUTS,
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
-		[HOST_RSP]"i"(HOST_RSP),
 		[rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])),
 		[rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])),
 		[rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])),
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 2138ddffb1cf..4fa17a7180ed 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -265,6 +265,34 @@ struct kvm_vmx {
 	spinlock_t ept_pointer_lock;
 };

+#define VMX_UPDATE_VMCS_HOST_RSP				\
+	/* Temporarily adjust RSP for CALL */			\
+	"sub $%c[stacksize], %%" _ASM_SP "\n\t"			\
+	"cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"		\
+	"je 2f \n\t"						\
+	"mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t"		\
+	/* Avoid VMWRITE when Enlightened VMCS is in use */	\
+	"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"		\
+	"jz 1f \n\t"						\
+	"mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t"		\
+	"jmp 2f \n\t"						\
+	"1: \n\t"						\
+	"mov $%c[HOST_RSP], %%" _ASM_SI " \n\t"			\
+	__ex("vmwrite %%" _ASM_SP ", %%" _ASM_SI) "\n\t"	\
+	"2: \n\t"						\
+	/* un-adjust RSP */					\
+	"add $%c[stacksize], %%" _ASM_SP "\n\t"
+
+#define VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS			\
+	"=D"((int){0}),						\
+	"=S"((int){0})
+
+#define VMX_UPDATE_VMCS_HOST_RSP_INPUTS				\
+	"D"(&vmx->loaded_vmcs->host_state.rsp),			\
+	"S"(evmcs_rsp),						\
+	[HOST_RSP]"i"(HOST_RSP),				\
+	[stacksize]"i"(sizeof(ulong))
+
 bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 void vmx_vcpu_put(struct kvm_vcpu *vcpu);
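
[The dummy-output trick behind VMX_UPDATE_VMCS_HOST_RSP_OUTPUTS can be reproduced in isolation. Below is a standalone sketch (hypothetical code, x86-64 GCC extended asm) of why the macro is written this way: naming an input's register in the clobber list is a hard compiler error, so the asm instead declares a throwaway output in that register, with the compound literal (int){0} providing a landing spot that needs no named variable:

#include <stdio.h>

/*
 * Reads *p via RDI, then destroys RDI.  "D"(p) makes RDI an input,
 * so listing "rdi" in the clobber list would not compile; the dummy
 * "=D"((int){0}) output tells the compiler RDI is dead instead.
 */
static unsigned long read_and_trash_rdi(unsigned long *p)
{
	unsigned long val;

	asm("mov (%%rdi), %0\n\t"
	    "xor %%edi, %%edi"
	    : "=r"(val), "=D"((int){0})
	    : "D"(p));
	return val;
}

int main(void)
{
	unsigned long x = 42;

	printf("%lu\n", read_and_trash_rdi(&x));
	return 0;
}
]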
From patchwork Thu Dec 20 20:30:53 2018
X-Patchwork-Id: 10739571
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Sean Christopherson
Subject: [PATCH 11/11] KVM: nVMX: Remove a rogue "rax" clobber from nested_vmx_check_vmentry_hw()
Date: Thu, 20 Dec 2018 12:30:53 -0800
Message-Id: <20181220203053.23299-1-sean.j.christopherson@intel.com>
In-Reply-To: <20181220202518.21442-1-sean.j.christopherson@intel.com>
References: <20181220202518.21442-1-sean.j.christopherson@intel.com>

RAX is not touched by nested_vmx_check_vmentry_hw(), either directly or indirectly, e.g. via VMX_UPDATE_VMCS_HOST_RSP, vmx_vmenter or fixup.  Remove it from the clobber list.

Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W")
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 99a972fac7e3..c847975d3724 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2758,7 +2758,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 	      : "c"(vmx), VMX_UPDATE_VMCS_HOST_RSP_INPUTS,
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail))
-	      : "rax", "cc", "memory"
+	      : "cc", "memory"
 	);

 	preempt_enable();
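
[The practical cost of a stale clobber like the "rax" above can be seen in a toy example (hypothetical, not from the series): the compiler must assume a clobbered register is destroyed by the asm, so it can never keep a live value in RAX across the statement, even when the asm body is empty.

/*
 * Build with "gcc -O2 -S" and compare the generated code: the stale
 * clobber takes RAX away from the register allocator across the asm
 * in the loop below, while dropping it leaves allocation unconstrained.
 */
static inline void stale_rax_clobber(void)
{
	asm volatile("" ::: "rax", "cc", "memory");
}

unsigned long sum(const unsigned long *a, int n)
{
	unsigned long s = 0;
	int i;

	for (i = 0; i < n; i++) {
		stale_rax_clobber();	/* 's' cannot live in RAX here */
		s += a[i];
	}
	return s;
}
]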