From patchwork Fri Jan 25 15:40:48 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781563 From: Sean Christopherson To: Paolo Bonzini, Radim Krčmář Cc: kvm@vger.kernel.org, Jim Mattson, Konrad Rzeszutek Wilk Subject: [PATCH v3 01/33] KVM: VMX: Compare only a single byte for VMCS' "launched" in vCPU-run Date: Fri, 25 Jan 2019 07:40:48 -0800 Message-Id: <20190125154120.19385-2-sean.j.christopherson@intel.com> In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> X-Mailing-List: kvm@vger.kernel.org The vCPU-run asm blob does a manual comparison of a VMCS' launched status to execute the correct VM-Enter instruction, i.e. VMLAUNCH vs. VMRESUME. The launched flag is a bool, which is a typedef of _Bool. C99 does not define an exact size for _Bool, stating only that it must be large enough to hold '0' and '1'. Most, if not all, compilers use a single byte for _Bool, including gcc[1]. Originally, 'launched' was of type 'int' and so the asm blob used 'cmpl' to check the launch status. When 'launched' was moved to be stored on a per-VMCS basis, struct vcpu_vmx's "temporary" __launched flag was added in order to avoid having to pass the current VMCS into the asm blob. The new '__launched' was defined as a 'bool' and not an 'int', but the 'cmp' instruction was not updated. This has not caused any known problems, likely due to compilers aligning variables to 4-byte or 8-byte boundaries and KVM zeroing out struct vcpu_vmx during allocation. I.e.
vCPU-run accesses "junk" data, it just happens to always be zero and so doesn't affect the result. [1] https://gcc.gnu.org/ml/gcc-patches/2000-10/msg01127.html Fixes: d462b8192368 ("KVM: VMX: Keep list of loaded VMCSs, instead of vcpus") Cc: Reviewed-by: Jim Mattson Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 85683b0d9ad3..57735ef11179 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6401,7 +6401,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "mov %%" _ASM_AX", %%cr2 \n\t" "3: \n\t" /* Check if vmlaunch or vmresume is needed */ - "cmpl $0, %c[launched](%%" _ASM_CX ") \n\t" + "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t" /* Load guest registers. Don't clobber flags. */ "mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t" "mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t" From patchwork Fri Jan 25 15:40:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781613 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6B682139A for ; Fri, 25 Jan 2019 15:42:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5A8E72F750 for ; Fri, 25 Jan 2019 15:42:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4F2DC2F7C2; Fri, 25 Jan 2019 15:42:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DF60D2F750 for ; Fri, 25 Jan 2019 15:42:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729145AbfAYPma (ORCPT ); Fri, 25 Jan 2019 10:42:30 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728746AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877869" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 02/33] KVM: nVMX: Check a single byte for VMCS "launched" in nested early checks Date: Fri, 25 Jan 2019 07:40:49 -0800 Message-Id: <20190125154120.19385-3-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using 
ClamSMTP The nested early checks code does a manual comparison of a VMCS' launched status in its asm blob to execute the correct VM-Enter instruction, i.e. VMLAUNCH vs. VMRESUME. The launched flag is a bool, which is a typedef of _Bool. C99 does not define an exact size for _Bool, stating only that it must be large enough to hold '0' and '1'. Most, if not all, compilers use a single byte for _Bool, including gcc[1]. The use of 'cmpl' instead of 'cmpb' was not deliberate, but rather the result of a copy-paste as the asm blob was directly derived from the asm blob for vCPU-run. This has not caused any known problems, likely due to compilers aligning variables to 4-byte or 8-byte boundaries and KVM zeroing out struct vcpu_vmx during allocation. I.e. vCPU-run accesses "junk" data, it just happens to always be zero and so doesn't affect the result. [1] https://gcc.gnu.org/ml/gcc-patches/2000-10/msg01127.html Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W") Cc: Reviewed-by: Jim Mattson Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 2616bd2c7f2c..cfcf67e548fa 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2760,7 +2760,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ /* Check if vmlaunch or vmresume is needed */ - "cmpl $0, %c[launched](%% " _ASM_CX")\n\t" + "cmpb $0, %c[launched](%% " _ASM_CX")\n\t" "call vmx_vmenter\n\t" From patchwork Fri Jan 25 15:40:50 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781617 From:
Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 03/33] KVM: VMX: Zero out *all* general purpose registers after VM-Exit Date: Fri, 25 Jan 2019 07:40:50 -0800 Message-Id: <20190125154120.19385-4-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...except RSP, which is restored by hardware as part of VM-Exit. Paolo theorized that restoring registers from the stack after a VM-Exit in lieu of zeroing them could lead to speculative execution with the guest's values, e.g. if the stack accesses miss the L1 cache[1]. Zeroing XORs are dirt cheap, so just be ultra-paranoid. Note that the scratch register (currently RCX) used to save/restore the guest state is also zeroed as its host-defined value is loaded via the stack, just with a MOV instead of a POP. [1] https://patchwork.kernel.org/patch/10771539/#22441255 Fixes: 0cb5b30698fd ("kvm: vmx: Scrub hardware GPRs at VM-exit") Cc: Cc: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 57735ef11179..a8de1d7f06e1 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6451,10 +6451,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "mov %%r13, %c[r13](%%" _ASM_CX ") \n\t" "mov %%r14, %c[r14](%%" _ASM_CX ") \n\t" "mov %%r15, %c[r15](%%" _ASM_CX ") \n\t" + /* - * Clear host registers marked as clobbered to prevent - * speculative use. - */ + * Clear all general purpose registers (except RSP, which is loaded by + * the CPU during VM-Exit) to prevent speculative use of the guest's + * values, even those that are saved/loaded via the stack. In theory, + * an L1 cache miss when restoring registers could lead to speculative + * execution with the guest's values. Zeroing XORs are dirt cheap, + * i.e. the extra paranoia is essentially free. 
+ */ "xor %%r8d, %%r8d \n\t" "xor %%r9d, %%r9d \n\t" "xor %%r10d, %%r10d \n\t" @@ -6469,8 +6474,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%eax, %%eax \n\t" "xor %%ebx, %%ebx \n\t" + "xor %%ecx, %%ecx \n\t" + "xor %%edx, %%edx \n\t" "xor %%esi, %%esi \n\t" "xor %%edi, %%edi \n\t" + "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" : ASM_CALL_CONSTRAINT : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp), From patchwork Fri Jan 25 15:40:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781559 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C8CB1139A for ; Fri, 25 Jan 2019 15:41:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B88BA2F750 for ; Fri, 25 Jan 2019 15:41:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AD1FB2FA05; Fri, 25 Jan 2019 15:41:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 429262F750 for ; Fri, 25 Jan 2019 15:41:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728736AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727826AbfAYPls (ORCPT ); Fri, 25 Jan 2019 10:41:48 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877871" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 04/33] KVM: VMX: Modify only RSP when creating a placeholder for guest's RCX Date: Fri, 25 Jan 2019 07:40:51 -0800 Message-Id: <20190125154120.19385-5-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the vCPU-run asm blob, the guest's RCX is temporarily saved onto the stack after VM-Exit as the exit flow must first load a register with a pointer to the vCPU's save area in order to save the guest's registers. RCX is arbitrarily designated as the scratch register. Since the stack usage is to (1)save host, (2)save guest, (3)load host and (4)load guest, the code can't conform to the stack's natural FIFO semantics, i.e. it can't simply do PUSH/POP. 
Regardless of whether it is done for the host's value or guest's value, at some point the code needs to access the stack using a non-traditional method, e.g. MOV instead of POP. vCPU-run opts to create a placeholder on the stack for guest's RCX (by adjusting RSP) and saves RCX to its place immediately after VM-Exit (via MOV). In other words, the purpose of the first 'PUSH RCX' at the start of the vCPU-run asm blob is to adjust RSP down, i.e. there's no need to actually access memory. Use 'SUB $wordsize, RSP' instead of 'PUSH RCX' to make it more obvious that the intent is simply to create a gap on the stack for the guest's RCX. Reviewed-by: Jim Mattson Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a8de1d7f06e1..45a7cda813c8 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6377,7 +6377,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) asm( /* Store host registers */ "push %%" _ASM_DX "; push %%" _ASM_BP ";" - "push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */ + "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest RCX */ "push %%" _ASM_CX " \n\t" "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ "cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t" From patchwork Fri Jan 25 15:40:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781561 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3941E1515 for ; Fri, 25 Jan 2019 15:41:52 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2948A2F750 for ; Fri, 25 Jan 2019 15:41:52 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1DDB02FA05; Fri, 25 Jan 2019 15:41:52 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A617D2F750 for ; Fri, 25 Jan 2019 15:41:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728913AbfAYPlv (ORCPT ); Fri, 25 Jan 2019 10:41:51 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728641AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877872" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 05/33] KVM: VMX: Save RSI to an unused output in the vCPU-run asm blob 
Date: Fri, 25 Jan 2019 07:40:52 -0800 Message-Id: <20190125154120.19385-6-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP RSI is clobbered by the vCPU-run asm blob, but it's not marked as such, probably because GCC doesn't let you mark inputs as clobbered. "Save" RSI to a dummy output so that GCC recognizes it as being clobbered. Fixes: 773e8a0425c9 ("x86/kvm: use Enlightened VMCS when running on Hyper-V") Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 45a7cda813c8..6f3cd19cbe3a 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6480,7 +6480,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" - : ASM_CALL_CONSTRAINT + : ASM_CALL_CONSTRAINT, "=S"((int){0}) : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [fail]"i"(offsetof(struct vcpu_vmx, fail)), From patchwork Fri Jan 25 15:40:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781609 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3ACE41515 for ; Fri, 25 Jan 2019 15:42:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2A04E2F750 for ; Fri, 25 Jan 2019 15:42:29 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1E9102F7C2; Fri, 25 Jan 2019 15:42:29 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B5EC32F750 for ; Fri, 25 Jan 2019 15:42:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729136AbfAYPm0 (ORCPT ); Fri, 25 Jan 2019 10:42:26 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727826AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877873" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 06/33] KVM: VMX: Manually load RDX in vCPU-run asm blob Date: Fri, 25 Jan 
2019 07:40:53 -0800 Message-Id: <20190125154120.19385-7-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Load RDX with the VMCS.HOST_RSP field encoding on-demand instead of delegating to the compiler via an input constraint. In addition to saving one whole MOV instruction, this allows RDX to be properly clobbered (in a future patch) instead of being saved/loaded to/from the stack. Despite nested_vmx_check_vmentry_hw() having similar code, leave it alone, for now. In that case, RDX is unconditionally used and isn't clobbered, i.e. sending in HOST_RSP as an input is simpler. Note that because HOST_RSP is an enum and not a define, it must be redefined as an immediate instead of using __stringify(HOST_RSP). The naming "conflict" between host_rsp and HOST_RSP is slightly confusing, but the former will be removed in a future patch, at which point HOST_RSP is absolutely what is desired. Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 6f3cd19cbe3a..5dcd38a74ef4 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6389,6 +6389,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t" "jmp 1f \n\t" "2: \n\t" + "mov $%c[HOST_RSP], %%" _ASM_DX " \n\t" __ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t" "1: \n\t" "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ @@ -6481,10 +6482,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" : ASM_CALL_CONSTRAINT, "=S"((int){0}) - : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp), + : "c"(vmx), "S"(evmcs_rsp), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [fail]"i"(offsetof(struct vcpu_vmx, fail)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), + [HOST_RSP]"i"(HOST_RSP), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])), From patchwork Fri Jan 25 15:40:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781615 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 816E51515 for ; Fri, 25 Jan 2019 15:42:34 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 701212F750 for ; Fri, 25 Jan 2019 15:42:34 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 646D92F7C2; Fri, 25 Jan 2019 15:42:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) 
with ESMTP id 0FFAC2F750 for ; Fri, 25 Jan 2019 15:42:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729240AbfAYPmd (ORCPT ); Fri, 25 Jan 2019 10:42:33 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728629AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877874" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 07/33] KVM: VMX: Let the compiler save/load RDX during vCPU-run Date: Fri, 25 Jan 2019 07:40:54 -0800 Message-Id: <20190125154120.19385-8-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Per commit c20363006af6 ("KVM: VMX: Let gcc to choose which registers to save (x86_64)"), the only reason RDX is saved/loaded to/from the stack is because it was specified as an input, i.e. couldn't be marked as clobbered (ignoring the fact that "saving" it to a dummy output would indirectly mark it as clobbered). Now that RDX is no longer an input, clobber it. 
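To make the constraint mechanics concrete, here is a stand-alone sketch (not the kernel blob; x86-64 GCC assumed) of the two ways these patches tell the compiler that an inline-asm register is destroyed: "saving" an input register to a dummy output, as was done for RSI, versus dropping the input entirely and listing the register as a clobber, as is done here for RDX:

#include <stdio.h>

/*
 * 'val' arrives in RSI and is destroyed; GCC refuses to list an input
 * register as a clobber, so a dummy "=S" output marks RSI as dead instead.
 */
static unsigned long use_dummy_output(unsigned long val)
{
	unsigned long ret;

	asm("mov %[v], %[r]\n\t"
	    "xor %%esi, %%esi"
	    : [r] "=r" (ret), "=S" ((int){0})
	    : [v] "S" (val)
	    : "cc");
	return ret;
}

/* No RDX input at all, so RDX can simply be declared clobbered. */
static unsigned long use_clobber(unsigned long val)
{
	unsigned long ret;

	asm("mov %[v], %%rdx\n\t"
	    "mov %%rdx, %[r]\n\t"
	    "xor %%edx, %%edx"
	    : [r] "=r" (ret)
	    : [v] "r" (val)
	    : "rdx", "cc");
	return ret;
}

int main(void)
{
	printf("%lu %lu\n", use_dummy_output(1UL), use_clobber(2UL));
	return 0;
}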
Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 5dcd38a74ef4..e3b06fecdfb5 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6376,7 +6376,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) asm( /* Store host registers */ - "push %%" _ASM_DX "; push %%" _ASM_BP ";" + "push %%" _ASM_BP " \n\t" "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest RCX */ "push %%" _ASM_CX " \n\t" "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ @@ -6480,7 +6480,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%esi, %%esi \n\t" "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" - "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" + "pop %%" _ASM_BP " \n\t" : ASM_CALL_CONSTRAINT, "=S"((int){0}) : "c"(vmx), "S"(evmcs_rsp), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), @@ -6508,10 +6508,10 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rbx", "rdi" + , "rax", "rbx", "rdx", "rdi" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "eax", "ebx", "edi" + , "eax", "ebx", "edx", "edi" #endif ); } From patchwork Fri Jan 25 15:40:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781555 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0F6A91515 for ; Fri, 25 Jan 2019 15:41:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F0CB52F750 for ; Fri, 25 Jan 2019 15:41:48 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E4ACE2FA05; Fri, 25 Jan 2019 15:41:48 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8AC752F750 for ; Fri, 25 Jan 2019 15:41:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726778AbfAYPlr (ORCPT ); Fri, 25 Jan 2019 10:41:47 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726122AbfAYPlr (ORCPT ); Fri, 25 Jan 2019 10:41:47 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877875" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 08/33] KVM: nVMX: Remove a rogue "rax" clobber from nested_vmx_check_vmentry_hw() Date: Fri, 25 Jan 2019 07:40:55 
-0800 Message-Id: <20190125154120.19385-9-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP RAX is not touched by nested_vmx_check_vmentry_hw(), directly or indirectly (e.g. vmx_vmenter()). Remove it from the clobber list. Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W") Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index cfcf67e548fa..7b4c45b2361e 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2772,7 +2772,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) [fail]"i"(offsetof(struct vcpu_vmx, fail)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), [wordsize]"i"(sizeof(ulong)) - : "rax", "cc", "memory" + : "cc", "memory" ); preempt_enable(); From patchwork Fri Jan 25 15:40:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781557 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BFE9617F0 for ; Fri, 25 Jan 2019 15:41:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id AFC7C2F7C2 for ; Fri, 25 Jan 2019 15:41:49 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A289F2F750; Fri, 25 Jan 2019 15:41:49 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4A9062F750 for ; Fri, 25 Jan 2019 15:41:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727749AbfAYPls (ORCPT ); Fri, 25 Jan 2019 10:41:48 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726126AbfAYPlr (ORCPT ); Fri, 25 Jan 2019 10:41:47 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877876" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 09/33] KVM: nVMX: Drop STACK_FRAME_NON_STANDARD from nested_vmx_check_vmentry_hw() Date: Fri, 25 Jan 2019 07:40:56 -0800 Message-Id: <20190125154120.19385-10-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...as it doesn't technically actually do anything non-standard with the stack even though it modifies RSP in a weird way. E.g. RSP is loaded with VMCS.HOST_RSP if the VM-Enter gets far enough to trigger VM-Exit, but it's simply reloaded with the current value. Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 7b4c45b2361e..671887f7d8c7 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2808,8 +2808,6 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) return 0; } -STACK_FRAME_NON_STANDARD(nested_vmx_check_vmentry_hw); - static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12); From patchwork Fri Jan 25 15:40:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781621 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 708F3139A for ; Fri, 25 Jan 2019 15:42:39 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5FF632F750 for ; Fri, 25 Jan 2019 15:42:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 545232F7C2; Fri, 25 Jan 2019 15:42:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0BC712F750 for ; Fri, 25 Jan 2019 15:42:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729206AbfAYPmi (ORCPT ); Fri, 25 Jan 2019 10:42:38 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726800AbfAYPls (ORCPT ); Fri, 25 Jan 2019 10:41:48 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877877" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 10/33] KVM: nVMX: Explicitly reference the scratch reg in nested early checks Date: Fri, 25 Jan 2019 07:40:57 -0800 Message-Id: <20190125154120.19385-11-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org 
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Using %1 to reference RCX, i.e. the 'vmx' pointer, is obtuse and fragile, e.g. it results in cryptic and infuriating compile errors if the output constraints are touched by anything more than a gentle breeze. Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 671887f7d8c7..b92291b8e54b 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2756,7 +2756,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) /* Set HOST_RSP */ "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ __ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t" - "mov %%" _ASM_SP ", %c[host_rsp](%1)\n\t" + "mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX ")\n\t" "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ /* Check if vmlaunch or vmresume is needed */ From patchwork Fri Jan 25 15:40:58 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781619 From: Sean Christopherson To: Paolo Bonzini, Radim Krčmář Cc: kvm@vger.kernel.org, Jim Mattson, Konrad Rzeszutek Wilk Subject: [PATCH v3 11/33] KVM: nVMX: Capture VM-Fail to a local var in nested_vmx_check_vmentry_hw() Date: Fri, 25 Jan 2019 07:40:58 -0800 Message-Id: <20190125154120.19385-12-sean.j.christopherson@intel.com> In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> X-Mailing-List: kvm@vger.kernel.org
Unlike the primary vCPU-run flow, the nested early checks code doesn't actually want to propagate VM-Fail back to 'vmx'. Yay copy+paste. In addition to eliminating the need to clear vmx->fail before returning, using a local boolean also drops a reference to 'vmx' in the asm blob. Dropping the reference to 'vmx' will save a register in the long run as future patches will shift all pointer references from 'vmx' to 'vmx->loaded_vmcs'. Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W") Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/nested.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index b92291b8e54b..e35a55bdb271 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2717,6 +2717,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); unsigned long cr3, cr4; + bool vm_fail; if (!nested_early_check) return 0; @@ -2762,14 +2763,18 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) /* Check if vmlaunch or vmresume is needed */ "cmpb $0, %c[launched](%% " _ASM_CX")\n\t" + /* + * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set + * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail + * Valid. vmx_vmenter() directly "returns" RFLAGS, and so the + * results of VM-Enter is captured via SETBE to vm_fail. + */ "call vmx_vmenter\n\t" - /* Set vmx->fail accordingly */ - "setbe %c[fail](%% " _ASM_CX")\n\t" - : ASM_CALL_CONSTRAINT + "setbe %[fail]\n\t" + : ASM_CALL_CONSTRAINT, [fail]"=qm"(vm_fail) : "c"(vmx), "d"((unsigned long)HOST_RSP), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), - [fail]"i"(offsetof(struct vcpu_vmx, fail)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), [wordsize]"i"(sizeof(ulong)) : "cc", "memory" @@ -2782,10 +2787,9 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) if (vmx->msr_autoload.guest.nr) vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); - if (vmx->fail) { + if (vm_fail) { WARN_ON_ONCE(vmcs_read32(VM_INSTRUCTION_ERROR) != VMXERR_ENTRY_INVALID_CONTROL_FIELD); - vmx->fail = 0; return 1; } From patchwork Fri Jan 25 15:40:59 2019 X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781607 Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729194AbfAYPm0 (ORCPT );
Fri, 25 Jan 2019 10:42:26 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728751AbfAYPlt (ORCPT ); Fri, 25 Jan 2019 10:41:49 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877879" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 12/33] KVM: nVMX: Capture VM-Fail via CC_{SET,OUT} in nested early checks Date: Fri, 25 Jan 2019 07:40:59 -0800 Message-Id: <20190125154120.19385-13-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to take advantage of __GCC_ASM_FLAG_OUTPUTS__ when possible. Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/nested.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index e35a55bdb271..15a8ae7bb247 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2767,12 +2767,12 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail * Valid. vmx_vmenter() directly "returns" RFLAGS, and so the - * results of VM-Enter is captured via SETBE to vm_fail. + * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail. 
*/ "call vmx_vmenter\n\t" - "setbe %[fail]\n\t" - : ASM_CALL_CONSTRAINT, [fail]"=qm"(vm_fail) + CC_SET(be) + : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) : "c"(vmx), "d"((unsigned long)HOST_RSP), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), From patchwork Fri Jan 25 15:41:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781611 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 99CFF1515 for ; Fri, 25 Jan 2019 15:42:30 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 893B22F750 for ; Fri, 25 Jan 2019 15:42:30 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7DC9F2F7C2; Fri, 25 Jan 2019 15:42:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D4BB72F750 for ; Fri, 25 Jan 2019 15:42:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728821AbfAYPm0 (ORCPT ); Fri, 25 Jan 2019 10:42:26 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728816AbfAYPlu (ORCPT ); Fri, 25 Jan 2019 10:41:50 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877880" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 13/33] KVM: nVMX: Reference vmx->loaded_vmcs->launched directly Date: Fri, 25 Jan 2019 07:41:00 -0800 Message-Id: <20190125154120.19385-14-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Temporarily propagating vmx->loaded_vmcs->launched to vmx->__launched is not functionally necessary, but rather was done historically to avoid passing both 'vmx' and 'loaded_vmcs' to the vCPU-run asm blob. Nested early checks inherited this behavior by virtue of copy+paste. A future patch will move HOST_RSP caching to be per-VMCS, i.e. store 'host_rsp' in loaded VMCS. Now that the reference to 'vmx->fail' is also gone from nested early checks, referencing 'loaded_vmcs' directly means we can drop the 'vmx' reference when introducing per-VMCS RSP caching. And it means __launched can be dropped from struct vcpu_vmx if/when vCPU-run receives similar treatment. 
Note the use of a named register constraint for 'loaded_vmcs'. Using RCX to hold 'vmx' was inherited from vCPU-run. In the vCPU-run case, the scratch register needs to be explicitly defined as it is crushed when loading guest state, i.e. deferring to the compiler would corrupt the pointer. Since nested early checks never loads guests state, it's a-ok to let the compiler pick any register. Naming the constraint avoids the fragility of referencing constraints via %1, %2, etc.., which breaks horribly when modifying constraints, and generally makes the asm blob more readable. Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/nested.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 15a8ae7bb247..d42134f836a6 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2751,8 +2751,6 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) vmx->loaded_vmcs->host_state.cr4 = cr4; } - vmx->__launched = vmx->loaded_vmcs->launched; - asm( /* Set HOST_RSP */ "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ @@ -2761,7 +2759,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ /* Check if vmlaunch or vmresume is needed */ - "cmpb $0, %c[launched](%% " _ASM_CX")\n\t" + "cmpb $0, %c[launched](%[loaded_vmcs])\n\t" /* * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set @@ -2774,7 +2772,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) CC_SET(be) : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) : "c"(vmx), "d"((unsigned long)HOST_RSP), - [launched]"i"(offsetof(struct vcpu_vmx, __launched)), + [loaded_vmcs]"r"(vmx->loaded_vmcs), + [launched]"i"(offsetof(struct loaded_vmcs, launched)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), [wordsize]"i"(sizeof(ulong)) : "cc", "memory" From patchwork Fri Jan 25 15:41:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781565 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 45782139A for ; Fri, 25 Jan 2019 15:41:54 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 33C952F750 for ; Fri, 25 Jan 2019 15:41:54 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2852E2FA05; Fri, 25 Jan 2019 15:41:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C56A22F750 for ; Fri, 25 Jan 2019 15:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728974AbfAYPlw (ORCPT ); Fri, 25 Jan 2019 10:41:52 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728839AbfAYPlu (ORCPT ); Fri, 25 Jan 2019 10:41:50 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by 
orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877881" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 14/33] KVM: nVMX: Let the compiler select the reg for holding HOST_RSP Date: Fri, 25 Jan 2019 07:41:01 -0800 Message-Id: <20190125154120.19385-15-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...and provide an explicit name for the constraint. Naming the input constraint makes the code self-documenting and also avoids the fragility of numerically referring to constraints, e.g. %4 breaks badly whenever the constraints are modified. Explicitly using RDX was inherited from vCPU-run, i.e. completely arbitrary. Even vCPU-run doesn't truly need to explicitly use RDX, but doing so is more robust as vCPU-run needs tight control over its register usage. Note that while the naming "conflict" between host_rsp and HOST_RSP is slightly confusing, the former will be renamed slightly in a future patch, at which point HOST_RSP is absolutely what is desired. Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/nested.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index d42134f836a6..48281b0684ca 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2752,9 +2752,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) } asm( - /* Set HOST_RSP */ "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ - __ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t" + __ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t" "mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX ")\n\t" "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ @@ -2771,7 +2770,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) CC_SET(be) : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) - : "c"(vmx), "d"((unsigned long)HOST_RSP), + : "c"(vmx), + [HOST_RSP]"r"((unsigned long)HOST_RSP), [loaded_vmcs]"r"(vmx->loaded_vmcs), [launched]"i"(offsetof(struct loaded_vmcs, launched)), [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), From patchwork Fri Jan 25 15:41:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781603 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C87011515 for ; Fri, 25 Jan 2019 15:42:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B6E0E2F750 for ; Fri, 25 Jan 2019 15:42:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AB9462F7C2; Fri, 25 Jan 2019 15:42:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org 
X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1ED862F750 for ; Fri, 25 Jan 2019 15:42:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728970AbfAYPmX (ORCPT ); Fri, 25 Jan 2019 10:42:23 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728821AbfAYPlu (ORCPT ); Fri, 25 Jan 2019 10:41:50 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877882" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 15/33] KVM: nVMX: Cache host_rsp on a per-VMCS basis Date: Fri, 25 Jan 2019 07:41:02 -0800 Message-Id: <20190125154120.19385-16-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Currently, host_rsp is cached on a per-vCPU basis, i.e. it's stored in struct vcpu_vmx. In non-nested usage the caching is for all intents and purposes 100% effective, e.g. only the first VMLAUNCH needs to synchronize VMCS.HOST_RSP since the call stack to vmx_vcpu_run() is identical each and every time. But when running a nested guest, KVM must invalidate the cache when switching the current VMCS as it can't guarantee the new VMCS has the same HOST_RSP as the previous VMCS. In other words, the cache loses almost all of its efficacy when running a nested VM. Move host_rsp to struct vmcs_host_state, which is per-VMCS, so that it is cached on a per-VMCS basis and restores its 100% hit rate when nested VMs are in play. Note that the host_rsp cache for vmcs02 essentially "breaks" when nested early checks are enabled as nested_vmx_check_vmentry_hw() will see a different RSP at the time of its VM-Enter. While it's possible to avoid even that VMCS.HOST_RSP synchronization, e.g. by employing a dedicated VM-Exit stack, there is little motivation for doing so as the overhead of two VMWRITEs (~55 cycles) is dwarfed by the overhead of the extra VMX transition (600+ cycles) and is a proverbial drop in the ocean relative to the total cost of a nested transition (10s of thousands of cycles).
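For reference, the per-VMCS cache boils down to the following C sketch; the field names match struct loaded_vmcs/vmcs_host_state as modified below, a later patch in this series adds an equivalent helper, and the helper name here is illustrative only.

        static inline void sync_vmcs_host_rsp(struct loaded_vmcs *loaded_vmcs,
                                              unsigned long host_rsp)
        {
                /* VMWRITE HOST_RSP only when it differs from the cached value. */
                if (unlikely(host_rsp != loaded_vmcs->host_state.rsp)) {
                        loaded_vmcs->host_state.rsp = host_rsp;
                        vmcs_writel(HOST_RSP, host_rsp);
                }
        }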
Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/nested.c | 24 ++++++------------------ arch/x86/kvm/vmx/vmcs.h | 1 + arch/x86/kvm/vmx/vmx.c | 13 ++++++------- arch/x86/kvm/vmx/vmx.h | 1 - 4 files changed, 13 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 48281b0684ca..5d5218a14fb3 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -1978,17 +1978,6 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12) if (vmx->nested.dirty_vmcs12 || vmx->nested.hv_evmcs) prepare_vmcs02_early_full(vmx, vmcs12); - /* - * HOST_RSP is normally set correctly in vmx_vcpu_run() just before - * entry, but only if the current (host) sp changed from the value - * we wrote last (vmx->host_rsp). This cache is no longer relevant - * if we switch vmcs, and rather than hold a separate cache per vmcs, - * here we just force the write to happen on entry. host_rsp will - * also be written unconditionally by nested_vmx_check_vmentry_hw() - * if we are doing early consistency checks via hardware. - */ - vmx->host_rsp = 0; - /* * PIN CONTROLS */ @@ -2753,8 +2742,11 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) asm( "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ + "cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" + "je 1f \n\t" __ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t" - "mov %%" _ASM_SP ", %c[host_rsp](%% " _ASM_CX ")\n\t" + "mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" + "1: \n\t" "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ /* Check if vmlaunch or vmresume is needed */ @@ -2770,11 +2762,10 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) CC_SET(be) : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) - : "c"(vmx), - [HOST_RSP]"r"((unsigned long)HOST_RSP), + : [HOST_RSP]"r"((unsigned long)HOST_RSP), [loaded_vmcs]"r"(vmx->loaded_vmcs), [launched]"i"(offsetof(struct loaded_vmcs, launched)), - [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), + [host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)), [wordsize]"i"(sizeof(ulong)) : "cc", "memory" ); @@ -3911,9 +3902,6 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, vmx_flush_tlb(vcpu, true); } - /* This is needed for same reason as it was needed in prepare_vmcs02 */ - vmx->host_rsp = 0; - /* Unpin physical memory we referred to in vmcs02 */ if (vmx->nested.apic_access_page) { kvm_release_page_dirty(vmx->nested.apic_access_page); diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h index 6def3ba88e3b..cb6079f8a227 100644 --- a/arch/x86/kvm/vmx/vmcs.h +++ b/arch/x86/kvm/vmx/vmcs.h @@ -34,6 +34,7 @@ struct vmcs_host_state { unsigned long cr4; /* May not match real cr4 */ unsigned long gs_base; unsigned long fs_base; + unsigned long rsp; u16 fs_sel, gs_sel, ldt_sel; #ifdef CONFIG_X86_64 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index e3b06fecdfb5..fff1e5b5febe 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6380,9 +6380,9 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest RCX */ "push %%" _ASM_CX " \n\t" "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ - "cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t" + "cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t" "je 1f \n\t" - "mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t" + "mov %%" _ASM_SP ", (%%" _ASM_DI 
") \n\t" /* Avoid VMWRITE when Enlightened VMCS is in use */ "test %%" _ASM_SI ", %%" _ASM_SI " \n\t" "jz 2f \n\t" @@ -6481,11 +6481,10 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP " \n\t" - : ASM_CALL_CONSTRAINT, "=S"((int){0}) - : "c"(vmx), "S"(evmcs_rsp), + : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0}) + : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp), [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [fail]"i"(offsetof(struct vcpu_vmx, fail)), - [host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)), [HOST_RSP]"i"(HOST_RSP), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), @@ -6508,10 +6507,10 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rbx", "rdx", "rdi" + , "rax", "rbx", "rdx" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "eax", "ebx", "edx", "edi" + , "eax", "ebx", "edx" #endif ); } diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 99328954c2fc..8e203b725928 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -175,7 +175,6 @@ struct nested_vmx { struct vcpu_vmx { struct kvm_vcpu vcpu; - unsigned long host_rsp; u8 fail; u8 msr_bitmap_mode; u32 exit_intr_info; From patchwork Fri Jan 25 15:41:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781589 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9C0AB1515 for ; Fri, 25 Jan 2019 15:42:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8BA732F750 for ; Fri, 25 Jan 2019 15:42:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 803892FA05; Fri, 25 Jan 2019 15:42:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 24B8A2F750 for ; Fri, 25 Jan 2019 15:42:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728984AbfAYPlx (ORCPT ); Fri, 25 Jan 2019 10:41:53 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728860AbfAYPlu (ORCPT ); Fri, 25 Jan 2019 10:41:50 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877883" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 16/33] KVM: VMX: Load/save guest CR2 via C 
code in __vmx_vcpu_run() Date: Fri, 25 Jan 2019 07:41:03 -0800 Message-Id: <20190125154120.19385-17-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to eliminate its parameter and struct vcpu_vmx offset definition from the assembly blob. Accessing CR2 from C versus assembly doesn't change the likelihood of taking a page fault (and modifying CR2) while it's loaded with the guest's value, so long as we don't do anything silly between accessing CR2 and VM-Enter/VM-Exit. Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/vmx.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index fff1e5b5febe..ef6b3fb2e98b 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6374,6 +6374,9 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) if (static_branch_unlikely(&vmx_l1d_should_flush)) vmx_l1d_flush(vcpu); + if (vcpu->arch.cr2 != read_cr2()) + write_cr2(vcpu->arch.cr2); + asm( /* Store host registers */ "push %%" _ASM_BP " \n\t" @@ -6394,13 +6397,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "1: \n\t" "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ - /* Reload cr2 if changed */ - "mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t" - "mov %%cr2, %%" _ASM_DX " \n\t" - "cmp %%" _ASM_AX ", %%" _ASM_DX " \n\t" - "je 3f \n\t" - "mov %%" _ASM_AX", %%cr2 \n\t" - "3: \n\t" /* Check if vmlaunch or vmresume is needed */ "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t" /* Load guest registers. Don't clobber flags. 
*/ @@ -6470,9 +6466,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%r14d, %%r14d \n\t" "xor %%r15d, %%r15d \n\t" #endif - "mov %%cr2, %%" _ASM_AX " \n\t" - "mov %%" _ASM_AX ", %c[cr2](%%" _ASM_CX ") \n\t" - "xor %%eax, %%eax \n\t" "xor %%ebx, %%ebx \n\t" "xor %%ecx, %%ecx \n\t" @@ -6503,7 +6496,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) [r14]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R14])), [r15]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R15])), #endif - [cr2]"i"(offsetof(struct vcpu_vmx, vcpu.arch.cr2)), [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 @@ -6513,6 +6505,8 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) , "eax", "ebx", "edx" #endif ); + + vcpu->arch.cr2 = read_cr2(); } STACK_FRAME_NON_STANDARD(__vmx_vcpu_run); From patchwork Fri Jan 25 15:41:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781599 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2A74B139A for ; Fri, 25 Jan 2019 15:42:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1A9CB2F750 for ; Fri, 25 Jan 2019 15:42:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0F24A2FA05; Fri, 25 Jan 2019 15:42:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 932B02F750 for ; Fri, 25 Jan 2019 15:42:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729103AbfAYPmU (ORCPT ); Fri, 25 Jan 2019 10:42:20 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726122AbfAYPlv (ORCPT ); Fri, 25 Jan 2019 10:41:51 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877884" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 17/33] KVM: VMX: Update VMCS.HOST_RSP via helper C function Date: Fri, 25 Jan 2019 07:41:04 -0800 Message-Id: <20190125154120.19385-18-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Providing a helper function to update HOST_RSP is visibly easier to read, and more importantly (for the future) eliminates two 
arguments to the VM-Enter assembly blob. Reducing the number of arguments to the asm blob is for all intents and purposes a prerequisite to moving the code to a proper assembly routine. It's not truly mandatory, but it greatly simplifies the future code, and the cost of the extra CALL+RET is negligible in the grand scheme. Note that although _ASM_ARG[1-3] can be used in the inline asm itself, the intput/output constraints need to be manually defined. gcc will actually compile with _ASM_ARG[1-3] specified as constraints, but what it actually ends up doing with the bogus constraint is unknown. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 51 +++++++++++++++++++++--------------------- 1 file changed, 26 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index ef6b3fb2e98b..70d1e951b33d 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6362,15 +6362,18 @@ static void vmx_update_hv_timer(struct kvm_vcpu *vcpu) vmx->loaded_vmcs->hv_timer_armed = false; } +void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) +{ + if (unlikely(host_rsp != vmx->loaded_vmcs->host_state.rsp)) { + vmx->loaded_vmcs->host_state.rsp = host_rsp; + vmcs_writel(HOST_RSP, host_rsp); + } +} + static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { - unsigned long evmcs_rsp; - vmx->__launched = vmx->loaded_vmcs->launched; - evmcs_rsp = static_branch_unlikely(&enable_evmcs) ? - (unsigned long)¤t_evmcs->host_rsp : 0; - if (static_branch_unlikely(&vmx_l1d_should_flush)) vmx_l1d_flush(vcpu); @@ -6381,21 +6384,14 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* Store host registers */ "push %%" _ASM_BP " \n\t" "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest RCX */ - "push %%" _ASM_CX " \n\t" - "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ - "cmp %%" _ASM_SP ", (%%" _ASM_DI ") \n\t" - "je 1f \n\t" - "mov %%" _ASM_SP ", (%%" _ASM_DI ") \n\t" - /* Avoid VMWRITE when Enlightened VMCS is in use */ - "test %%" _ASM_SI ", %%" _ASM_SI " \n\t" - "jz 2f \n\t" - "mov %%" _ASM_SP ", (%%" _ASM_SI ") \n\t" - "jmp 1f \n\t" - "2: \n\t" - "mov $%c[HOST_RSP], %%" _ASM_DX " \n\t" - __ex("vmwrite %%" _ASM_SP ", %%" _ASM_DX) "\n\t" - "1: \n\t" - "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ + "push %%" _ASM_ARG1 " \n\t" + + /* Adjust RSP to account for the CALL to vmx_vmenter(). */ + "lea -%c[wordsize](%%" _ASM_SP "), %%" _ASM_ARG2 " \n\t" + "call vmx_update_host_rsp \n\t" + + /* Load the vcpu_vmx pointer to RCX. 
*/ + "mov (%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Check if vmlaunch or vmresume is needed */ "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t" @@ -6474,11 +6470,16 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP " \n\t" - : ASM_CALL_CONSTRAINT, "=D"((int){0}), "=S"((int){0}) - : "c"(vmx), "D"(&vmx->loaded_vmcs->host_state.rsp), "S"(evmcs_rsp), + : ASM_CALL_CONSTRAINT, +#ifdef CONFIG_X86_64 + "=D"((int){0}) + : "D"(vmx), +#else + "=a"((int){0}) + : "a"(vmx), +#endif [launched]"i"(offsetof(struct vcpu_vmx, __launched)), [fail]"i"(offsetof(struct vcpu_vmx, fail)), - [HOST_RSP]"i"(HOST_RSP), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])), @@ -6499,10 +6500,10 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rbx", "rdx" + , "rax", "rbx", "rcx", "rdx", "rsi" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "eax", "ebx", "edx" + , "ebx", "ecx", "edx", "edi", "esi" #endif ); From patchwork Fri Jan 25 15:41:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781605 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 516B117F0 for ; Fri, 25 Jan 2019 15:42:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 40BA92F750 for ; Fri, 25 Jan 2019 15:42:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 34F722F761; Fri, 25 Jan 2019 15:42:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C40632FA05 for ; Fri, 25 Jan 2019 15:42:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728870AbfAYPmW (ORCPT ); Fri, 25 Jan 2019 10:42:22 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728877AbfAYPlu (ORCPT ); Fri, 25 Jan 2019 10:41:50 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877885" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 18/33] KVM: VMX: Pass "launched" directly to the vCPU-run asm blob Date: Fri, 25 Jan 2019 07:41:05 -0800 Message-Id: <20190125154120.19385-19-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...and remove struct vcpu_vmx's temporary __launched variable. Eliminating __launched is a bonus, the real motivation is to get to the point where the only reference to struct vcpu_vmx in the asm code is to vcpu.arch.regs, which will simplify moving the blob to a proper asm file. Note that also means this approach is deliberately different than what is used in nested_vmx_check_vmentry_hw(). Use BL as it is a non-volatile register in both 32-bit and 64-bit ABIs, i.e. it can't be modified by vmx_update_host_rsp(), to avoid having to temporarily save/restore the launched flag. Signed-off-by: Sean Christopherson Reviewed-by: Jim Mattson --- arch/x86/kvm/vmx/vmx.c | 13 ++++++------- arch/x86/kvm/vmx/vmx.h | 2 +- 2 files changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 70d1e951b33d..51ac6a2a1fc5 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6372,8 +6372,6 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { - vmx->__launched = vmx->loaded_vmcs->launched; - if (static_branch_unlikely(&vmx_l1d_should_flush)) vmx_l1d_flush(vcpu); @@ -6394,7 +6392,8 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "mov (%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Check if vmlaunch or vmresume is needed */ - "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t" + "cmpb $0, %%bl \n\t" + /* Load guest registers. Don't clobber flags. 
*/ "mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t" "mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t" @@ -6470,7 +6469,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" "pop %%" _ASM_BP " \n\t" - : ASM_CALL_CONSTRAINT, + : ASM_CALL_CONSTRAINT, "=b"((int){0}), #ifdef CONFIG_X86_64 "=D"((int){0}) : "D"(vmx), @@ -6478,7 +6477,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "=a"((int){0}) : "a"(vmx), #endif - [launched]"i"(offsetof(struct vcpu_vmx, __launched)), + "b"(vmx->loaded_vmcs->launched), [fail]"i"(offsetof(struct vcpu_vmx, fail)), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), @@ -6500,10 +6499,10 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rbx", "rcx", "rdx", "rsi" + , "rax", "rcx", "rdx", "rsi" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "ebx", "ecx", "edx", "edi", "esi" + , "ecx", "edx", "edi", "esi" #endif ); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 8e203b725928..6ee6a492efaf 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -208,7 +208,7 @@ struct vcpu_vmx { struct loaded_vmcs vmcs01; struct loaded_vmcs *loaded_vmcs; struct loaded_vmcs *loaded_cpu_state; - bool __launched; /* temporary, used in vmx_vcpu_run */ + struct msr_autoload { struct vmx_msrs guest; struct vmx_msrs host; From patchwork Fri Jan 25 15:41:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781587 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3329D139A for ; Fri, 25 Jan 2019 15:42:12 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 22A942F750 for ; Fri, 25 Jan 2019 15:42:12 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 170802FA05; Fri, 25 Jan 2019 15:42:12 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B5BE82F750 for ; Fri, 25 Jan 2019 15:42:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729007AbfAYPlx (ORCPT ); Fri, 25 Jan 2019 10:41:53 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728910AbfAYPlv (ORCPT ); Fri, 25 Jan 2019 10:41:51 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877886" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:46 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: 
kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 19/33] KVM: VMX: Invert the ordering of saving guest/host scratch reg at VM-Enter Date: Fri, 25 Jan 2019 07:41:06 -0800 Message-Id: <20190125154120.19385-20-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Switching the ordering allows for an out-of-line path for VM-Fail that elides saving guest state but still shares the register clearing with the VM-Exit path. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 51ac6a2a1fc5..b2e114144e13 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6381,7 +6381,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) asm( /* Store host registers */ "push %%" _ASM_BP " \n\t" - "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* placeholder for guest RCX */ "push %%" _ASM_ARG1 " \n\t" /* Adjust RSP to account for the CALL to vmx_vmenter(). */ @@ -6417,11 +6416,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* Enter guest mode */ "call vmx_vmenter\n\t" - /* Save guest's RCX to the stack placeholder (see above) */ - "mov %%" _ASM_CX ", %c[wordsize](%%" _ASM_SP ") \n\t" + /* Temporarily save guest's RCX. */ + "push %%" _ASM_CX " \n\t" - /* Load host's RCX, i.e. the vmx_vcpu pointer */ - "pop %%" _ASM_CX " \n\t" + /* Reload the vcpu_vmx pointer to RCX. */ + "mov %c[wordsize](%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Set vmx->fail based on EFLAGS.{CF,ZF} */ "setbe %c[fail](%%" _ASM_CX ")\n\t" @@ -6468,6 +6467,9 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%esi, %%esi \n\t" "xor %%edi, %%edi \n\t" "xor %%ebp, %%ebp \n\t" + + /* "POP" the vcpu_vmx pointer. 
*/ + "add $%c[wordsize], %%" _ASM_SP " \n\t" "pop %%" _ASM_BP " \n\t" : ASM_CALL_CONSTRAINT, "=b"((int){0}), #ifdef CONFIG_X86_64 From patchwork Fri Jan 25 15:41:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781567 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DCEF3139A for ; Fri, 25 Jan 2019 15:41:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CD1AD2F750 for ; Fri, 25 Jan 2019 15:41:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C1E022FA05; Fri, 25 Jan 2019 15:41:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 516142F750 for ; Fri, 25 Jan 2019 15:41:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729041AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728912AbfAYPlw (ORCPT ); Fri, 25 Jan 2019 10:41:52 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877887" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 20/33] KVM: VMX: Don't save guest registers after VM-Fail Date: Fri, 25 Jan 2019 07:41:07 -0800 Message-Id: <20190125154120.19385-21-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP A failed VM-Enter (obviously) didn't succeed, meaning the CPU never executed an instrunction in guest mode and so can't have changed the general purpose registers. In addition to saving some instructions in the VM-Fail case, this also provides a separate path entirely and thus an opportunity to propagate the fail condition to vmx->fail via register without introducing undue pain. Using a register, as opposed to directly referencing vmx->fail, eliminates the need to pass the offset of 'fail', which will simplify moving the code to proper assembly in future patches. 
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index b2e114144e13..0df574071d77 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6415,6 +6415,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* Enter guest mode */ "call vmx_vmenter\n\t" + "jbe 2f \n\t" /* Temporarily save guest's RCX. */ "push %%" _ASM_CX " \n\t" @@ -6422,9 +6423,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* Reload the vcpu_vmx pointer to RCX. */ "mov %c[wordsize](%%" _ASM_SP "), %%" _ASM_CX " \n\t" - /* Set vmx->fail based on EFLAGS.{CF,ZF} */ - "setbe %c[fail](%%" _ASM_CX ")\n\t" - /* Save all guest registers, including RCX from the stack */ "mov %%" _ASM_AX ", %c[rax](%%" _ASM_CX ") \n\t" "mov %%" _ASM_BX ", %c[rbx](%%" _ASM_CX ") \n\t" @@ -6442,15 +6440,22 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "mov %%r13, %c[r13](%%" _ASM_CX ") \n\t" "mov %%r14, %c[r14](%%" _ASM_CX ") \n\t" "mov %%r15, %c[r15](%%" _ASM_CX ") \n\t" +#endif + + /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). */ + "xor %%ebx, %%ebx \n\t" /* - * Clear all general purpose registers (except RSP, which is loaded by - * the CPU during VM-Exit) to prevent speculative use of the guest's - * values, even those that are saved/loaded via the stack. In theory, - * an L1 cache miss when restoring registers could lead to speculative - * execution with the guest's values. Zeroing XORs are dirt cheap, - * i.e. the extra paranoia is essentially free. + * Clear all general purpose registers except RSP and RBX to prevent + * speculative use of the guest's values, even those that are reloaded + * via the stack. In theory, an L1 cache miss when restoring registers + * could lead to speculative execution with the guest's values. + * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially + * free. RSP and RBX are exempt as RSP is restored by hardware during + * VM-Exit and RBX is explicitly loaded with 0 or 1 to "return" VM-Fail. */ + "1: \n\t" +#ifdef CONFIG_X86_64 "xor %%r8d, %%r8d \n\t" "xor %%r9d, %%r9d \n\t" "xor %%r10d, %%r10d \n\t" @@ -6461,7 +6466,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%r15d, %%r15d \n\t" #endif "xor %%eax, %%eax \n\t" - "xor %%ebx, %%ebx \n\t" "xor %%ecx, %%ecx \n\t" "xor %%edx, %%edx \n\t" "xor %%esi, %%esi \n\t" @@ -6471,7 +6475,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* "POP" the vcpu_vmx pointer. */ "add $%c[wordsize], %%" _ASM_SP " \n\t" "pop %%" _ASM_BP " \n\t" - : ASM_CALL_CONSTRAINT, "=b"((int){0}), + "jmp 3f \n\t" + + /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. 
*/ + "2: \n\t" + "mov $1, %%ebx \n\t" + "jmp 1b \n\t" + "3: \n\t" + + : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), #ifdef CONFIG_X86_64 "=D"((int){0}) : "D"(vmx), @@ -6480,7 +6492,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) : "a"(vmx), #endif "b"(vmx->loaded_vmcs->launched), - [fail]"i"(offsetof(struct vcpu_vmx, fail)), [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])), From patchwork Fri Jan 25 15:41:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781595 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D96861515 for ; Fri, 25 Jan 2019 15:42:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C8B1B2F750 for ; Fri, 25 Jan 2019 15:42:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BD8012FA05; Fri, 25 Jan 2019 15:42:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4F67B2F750 for ; Fri, 25 Jan 2019 15:42:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729190AbfAYPmR (ORCPT ); Fri, 25 Jan 2019 10:42:17 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728641AbfAYPlv (ORCPT ); Fri, 25 Jan 2019 10:41:51 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877888" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 21/33] KVM: VMX: Use vcpu->arch.regs directly when saving/loading guest state Date: Fri, 25 Jan 2019 07:41:08 -0800 Message-Id: <20190125154120.19385-22-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...now that all other references to struct vcpu_vmx have been removed. Note that 'vmx' still needs to be passed into the asm blob in _ASM_ARG1 as it is consumed by vmx_update_host_rsp(). And similar to that code, use _ASM_ARG2 in the assembly code to prepare for moving to proper asm, while explicitly referencing the exact registers in the clobber list for clarity in the short term and to avoid additional precompiler games. 
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 53 +++++++++++++++++++++++------------------- 1 file changed, 29 insertions(+), 24 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 0df574071d77..f5759c3a3bd3 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6381,13 +6381,18 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) asm( /* Store host registers */ "push %%" _ASM_BP " \n\t" - "push %%" _ASM_ARG1 " \n\t" + + /* + * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and + * @regs is needed after VM-Exit to save the guest's register values. + */ + "push %%" _ASM_ARG2 " \n\t" /* Adjust RSP to account for the CALL to vmx_vmenter(). */ "lea -%c[wordsize](%%" _ASM_SP "), %%" _ASM_ARG2 " \n\t" "call vmx_update_host_rsp \n\t" - /* Load the vcpu_vmx pointer to RCX. */ + /* Load RCX with @regs. */ "mov (%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Check if vmlaunch or vmresume is needed */ @@ -6420,7 +6425,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) /* Temporarily save guest's RCX. */ "push %%" _ASM_CX " \n\t" - /* Reload the vcpu_vmx pointer to RCX. */ + /* Reload RCX with @regs. */ "mov %c[wordsize](%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Save all guest registers, including RCX from the stack */ @@ -6485,37 +6490,37 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), #ifdef CONFIG_X86_64 - "=D"((int){0}) - : "D"(vmx), + "=D"((int){0}), "=S"((int){0}) + : "D"(vmx), "S"(&vcpu->arch.regs), #else - "=a"((int){0}) - : "a"(vmx), + "=a"((int){0}), "=d"((int){0}) + : "a"(vmx), "d"(&vcpu->arch.regs), #endif "b"(vmx->loaded_vmcs->launched), - [rax]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RAX])), - [rbx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBX])), - [rcx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RCX])), - [rdx]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RDX])), - [rsi]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RSI])), - [rdi]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RDI])), - [rbp]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_RBP])), + [rax]"i"(VCPU_REGS_RAX * sizeof(ulong)), + [rbx]"i"(VCPU_REGS_RBX * sizeof(ulong)), + [rcx]"i"(VCPU_REGS_RCX * sizeof(ulong)), + [rdx]"i"(VCPU_REGS_RDX * sizeof(ulong)), + [rsi]"i"(VCPU_REGS_RSI * sizeof(ulong)), + [rdi]"i"(VCPU_REGS_RDI * sizeof(ulong)), + [rbp]"i"(VCPU_REGS_RBP * sizeof(ulong)), #ifdef CONFIG_X86_64 - [r8]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R8])), - [r9]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R9])), - [r10]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R10])), - [r11]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R11])), - [r12]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R12])), - [r13]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R13])), - [r14]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R14])), - [r15]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R15])), + [r8]"i"(VCPU_REGS_R8 * sizeof(ulong)), + [r9]"i"(VCPU_REGS_R9 * sizeof(ulong)), + [r10]"i"(VCPU_REGS_R10 * sizeof(ulong)), + [r11]"i"(VCPU_REGS_R11 * sizeof(ulong)), + [r12]"i"(VCPU_REGS_R12 * sizeof(ulong)), + [r13]"i"(VCPU_REGS_R13 * sizeof(ulong)), + [r14]"i"(VCPU_REGS_R14 * sizeof(ulong)), + [r15]"i"(VCPU_REGS_R15 * sizeof(ulong)), #endif [wordsize]"i"(sizeof(ulong)) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rcx", "rdx", 
"rsi" + , "rax", "rcx", "rdx" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "ecx", "edx", "edi", "esi" + , "ecx", "edi", "esi" #endif ); From patchwork Fri Jan 25 15:41:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781575 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 105D41515 for ; Fri, 25 Jan 2019 15:42:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F2F422F750 for ; Fri, 25 Jan 2019 15:42:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E78912FA05; Fri, 25 Jan 2019 15:42:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 83E312F750 for ; Fri, 25 Jan 2019 15:42:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729038AbfAYPmA (ORCPT ); Fri, 25 Jan 2019 10:42:00 -0500 Received: from mga09.intel.com ([134.134.136.24]:54654 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728937AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877889" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 22/33] KVM: x86: Explicitly #define the VCPU_REGS_* indices Date: Fri, 25 Jan 2019 07:41:09 -0800 Message-Id: <20190125154120.19385-23-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Declaring the VCPU_REGS_* as enums allows for more robust C code, but it prevents using the values in assembly files. Expliciting #define the indices in an asm-friendly file to prepare for VMX moving its transition code to a proper assembly file, but keep the enums for general usage. 
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 33 ++++++++++++++-------------- arch/x86/include/asm/kvm_vcpu_regs.h | 25 +++++++++++++++++++++ 2 files changed, 42 insertions(+), 16 deletions(-) create mode 100644 arch/x86/include/asm/kvm_vcpu_regs.h diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 49f449f56434..b157d87b105b 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -35,6 +35,7 @@ #include #include #include +#include #include #define KVM_MAX_VCPUS 288 @@ -138,23 +139,23 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) #define ASYNC_PF_PER_VCPU (1 << ASYNC_PF_PER_VCPU_ORDER) enum kvm_reg { - VCPU_REGS_RAX = 0, - VCPU_REGS_RCX = 1, - VCPU_REGS_RDX = 2, - VCPU_REGS_RBX = 3, - VCPU_REGS_RSP = 4, - VCPU_REGS_RBP = 5, - VCPU_REGS_RSI = 6, - VCPU_REGS_RDI = 7, + VCPU_REGS_RAX = __VCPU_REGS_RAX, + VCPU_REGS_RCX = __VCPU_REGS_RCX, + VCPU_REGS_RDX = __VCPU_REGS_RDX, + VCPU_REGS_RBX = __VCPU_REGS_RBX, + VCPU_REGS_RSP = __VCPU_REGS_RSP, + VCPU_REGS_RBP = __VCPU_REGS_RBP, + VCPU_REGS_RSI = __VCPU_REGS_RSI, + VCPU_REGS_RDI = __VCPU_REGS_RDI, #ifdef CONFIG_X86_64 - VCPU_REGS_R8 = 8, - VCPU_REGS_R9 = 9, - VCPU_REGS_R10 = 10, - VCPU_REGS_R11 = 11, - VCPU_REGS_R12 = 12, - VCPU_REGS_R13 = 13, - VCPU_REGS_R14 = 14, - VCPU_REGS_R15 = 15, + VCPU_REGS_R8 = __VCPU_REGS_R8, + VCPU_REGS_R9 = __VCPU_REGS_R9, + VCPU_REGS_R10 = __VCPU_REGS_R10, + VCPU_REGS_R11 = __VCPU_REGS_R11, + VCPU_REGS_R12 = __VCPU_REGS_R12, + VCPU_REGS_R13 = __VCPU_REGS_R13, + VCPU_REGS_R14 = __VCPU_REGS_R14, + VCPU_REGS_R15 = __VCPU_REGS_R15, #endif VCPU_REGS_RIP, NR_VCPU_REGS diff --git a/arch/x86/include/asm/kvm_vcpu_regs.h b/arch/x86/include/asm/kvm_vcpu_regs.h new file mode 100644 index 000000000000..1af2cb59233b --- /dev/null +++ b/arch/x86/include/asm/kvm_vcpu_regs.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_KVM_VCPU_REGS_H +#define _ASM_X86_KVM_VCPU_REGS_H + +#define __VCPU_REGS_RAX 0 +#define __VCPU_REGS_RCX 1 +#define __VCPU_REGS_RDX 2 +#define __VCPU_REGS_RBX 3 +#define __VCPU_REGS_RSP 4 +#define __VCPU_REGS_RBP 5 +#define __VCPU_REGS_RSI 6 +#define __VCPU_REGS_RDI 7 + +#ifdef CONFIG_X86_64 +#define __VCPU_REGS_R8 8 +#define __VCPU_REGS_R9 9 +#define __VCPU_REGS_R10 10 +#define __VCPU_REGS_R11 11 +#define __VCPU_REGS_R12 12 +#define __VCPU_REGS_R13 13 +#define __VCPU_REGS_R14 14 +#define __VCPU_REGS_R15 15 +#endif + +#endif /* _ASM_X86_KVM_VCPU_REGS_H */ From patchwork Fri Jan 25 15:41:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781597 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4EE9E1515 for ; Fri, 25 Jan 2019 15:42:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3B28E2F750 for ; Fri, 25 Jan 2019 15:42:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2D0812FA05; Fri, 25 Jan 2019 15:42:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 763202F750 for ; Fri, 25 Jan 2019 15:42:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729172AbfAYPmQ (ORCPT ); Fri, 25 Jan 2019 10:42:16 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728970AbfAYPlw (ORCPT ); Fri, 25 Jan 2019 10:41:52 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877890" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 23/33] KVM: VMX: Use #defines in place of immediates in VM-Enter inline asm Date: Fri, 25 Jan 2019 07:41:10 -0800 Message-Id: <20190125154120.19385-24-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to prepare for moving the inline asm to a proper asm sub-routine. Eliminating the immediates allows a nearly verbatim move, e.g. quotes, newlines, tabs and __stringify need to be dropped, but other than those cosmetic changes the only functional change will be to replace the final "jmp" with a "ret". 
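The __stringify() mechanics are worth spelling out (the sketch below reuses the macro names added by this patch): the argument is macro-expanded before being stringified, so e.g. VCPU_RAX becomes the literal text "0 * 8" on 64-bit, which the assembler folds into a constant displacement, eliminating the need for an "i" input constraint.

        #define WORD_SIZE 8                     /* 64-bit case; the patch keys this off CONFIG_X86_64 */
        #define VCPU_RAX __stringify(__VCPU_REGS_RAX * WORD_SIZE)

        /*
         * "mov " VCPU_RAX "(%%rcx), %%rax" therefore assembles as
         *      mov 0 * 8(%rcx), %rax
         * i.e. the displacement is computed at assembly time.
         */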
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 113 ++++++++++++++++++++++------------------- 1 file changed, 61 insertions(+), 52 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index f5759c3a3bd3..24cf8fa4cb7c 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6370,6 +6370,33 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) } } +#ifdef CONFIG_X86_64 +#define WORD_SIZE 8 +#else +#define WORD_SIZE 4 +#endif + +#define _WORD_SIZE __stringify(WORD_SIZE) + +#define VCPU_RAX __stringify(__VCPU_REGS_RAX * WORD_SIZE) +#define VCPU_RCX __stringify(__VCPU_REGS_RCX * WORD_SIZE) +#define VCPU_RDX __stringify(__VCPU_REGS_RDX * WORD_SIZE) +#define VCPU_RBX __stringify(__VCPU_REGS_RBX * WORD_SIZE) +/* Intentionally omit %RSP as it's context switched by hardware */ +#define VCPU_RBP __stringify(__VCPU_REGS_RBP * WORD_SIZE) +#define VCPU_RSI __stringify(__VCPU_REGS_RSI * WORD_SIZE) +#define VCPU_RDI __stringify(__VCPU_REGS_RDI * WORD_SIZE) +#ifdef CONFIG_X86_64 +#define VCPU_R8 __stringify(__VCPU_REGS_R8 * WORD_SIZE) +#define VCPU_R9 __stringify(__VCPU_REGS_R9 * WORD_SIZE) +#define VCPU_R10 __stringify(__VCPU_REGS_R10 * WORD_SIZE) +#define VCPU_R11 __stringify(__VCPU_REGS_R11 * WORD_SIZE) +#define VCPU_R12 __stringify(__VCPU_REGS_R12 * WORD_SIZE) +#define VCPU_R13 __stringify(__VCPU_REGS_R13 * WORD_SIZE) +#define VCPU_R14 __stringify(__VCPU_REGS_R14 * WORD_SIZE) +#define VCPU_R15 __stringify(__VCPU_REGS_R15 * WORD_SIZE) +#endif + static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { if (static_branch_unlikely(&vmx_l1d_should_flush)) @@ -6389,7 +6416,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "push %%" _ASM_ARG2 " \n\t" /* Adjust RSP to account for the CALL to vmx_vmenter(). */ - "lea -%c[wordsize](%%" _ASM_SP "), %%" _ASM_ARG2 " \n\t" + "lea -" _WORD_SIZE "(%%" _ASM_SP "), %%" _ASM_ARG2 " \n\t" "call vmx_update_host_rsp \n\t" /* Load RCX with @regs. */ @@ -6399,24 +6426,24 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "cmpb $0, %%bl \n\t" /* Load guest registers. Don't clobber flags. 
*/ - "mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t" - "mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t" - "mov %c[rdx](%%" _ASM_CX "), %%" _ASM_DX " \n\t" - "mov %c[rsi](%%" _ASM_CX "), %%" _ASM_SI " \n\t" - "mov %c[rdi](%%" _ASM_CX "), %%" _ASM_DI " \n\t" - "mov %c[rbp](%%" _ASM_CX "), %%" _ASM_BP " \n\t" + "mov " VCPU_RAX "(%%" _ASM_CX "), %%" _ASM_AX " \n\t" + "mov " VCPU_RBX "(%%" _ASM_CX "), %%" _ASM_BX " \n\t" + "mov " VCPU_RDX "(%%" _ASM_CX "), %%" _ASM_DX " \n\t" + "mov " VCPU_RSI "(%%" _ASM_CX "), %%" _ASM_SI " \n\t" + "mov " VCPU_RDI "(%%" _ASM_CX "), %%" _ASM_DI " \n\t" + "mov " VCPU_RBP "(%%" _ASM_CX "), %%" _ASM_BP " \n\t" #ifdef CONFIG_X86_64 - "mov %c[r8](%%" _ASM_CX "), %%r8 \n\t" - "mov %c[r9](%%" _ASM_CX "), %%r9 \n\t" - "mov %c[r10](%%" _ASM_CX "), %%r10 \n\t" - "mov %c[r11](%%" _ASM_CX "), %%r11 \n\t" - "mov %c[r12](%%" _ASM_CX "), %%r12 \n\t" - "mov %c[r13](%%" _ASM_CX "), %%r13 \n\t" - "mov %c[r14](%%" _ASM_CX "), %%r14 \n\t" - "mov %c[r15](%%" _ASM_CX "), %%r15 \n\t" + "mov " VCPU_R8 "(%%" _ASM_CX "), %%r8 \n\t" + "mov " VCPU_R9 "(%%" _ASM_CX "), %%r9 \n\t" + "mov " VCPU_R10 "(%%" _ASM_CX "), %%r10 \n\t" + "mov " VCPU_R11 "(%%" _ASM_CX "), %%r11 \n\t" + "mov " VCPU_R12 "(%%" _ASM_CX "), %%r12 \n\t" + "mov " VCPU_R13 "(%%" _ASM_CX "), %%r13 \n\t" + "mov " VCPU_R14 "(%%" _ASM_CX "), %%r14 \n\t" + "mov " VCPU_R15 "(%%" _ASM_CX "), %%r15 \n\t" #endif /* Load guest RCX. This kills the vmx_vcpu pointer! */ - "mov %c[rcx](%%" _ASM_CX "), %%" _ASM_CX " \n\t" + "mov " VCPU_RCX"(%%" _ASM_CX "), %%" _ASM_CX " \n\t" /* Enter guest mode */ "call vmx_vmenter\n\t" @@ -6426,25 +6453,25 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "push %%" _ASM_CX " \n\t" /* Reload RCX with @regs. */ - "mov %c[wordsize](%%" _ASM_SP "), %%" _ASM_CX " \n\t" + "mov " _WORD_SIZE "(%%" _ASM_SP "), %%" _ASM_CX " \n\t" /* Save all guest registers, including RCX from the stack */ - "mov %%" _ASM_AX ", %c[rax](%%" _ASM_CX ") \n\t" - "mov %%" _ASM_BX ", %c[rbx](%%" _ASM_CX ") \n\t" - __ASM_SIZE(pop) " %c[rcx](%%" _ASM_CX ") \n\t" - "mov %%" _ASM_DX ", %c[rdx](%%" _ASM_CX ") \n\t" - "mov %%" _ASM_SI ", %c[rsi](%%" _ASM_CX ") \n\t" - "mov %%" _ASM_DI ", %c[rdi](%%" _ASM_CX ") \n\t" - "mov %%" _ASM_BP ", %c[rbp](%%" _ASM_CX ") \n\t" + "mov %%" _ASM_AX ", " VCPU_RAX "(%%" _ASM_CX ") \n\t" + "mov %%" _ASM_BX ", " VCPU_RBX "(%%" _ASM_CX ") \n\t" + __ASM_SIZE(pop) " " VCPU_RCX "(%%" _ASM_CX ") \n\t" + "mov %%" _ASM_DX ", " VCPU_RDX "(%%" _ASM_CX ") \n\t" + "mov %%" _ASM_SI ", " VCPU_RSI "(%%" _ASM_CX ") \n\t" + "mov %%" _ASM_DI ", " VCPU_RDI "(%%" _ASM_CX ") \n\t" + "mov %%" _ASM_BP ", " VCPU_RBP "(%%" _ASM_CX ") \n\t" #ifdef CONFIG_X86_64 - "mov %%r8, %c[r8](%%" _ASM_CX ") \n\t" - "mov %%r9, %c[r9](%%" _ASM_CX ") \n\t" - "mov %%r10, %c[r10](%%" _ASM_CX ") \n\t" - "mov %%r11, %c[r11](%%" _ASM_CX ") \n\t" - "mov %%r12, %c[r12](%%" _ASM_CX ") \n\t" - "mov %%r13, %c[r13](%%" _ASM_CX ") \n\t" - "mov %%r14, %c[r14](%%" _ASM_CX ") \n\t" - "mov %%r15, %c[r15](%%" _ASM_CX ") \n\t" + "mov %%r8, " VCPU_R8 "(%%" _ASM_CX ") \n\t" + "mov %%r9, " VCPU_R9 "(%%" _ASM_CX ") \n\t" + "mov %%r10, " VCPU_R10 "(%%" _ASM_CX ") \n\t" + "mov %%r11, " VCPU_R11 "(%%" _ASM_CX ") \n\t" + "mov %%r12, " VCPU_R12 "(%%" _ASM_CX ") \n\t" + "mov %%r13, " VCPU_R13 "(%%" _ASM_CX ") \n\t" + "mov %%r14, " VCPU_R14 "(%%" _ASM_CX ") \n\t" + "mov %%r15, " VCPU_R15 "(%%" _ASM_CX ") \n\t" #endif /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). 
*/ @@ -6478,7 +6505,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "xor %%ebp, %%ebp \n\t" /* "POP" the vcpu_vmx pointer. */ - "add $%c[wordsize], %%" _ASM_SP " \n\t" + "add $" _WORD_SIZE ", %%" _ASM_SP " \n\t" "pop %%" _ASM_BP " \n\t" "jmp 3f \n\t" @@ -6496,25 +6523,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) "=a"((int){0}), "=d"((int){0}) : "a"(vmx), "d"(&vcpu->arch.regs), #endif - "b"(vmx->loaded_vmcs->launched), - [rax]"i"(VCPU_REGS_RAX * sizeof(ulong)), - [rbx]"i"(VCPU_REGS_RBX * sizeof(ulong)), - [rcx]"i"(VCPU_REGS_RCX * sizeof(ulong)), - [rdx]"i"(VCPU_REGS_RDX * sizeof(ulong)), - [rsi]"i"(VCPU_REGS_RSI * sizeof(ulong)), - [rdi]"i"(VCPU_REGS_RDI * sizeof(ulong)), - [rbp]"i"(VCPU_REGS_RBP * sizeof(ulong)), -#ifdef CONFIG_X86_64 - [r8]"i"(VCPU_REGS_R8 * sizeof(ulong)), - [r9]"i"(VCPU_REGS_R9 * sizeof(ulong)), - [r10]"i"(VCPU_REGS_R10 * sizeof(ulong)), - [r11]"i"(VCPU_REGS_R11 * sizeof(ulong)), - [r12]"i"(VCPU_REGS_R12 * sizeof(ulong)), - [r13]"i"(VCPU_REGS_R13 * sizeof(ulong)), - [r14]"i"(VCPU_REGS_R14 * sizeof(ulong)), - [r15]"i"(VCPU_REGS_R15 * sizeof(ulong)), -#endif - [wordsize]"i"(sizeof(ulong)) + "b"(vmx->loaded_vmcs->launched) : "cc", "memory" #ifdef CONFIG_X86_64 , "rax", "rcx", "rdx" From patchwork Fri Jan 25 15:41:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781593 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CC60B139A for ; Fri, 25 Jan 2019 15:42:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BC0292F750 for ; Fri, 25 Jan 2019 15:42:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B0CE72FA05; Fri, 25 Jan 2019 15:42:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 684112F750 for ; Fri, 25 Jan 2019 15:42:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728933AbfAYPmP (ORCPT ); Fri, 25 Jan 2019 10:42:15 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728839AbfAYPlx (ORCPT ); Fri, 25 Jan 2019 10:41:53 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877891" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 24/33] KVM: VMX: Create a stack frame in vCPU-run Date: Fri, 25 Jan 2019 07:41:11 -0800 Message-Id: <20190125154120.19385-25-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: 
<20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...in preparation for moving to a proper assembly sub-routine. vCPU-run isn't a leaf function since it calls vmx_update_host_rsp() and vmx_vmenter(). And since we need to save/restore RBP anyways, unconditionally creating the frame costs a single MOV, i.e. don't bother keying off CONFIG_FRAME_POINTER or using FRAME_BEGIN, etc... Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 24cf8fa4cb7c..206da8e49b04 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6406,8 +6406,8 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) write_cr2(vcpu->arch.cr2); asm( - /* Store host registers */ "push %%" _ASM_BP " \n\t" + "mov %%" _ASM_SP ", %%" _ASM_BP " \n\t" /* * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and From patchwork Fri Jan 25 15:41:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781591 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4025E1515 for ; Fri, 25 Jan 2019 15:42:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2E6612F750 for ; Fri, 25 Jan 2019 15:42:15 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 22E9F2FA05; Fri, 25 Jan 2019 15:42:15 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 39FD52F750 for ; Fri, 25 Jan 2019 15:42:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728986AbfAYPmN (ORCPT ); Fri, 25 Jan 2019 10:42:13 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726451AbfAYPlx (ORCPT ); Fri, 25 Jan 2019 10:41:53 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877892" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 25/33] KVM: VMX: Move vCPU-run code to a proper assembly routine Date: Fri, 25 Jan 2019 07:41:12 -0800 Message-Id: <20190125154120.19385-26-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: 
<20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP As evidenced by the myriad patches leading up to this moment, using an inline asm blob for vCPU-run is nothing short of horrific. It's also been called "unholy", "an abomination" and likely a whole host of other names that would violate the Code of Conduct if recorded here and now. The code is relocated nearly verbatim, e.g. quotes, newlines, tabs and __stringify need to be dropped, but other than those cosmetic changes the only functional changees are to add the "call" and replace the final "jmp" with a "ret". Note that STACK_FRAME_NON_STANDARD is also dropped from __vmx_vcpu_run(). Suggested-by: Andi Kleen Suggested-by: Josh Poimboeuf Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 147 +++++++++++++++++++++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 138 +--------------------------------- 2 files changed, 148 insertions(+), 137 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index bcef2c7e9bc4..db223cfe9812 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -1,6 +1,33 @@ /* SPDX-License-Identifier: GPL-2.0 */ #include #include +#include + +#ifdef CONFIG_X86_64 +#define WORD_SIZE 8 +#else +#define WORD_SIZE 4 +#endif + +#define VCPU_RAX __VCPU_REGS_RAX * WORD_SIZE +#define VCPU_RCX __VCPU_REGS_RCX * WORD_SIZE +#define VCPU_RDX __VCPU_REGS_RDX * WORD_SIZE +#define VCPU_RBX __VCPU_REGS_RBX * WORD_SIZE +/* Intentionally omit RSP as it's context switched by hardware */ +#define VCPU_RBP __VCPU_REGS_RBP * WORD_SIZE +#define VCPU_RSI __VCPU_REGS_RSI * WORD_SIZE +#define VCPU_RDI __VCPU_REGS_RDI * WORD_SIZE + +#ifdef CONFIG_X86_64 +#define VCPU_R8 __VCPU_REGS_R8 * WORD_SIZE +#define VCPU_R9 __VCPU_REGS_R9 * WORD_SIZE +#define VCPU_R10 __VCPU_REGS_R10 * WORD_SIZE +#define VCPU_R11 __VCPU_REGS_R11 * WORD_SIZE +#define VCPU_R12 __VCPU_REGS_R12 * WORD_SIZE +#define VCPU_R13 __VCPU_REGS_R13 * WORD_SIZE +#define VCPU_R14 __VCPU_REGS_R14 * WORD_SIZE +#define VCPU_R15 __VCPU_REGS_R15 * WORD_SIZE +#endif .text @@ -55,3 +82,123 @@ ENDPROC(vmx_vmenter) ENTRY(vmx_vmexit) ret ENDPROC(vmx_vmexit) + +/** + * ____vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode + * @vmx: struct vcpu_vmx * + * @regs: unsigned long * (to guest registers) + * %RBX: VMCS launched status (non-zero indicates already launched) + * + * Returns: + * %RBX is 0 on VM-Exit, 1 on VM-Fail + */ +ENTRY(____vmx_vcpu_run) + push %_ASM_BP + mov %_ASM_SP, %_ASM_BP + + /* + * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and + * @regs is needed after VM-Exit to save the guest's register values. + */ + push %_ASM_ARG2 + + /* Adjust RSP to account for the CALL to vmx_vmenter(). */ + lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2 + call vmx_update_host_rsp + + /* Load @regs to RCX. */ + mov (%_ASM_SP), %_ASM_CX + + /* Check if vmlaunch or vmresume is needed */ + cmpb $0, %bl + + /* Load guest registers. Don't clobber flags. 
*/ + mov VCPU_RAX(%_ASM_CX), %_ASM_AX + mov VCPU_RBX(%_ASM_CX), %_ASM_BX + mov VCPU_RDX(%_ASM_CX), %_ASM_DX + mov VCPU_RSI(%_ASM_CX), %_ASM_SI + mov VCPU_RDI(%_ASM_CX), %_ASM_DI + mov VCPU_RBP(%_ASM_CX), %_ASM_BP +#ifdef CONFIG_X86_64 + mov VCPU_R8 (%_ASM_CX), %r8 + mov VCPU_R9 (%_ASM_CX), %r9 + mov VCPU_R10(%_ASM_CX), %r10 + mov VCPU_R11(%_ASM_CX), %r11 + mov VCPU_R12(%_ASM_CX), %r12 + mov VCPU_R13(%_ASM_CX), %r13 + mov VCPU_R14(%_ASM_CX), %r14 + mov VCPU_R15(%_ASM_CX), %r15 +#endif + /* Load guest RCX. This kills the vmx_vcpu pointer! */ + mov VCPU_RCX(%_ASM_CX), %_ASM_CX + + /* Enter guest mode */ + call vmx_vmenter + + /* Jump on VM-Fail. */ + jbe 2f + + /* Temporarily save guest's RCX. */ + push %_ASM_CX + + /* Reload @regs to RCX. */ + mov WORD_SIZE(%_ASM_SP), %_ASM_CX + + /* Save all guest registers, including RCX from the stack */ + mov %_ASM_AX, VCPU_RAX(%_ASM_CX) + mov %_ASM_BX, VCPU_RBX(%_ASM_CX) + __ASM_SIZE(pop) VCPU_RCX(%_ASM_CX) + mov %_ASM_DX, VCPU_RDX(%_ASM_CX) + mov %_ASM_SI, VCPU_RSI(%_ASM_CX) + mov %_ASM_DI, VCPU_RDI(%_ASM_CX) + mov %_ASM_BP, VCPU_RBP(%_ASM_CX) +#ifdef CONFIG_X86_64 + mov %r8, VCPU_R8 (%_ASM_CX) + mov %r9, VCPU_R9 (%_ASM_CX) + mov %r10, VCPU_R10(%_ASM_CX) + mov %r11, VCPU_R11(%_ASM_CX) + mov %r12, VCPU_R12(%_ASM_CX) + mov %r13, VCPU_R13(%_ASM_CX) + mov %r14, VCPU_R14(%_ASM_CX) + mov %r15, VCPU_R15(%_ASM_CX) +#endif + + /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). */ + xor %ebx, %ebx + + /* + * Clear all general purpose registers except RSP and RBX to prevent + * speculative use of the guest's values, even those that are reloaded + * via the stack. In theory, an L1 cache miss when restoring registers + * could lead to speculative execution with the guest's values. + * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially + * free. RSP and RBX are exempt as RSP is restored by hardware during + * VM-Exit and RBX is explicitly loaded with 0 or 1 to "return" VM-Fail. + */ +1: +#ifdef CONFIG_X86_64 + xor %r8d, %r8d + xor %r9d, %r9d + xor %r10d, %r10d + xor %r11d, %r11d + xor %r12d, %r12d + xor %r13d, %r13d + xor %r14d, %r14d + xor %r15d, %r15d +#endif + xor %eax, %eax + xor %ecx, %ecx + xor %edx, %edx + xor %esi, %esi + xor %edi, %edi + xor %ebp, %ebp + + /* "POP" @regs. */ + add $WORD_SIZE, %_ASM_SP + pop %_ASM_BP + ret + + /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. 
*/ +2: mov $1, %ebx + jmp 1b +ENDPROC(____vmx_vcpu_run) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 206da8e49b04..76b68492e077 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6370,33 +6370,6 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) } } -#ifdef CONFIG_X86_64 -#define WORD_SIZE 8 -#else -#define WORD_SIZE 4 -#endif - -#define _WORD_SIZE __stringify(WORD_SIZE) - -#define VCPU_RAX __stringify(__VCPU_REGS_RAX * WORD_SIZE) -#define VCPU_RCX __stringify(__VCPU_REGS_RCX * WORD_SIZE) -#define VCPU_RDX __stringify(__VCPU_REGS_RDX * WORD_SIZE) -#define VCPU_RBX __stringify(__VCPU_REGS_RBX * WORD_SIZE) -/* Intentionally omit %RSP as it's context switched by hardware */ -#define VCPU_RBP __stringify(__VCPU_REGS_RBP * WORD_SIZE) -#define VCPU_RSI __stringify(__VCPU_REGS_RSI * WORD_SIZE) -#define VCPU_RDI __stringify(__VCPU_REGS_RDI * WORD_SIZE) -#ifdef CONFIG_X86_64 -#define VCPU_R8 __stringify(__VCPU_REGS_R8 * WORD_SIZE) -#define VCPU_R9 __stringify(__VCPU_REGS_R9 * WORD_SIZE) -#define VCPU_R10 __stringify(__VCPU_REGS_R10 * WORD_SIZE) -#define VCPU_R11 __stringify(__VCPU_REGS_R11 * WORD_SIZE) -#define VCPU_R12 __stringify(__VCPU_REGS_R12 * WORD_SIZE) -#define VCPU_R13 __stringify(__VCPU_REGS_R13 * WORD_SIZE) -#define VCPU_R14 __stringify(__VCPU_REGS_R14 * WORD_SIZE) -#define VCPU_R15 __stringify(__VCPU_REGS_R15 * WORD_SIZE) -#endif - static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { if (static_branch_unlikely(&vmx_l1d_should_flush)) @@ -6406,115 +6379,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) write_cr2(vcpu->arch.cr2); asm( - "push %%" _ASM_BP " \n\t" - "mov %%" _ASM_SP ", %%" _ASM_BP " \n\t" - - /* - * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and - * @regs is needed after VM-Exit to save the guest's register values. - */ - "push %%" _ASM_ARG2 " \n\t" - - /* Adjust RSP to account for the CALL to vmx_vmenter(). */ - "lea -" _WORD_SIZE "(%%" _ASM_SP "), %%" _ASM_ARG2 " \n\t" - "call vmx_update_host_rsp \n\t" - - /* Load RCX with @regs. */ - "mov (%%" _ASM_SP "), %%" _ASM_CX " \n\t" - - /* Check if vmlaunch or vmresume is needed */ - "cmpb $0, %%bl \n\t" - - /* Load guest registers. Don't clobber flags. */ - "mov " VCPU_RAX "(%%" _ASM_CX "), %%" _ASM_AX " \n\t" - "mov " VCPU_RBX "(%%" _ASM_CX "), %%" _ASM_BX " \n\t" - "mov " VCPU_RDX "(%%" _ASM_CX "), %%" _ASM_DX " \n\t" - "mov " VCPU_RSI "(%%" _ASM_CX "), %%" _ASM_SI " \n\t" - "mov " VCPU_RDI "(%%" _ASM_CX "), %%" _ASM_DI " \n\t" - "mov " VCPU_RBP "(%%" _ASM_CX "), %%" _ASM_BP " \n\t" -#ifdef CONFIG_X86_64 - "mov " VCPU_R8 "(%%" _ASM_CX "), %%r8 \n\t" - "mov " VCPU_R9 "(%%" _ASM_CX "), %%r9 \n\t" - "mov " VCPU_R10 "(%%" _ASM_CX "), %%r10 \n\t" - "mov " VCPU_R11 "(%%" _ASM_CX "), %%r11 \n\t" - "mov " VCPU_R12 "(%%" _ASM_CX "), %%r12 \n\t" - "mov " VCPU_R13 "(%%" _ASM_CX "), %%r13 \n\t" - "mov " VCPU_R14 "(%%" _ASM_CX "), %%r14 \n\t" - "mov " VCPU_R15 "(%%" _ASM_CX "), %%r15 \n\t" -#endif - /* Load guest RCX. This kills the vmx_vcpu pointer! */ - "mov " VCPU_RCX"(%%" _ASM_CX "), %%" _ASM_CX " \n\t" - - /* Enter guest mode */ - "call vmx_vmenter\n\t" - "jbe 2f \n\t" - - /* Temporarily save guest's RCX. */ - "push %%" _ASM_CX " \n\t" - - /* Reload RCX with @regs. 
*/ - "mov " _WORD_SIZE "(%%" _ASM_SP "), %%" _ASM_CX " \n\t" - - /* Save all guest registers, including RCX from the stack */ - "mov %%" _ASM_AX ", " VCPU_RAX "(%%" _ASM_CX ") \n\t" - "mov %%" _ASM_BX ", " VCPU_RBX "(%%" _ASM_CX ") \n\t" - __ASM_SIZE(pop) " " VCPU_RCX "(%%" _ASM_CX ") \n\t" - "mov %%" _ASM_DX ", " VCPU_RDX "(%%" _ASM_CX ") \n\t" - "mov %%" _ASM_SI ", " VCPU_RSI "(%%" _ASM_CX ") \n\t" - "mov %%" _ASM_DI ", " VCPU_RDI "(%%" _ASM_CX ") \n\t" - "mov %%" _ASM_BP ", " VCPU_RBP "(%%" _ASM_CX ") \n\t" -#ifdef CONFIG_X86_64 - "mov %%r8, " VCPU_R8 "(%%" _ASM_CX ") \n\t" - "mov %%r9, " VCPU_R9 "(%%" _ASM_CX ") \n\t" - "mov %%r10, " VCPU_R10 "(%%" _ASM_CX ") \n\t" - "mov %%r11, " VCPU_R11 "(%%" _ASM_CX ") \n\t" - "mov %%r12, " VCPU_R12 "(%%" _ASM_CX ") \n\t" - "mov %%r13, " VCPU_R13 "(%%" _ASM_CX ") \n\t" - "mov %%r14, " VCPU_R14 "(%%" _ASM_CX ") \n\t" - "mov %%r15, " VCPU_R15 "(%%" _ASM_CX ") \n\t" -#endif - - /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). */ - "xor %%ebx, %%ebx \n\t" - - /* - * Clear all general purpose registers except RSP and RBX to prevent - * speculative use of the guest's values, even those that are reloaded - * via the stack. In theory, an L1 cache miss when restoring registers - * could lead to speculative execution with the guest's values. - * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially - * free. RSP and RBX are exempt as RSP is restored by hardware during - * VM-Exit and RBX is explicitly loaded with 0 or 1 to "return" VM-Fail. - */ - "1: \n\t" -#ifdef CONFIG_X86_64 - "xor %%r8d, %%r8d \n\t" - "xor %%r9d, %%r9d \n\t" - "xor %%r10d, %%r10d \n\t" - "xor %%r11d, %%r11d \n\t" - "xor %%r12d, %%r12d \n\t" - "xor %%r13d, %%r13d \n\t" - "xor %%r14d, %%r14d \n\t" - "xor %%r15d, %%r15d \n\t" -#endif - "xor %%eax, %%eax \n\t" - "xor %%ecx, %%ecx \n\t" - "xor %%edx, %%edx \n\t" - "xor %%esi, %%esi \n\t" - "xor %%edi, %%edi \n\t" - "xor %%ebp, %%ebp \n\t" - - /* "POP" the vcpu_vmx pointer. */ - "add $" _WORD_SIZE ", %%" _ASM_SP " \n\t" - "pop %%" _ASM_BP " \n\t" - "jmp 3f \n\t" - - /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. 
*/ - "2: \n\t" - "mov $1, %%ebx \n\t" - "jmp 1b \n\t" - "3: \n\t" - + "call ____vmx_vcpu_run \n\t" : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), #ifdef CONFIG_X86_64 "=D"((int){0}), "=S"((int){0}) @@ -6535,7 +6400,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) vcpu->arch.cr2 = read_cr2(); } -STACK_FRAME_NON_STANDARD(__vmx_vcpu_run); static void vmx_vcpu_run(struct kvm_vcpu *vcpu) { From patchwork Fri Jan 25 15:41:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781585 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ECF01139A for ; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DC7802F750 for ; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D12B92FA05; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 772A02F750 for ; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729124AbfAYPmI (ORCPT ); Fri, 25 Jan 2019 10:42:08 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728986AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877893" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 26/33] KVM: VMX: Fold __vmx_vcpu_run() back into vmx_vcpu_run() Date: Fri, 25 Jan 2019 07:41:13 -0800 Message-Id: <20190125154120.19385-27-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...now that the code is no longer tagged with STACK_FRAME_NON_STANDARD. Arguably, providing __vmx_vcpu_run() to break up vmx_vcpu_run() is valuable on its own, but the previous split was purposely made as small as possible to limit the effects of STACK_FRAME_NON_STANDARD. In other words, the current split is now completely arbitrary and likely not the most logical. This also allows renaming ____vmx_vcpu_run() to __vmx_vcpu_run() in a future patch. 
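For context, a minimal sketch of how the annotation in question is applied; illustrative only, the wrapper name below is made up. In kernels of this vintage STACK_FRAME_NON_STANDARD() comes from <linux/frame.h> and has to be attached to a named function, which is why the inline-asm blob was given a tiny wrapper of its own in the first place:

#include <linux/frame.h>

static void sketch_nonstandard_frame_wrapper(void)
{
	/* inline asm that objtool cannot validate would live here */
}
/* Exempt the wrapper from objtool's stack-frame validation. */
STACK_FRAME_NON_STANDARD(sketch_nonstandard_frame_wrapper);

With the asm now living in vmenter.S, the annotation is gone and the wrapper no longer needs to be a separate function.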
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 59 +++++++++++++++++++----------------------- 1 file changed, 27 insertions(+), 32 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 76b68492e077..3adfe7ab7438 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6370,37 +6370,6 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) } } -static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) -{ - if (static_branch_unlikely(&vmx_l1d_should_flush)) - vmx_l1d_flush(vcpu); - - if (vcpu->arch.cr2 != read_cr2()) - write_cr2(vcpu->arch.cr2); - - asm( - "call ____vmx_vcpu_run \n\t" - : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), -#ifdef CONFIG_X86_64 - "=D"((int){0}), "=S"((int){0}) - : "D"(vmx), "S"(&vcpu->arch.regs), -#else - "=a"((int){0}), "=d"((int){0}) - : "a"(vmx), "d"(&vcpu->arch.regs), -#endif - "b"(vmx->loaded_vmcs->launched) - : "cc", "memory" -#ifdef CONFIG_X86_64 - , "rax", "rcx", "rdx" - , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" -#else - , "ecx", "edi", "esi" -#endif - ); - - vcpu->arch.cr2 = read_cr2(); -} - static void vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); @@ -6468,7 +6437,33 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) */ x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0); - __vmx_vcpu_run(vcpu, vmx); + if (static_branch_unlikely(&vmx_l1d_should_flush)) + vmx_l1d_flush(vcpu); + + if (vcpu->arch.cr2 != read_cr2()) + write_cr2(vcpu->arch.cr2); + + asm( + "call ____vmx_vcpu_run \n\t" + : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), +#ifdef CONFIG_X86_64 + "=D"((int){0}), "=S"((int){0}) + : "D"(vmx), "S"(&vcpu->arch.regs), +#else + "=a"((int){0}), "=d"((int){0}) + : "a"(vmx), "d"(&vcpu->arch.regs), +#endif + "b"(vmx->loaded_vmcs->launched) + : "cc", "memory" +#ifdef CONFIG_X86_64 + , "rax", "rcx", "rdx" + , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" +#else + , "ecx", "edi", "esi" +#endif + ); + + vcpu->arch.cr2 = read_cr2(); /* * We do not use IBRS in the kernel. 
If this vCPU has used the From patchwork Fri Jan 25 15:41:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781577 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5A3951515 for ; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 49BEA2F750 for ; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3E6662FA05; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DA4792F750 for ; Fri, 25 Jan 2019 15:42:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729079AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728860AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877894" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 27/33] KVM: VMX: Rename ____vmx_vcpu_run() to __vmx_vcpu_run() Date: Fri, 25 Jan 2019 07:41:14 -0800 Message-Id: <20190125154120.19385-28-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...now that the name is no longer usurped by a defunct helper function. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 6 +++--- arch/x86/kvm/vmx/vmx.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index db223cfe9812..835bb452f926 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -84,7 +84,7 @@ ENTRY(vmx_vmexit) ENDPROC(vmx_vmexit) /** - * ____vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode + * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * * @regs: unsigned long * (to guest registers) * %RBX: VMCS launched status (non-zero indicates already launched) @@ -92,7 +92,7 @@ ENDPROC(vmx_vmexit) * Returns: * %RBX is 0 on VM-Exit, 1 on VM-Fail */ -ENTRY(____vmx_vcpu_run) +ENTRY(__vmx_vcpu_run) push %_ASM_BP mov %_ASM_SP, %_ASM_BP @@ -201,4 +201,4 @@ ENTRY(____vmx_vcpu_run) /* VM-Fail. 
Out-of-line to avoid a taken Jcc after VM-Exit. */ 2: mov $1, %ebx jmp 1b -ENDPROC(____vmx_vcpu_run) +ENDPROC(__vmx_vcpu_run) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 3adfe7ab7438..0ddd08f9337c 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6444,7 +6444,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) write_cr2(vcpu->arch.cr2); asm( - "call ____vmx_vcpu_run \n\t" + "call __vmx_vcpu_run \n\t" : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), #ifdef CONFIG_X86_64 "=D"((int){0}), "=S"((int){0}) From patchwork Fri Jan 25 15:41:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781583 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6AC741515 for ; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 592722F750 for ; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4DD912FA05; Fri, 25 Jan 2019 15:42:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D05272F750 for ; Fri, 25 Jan 2019 15:42:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726917AbfAYPmI (ORCPT ); Fri, 25 Jan 2019 10:42:08 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728910AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877895" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 28/33] KVM: VMX: Use RAX as the scratch register during vCPU-run Date: Fri, 25 Jan 2019 07:41:15 -0800 Message-Id: <20190125154120.19385-29-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to prepare for making the sub-routine callable from C code. That means returning the result in RAX. Since RAX will be used to return the result, use it as the scratch register as well to make the code readable and to document that the scratch register is more or less arbitrary. 
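For reference, the C-side shape this is building toward looks roughly like the below (a sketch; the actual declaration and call site land later in the series once the sub-routine is directly callable from C):

bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);

	/* VM-Fail status comes back in RAX, hence RAX doubling as scratch. */
	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
				   vmx->loaded_vmcs->launched);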
Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 76 +++++++++++++++++++------------------- 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 835bb452f926..8b3d9e071095 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -106,31 +106,31 @@ ENTRY(__vmx_vcpu_run) lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2 call vmx_update_host_rsp - /* Load @regs to RCX. */ - mov (%_ASM_SP), %_ASM_CX + /* Load @regs to RAX. */ + mov (%_ASM_SP), %_ASM_AX /* Check if vmlaunch or vmresume is needed */ cmpb $0, %bl /* Load guest registers. Don't clobber flags. */ - mov VCPU_RAX(%_ASM_CX), %_ASM_AX - mov VCPU_RBX(%_ASM_CX), %_ASM_BX - mov VCPU_RDX(%_ASM_CX), %_ASM_DX - mov VCPU_RSI(%_ASM_CX), %_ASM_SI - mov VCPU_RDI(%_ASM_CX), %_ASM_DI - mov VCPU_RBP(%_ASM_CX), %_ASM_BP + mov VCPU_RBX(%_ASM_AX), %_ASM_BX + mov VCPU_RCX(%_ASM_AX), %_ASM_CX + mov VCPU_RDX(%_ASM_AX), %_ASM_DX + mov VCPU_RSI(%_ASM_AX), %_ASM_SI + mov VCPU_RDI(%_ASM_AX), %_ASM_DI + mov VCPU_RBP(%_ASM_AX), %_ASM_BP #ifdef CONFIG_X86_64 - mov VCPU_R8 (%_ASM_CX), %r8 - mov VCPU_R9 (%_ASM_CX), %r9 - mov VCPU_R10(%_ASM_CX), %r10 - mov VCPU_R11(%_ASM_CX), %r11 - mov VCPU_R12(%_ASM_CX), %r12 - mov VCPU_R13(%_ASM_CX), %r13 - mov VCPU_R14(%_ASM_CX), %r14 - mov VCPU_R15(%_ASM_CX), %r15 + mov VCPU_R8 (%_ASM_AX), %r8 + mov VCPU_R9 (%_ASM_AX), %r9 + mov VCPU_R10(%_ASM_AX), %r10 + mov VCPU_R11(%_ASM_AX), %r11 + mov VCPU_R12(%_ASM_AX), %r12 + mov VCPU_R13(%_ASM_AX), %r13 + mov VCPU_R14(%_ASM_AX), %r14 + mov VCPU_R15(%_ASM_AX), %r15 #endif - /* Load guest RCX. This kills the vmx_vcpu pointer! */ - mov VCPU_RCX(%_ASM_CX), %_ASM_CX + /* Load guest RAX. This kills the vmx_vcpu pointer! */ + mov VCPU_RAX(%_ASM_AX), %_ASM_AX /* Enter guest mode */ call vmx_vmenter @@ -138,29 +138,29 @@ ENTRY(__vmx_vcpu_run) /* Jump on VM-Fail. */ jbe 2f - /* Temporarily save guest's RCX. */ - push %_ASM_CX + /* Temporarily save guest's RAX. */ + push %_ASM_AX - /* Reload @regs to RCX. */ - mov WORD_SIZE(%_ASM_SP), %_ASM_CX + /* Reload @regs to RAX. */ + mov WORD_SIZE(%_ASM_SP), %_ASM_AX - /* Save all guest registers, including RCX from the stack */ - mov %_ASM_AX, VCPU_RAX(%_ASM_CX) - mov %_ASM_BX, VCPU_RBX(%_ASM_CX) - __ASM_SIZE(pop) VCPU_RCX(%_ASM_CX) - mov %_ASM_DX, VCPU_RDX(%_ASM_CX) - mov %_ASM_SI, VCPU_RSI(%_ASM_CX) - mov %_ASM_DI, VCPU_RDI(%_ASM_CX) - mov %_ASM_BP, VCPU_RBP(%_ASM_CX) + /* Save all guest registers, including RAX from the stack */ + __ASM_SIZE(pop) VCPU_RAX(%_ASM_AX) + mov %_ASM_BX, VCPU_RBX(%_ASM_AX) + mov %_ASM_CX, VCPU_RCX(%_ASM_AX) + mov %_ASM_DX, VCPU_RDX(%_ASM_AX) + mov %_ASM_SI, VCPU_RSI(%_ASM_AX) + mov %_ASM_DI, VCPU_RDI(%_ASM_AX) + mov %_ASM_BP, VCPU_RBP(%_ASM_AX) #ifdef CONFIG_X86_64 - mov %r8, VCPU_R8 (%_ASM_CX) - mov %r9, VCPU_R9 (%_ASM_CX) - mov %r10, VCPU_R10(%_ASM_CX) - mov %r11, VCPU_R11(%_ASM_CX) - mov %r12, VCPU_R12(%_ASM_CX) - mov %r13, VCPU_R13(%_ASM_CX) - mov %r14, VCPU_R14(%_ASM_CX) - mov %r15, VCPU_R15(%_ASM_CX) + mov %r8, VCPU_R8 (%_ASM_AX) + mov %r9, VCPU_R9 (%_ASM_AX) + mov %r10, VCPU_R10(%_ASM_AX) + mov %r11, VCPU_R11(%_ASM_AX) + mov %r12, VCPU_R12(%_ASM_AX) + mov %r13, VCPU_R13(%_ASM_AX) + mov %r14, VCPU_R14(%_ASM_AX) + mov %r15, VCPU_R15(%_ASM_AX) #endif /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). 
*/ From patchwork Fri Jan 25 15:41:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781569 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EF4A91515 for ; Fri, 25 Jan 2019 15:41:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DE6D92F750 for ; Fri, 25 Jan 2019 15:41:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D336C2FA05; Fri, 25 Jan 2019 15:41:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7F7682F750 for ; Fri, 25 Jan 2019 15:41:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729098AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729027AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877896" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 29/33] KVM: VMX: Pass @launched to the vCPU-run asm via standard ABI regs Date: Fri, 25 Jan 2019 07:41:16 -0800 Message-Id: <20190125154120.19385-30-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to prepare for making the sub-routine callable from C code. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 5 ++++- arch/x86/kvm/vmx/vmx.c | 13 ++++++------- 2 files changed, 10 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 8b3d9e071095..3299fafbaa9b 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -87,7 +87,7 @@ ENDPROC(vmx_vmexit) * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * * @regs: unsigned long * (to guest registers) - * %RBX: VMCS launched status (non-zero indicates already launched) + * @launched: %true if the VMCS has been launched * * Returns: * %RBX is 0 on VM-Exit, 1 on VM-Fail @@ -102,6 +102,9 @@ ENTRY(__vmx_vcpu_run) */ push %_ASM_ARG2 + /* Copy @launched to BL, _ASM_ARG3 is volatile. */ + mov %_ASM_ARG3B, %bl + /* Adjust RSP to account for the CALL to vmx_vmenter(). 
*/ lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2 call vmx_update_host_rsp diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 0ddd08f9337c..70e9b1820cc9 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6447,19 +6447,18 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) "call __vmx_vcpu_run \n\t" : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), #ifdef CONFIG_X86_64 - "=D"((int){0}), "=S"((int){0}) - : "D"(vmx), "S"(&vcpu->arch.regs), + "=D"((int){0}), "=S"((int){0}), "=d"((int){0}) + : "D"(vmx), "S"(&vcpu->arch.regs), "d"(vmx->loaded_vmcs->launched) #else - "=a"((int){0}), "=d"((int){0}) - : "a"(vmx), "d"(&vcpu->arch.regs), + "=a"((int){0}), "=d"((int){0}), "=c"((int){0}) + : "a"(vmx), "d"(&vcpu->arch.regs), "c"(vmx->loaded_vmcs->launched) #endif - "b"(vmx->loaded_vmcs->launched) : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rcx", "rdx" + , "rax", "rcx" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "ecx", "edi", "esi" + , "edi", "esi" #endif ); From patchwork Fri Jan 25 15:41:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781581 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5D8161515 for ; Fri, 25 Jan 2019 15:42:07 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4D6A02F750 for ; Fri, 25 Jan 2019 15:42:07 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 423242FA05; Fri, 25 Jan 2019 15:42:07 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D2FFF2F750 for ; Fri, 25 Jan 2019 15:42:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729108AbfAYPmF (ORCPT ); Fri, 25 Jan 2019 10:42:05 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729060AbfAYPly (ORCPT ); Fri, 25 Jan 2019 10:41:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877897" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 30/33] KVM: VMX: Return VM-Fail from vCPU-run assembly via standard ABI reg Date: Fri, 25 Jan 2019 07:41:17 -0800 Message-Id: <20190125154120.19385-31-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using 
ClamSMTP ...to prepare for making the assembly sub-routine callable from C code. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 16 ++++++++-------- arch/x86/kvm/vmx/vmx.c | 8 ++++---- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 3299fafbaa9b..246404b38b37 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -90,7 +90,7 @@ ENDPROC(vmx_vmexit) * @launched: %true if the VMCS has been launched * * Returns: - * %RBX is 0 on VM-Exit, 1 on VM-Fail + * 0 on VM-Exit, 1 on VM-Fail */ ENTRY(__vmx_vcpu_run) push %_ASM_BP @@ -166,17 +166,17 @@ ENTRY(__vmx_vcpu_run) mov %r15, VCPU_R15(%_ASM_AX) #endif - /* Clear EBX to indicate VM-Exit (as opposed to VM-Fail). */ - xor %ebx, %ebx + /* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */ + xor %eax, %eax /* - * Clear all general purpose registers except RSP and RBX to prevent + * Clear all general purpose registers except RSP and RAX to prevent * speculative use of the guest's values, even those that are reloaded * via the stack. In theory, an L1 cache miss when restoring registers * could lead to speculative execution with the guest's values. * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially - * free. RSP and RBX are exempt as RSP is restored by hardware during - * VM-Exit and RBX is explicitly loaded with 0 or 1 to "return" VM-Fail. + * free. RSP and RAX are exempt as RSP is restored by hardware during + * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. */ 1: #ifdef CONFIG_X86_64 @@ -189,7 +189,7 @@ ENTRY(__vmx_vcpu_run) xor %r14d, %r14d xor %r15d, %r15d #endif - xor %eax, %eax + xor %ebx, %ebx xor %ecx, %ecx xor %edx, %edx xor %esi, %esi @@ -202,6 +202,6 @@ ENTRY(__vmx_vcpu_run) ret /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. 
*/ -2: mov $1, %ebx +2: mov $1, %eax jmp 1b ENDPROC(__vmx_vcpu_run) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 70e9b1820cc9..73a1e656b123 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6445,20 +6445,20 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) asm( "call __vmx_vcpu_run \n\t" - : ASM_CALL_CONSTRAINT, "=b"(vmx->fail), + : ASM_CALL_CONSTRAINT, "=a"(vmx->fail), #ifdef CONFIG_X86_64 "=D"((int){0}), "=S"((int){0}), "=d"((int){0}) : "D"(vmx), "S"(&vcpu->arch.regs), "d"(vmx->loaded_vmcs->launched) #else - "=a"((int){0}), "=d"((int){0}), "=c"((int){0}) + "=d"((int){0}), "=c"((int){0}) : "a"(vmx), "d"(&vcpu->arch.regs), "c"(vmx->loaded_vmcs->launched) #endif : "cc", "memory" #ifdef CONFIG_X86_64 - , "rax", "rcx" + , "rbx", "rcx" , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" #else - , "edi", "esi" + , "ebx", "edi", "esi" #endif ); From patchwork Fri Jan 25 15:41:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781573 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 824C8139A for ; Fri, 25 Jan 2019 15:41:59 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71D012F750 for ; Fri, 25 Jan 2019 15:41:59 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 63C3A2FA05; Fri, 25 Jan 2019 15:41:59 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0950C2F750 for ; Fri, 25 Jan 2019 15:41:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729119AbfAYPl4 (ORCPT ); Fri, 25 Jan 2019 10:41:56 -0500 Received: from mga09.intel.com ([134.134.136.24]:54656 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729059AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877898" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 31/33] KVM: VMX: Preserve callee-save registers in vCPU-run asm sub-routine Date: Fri, 25 Jan 2019 07:41:18 -0800 Message-Id: <20190125154120.19385-32-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...to make it callable from C code. 
Note that because KVM chooses to be ultra paranoid about guest register values, all callee-save registers are still cleared after VM-Exit even though the host's values are now reloaded from the stack. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 21 +++++++++++++++++++++ arch/x86/kvm/vmx/vmx.c | 5 +---- 2 files changed, 22 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 246404b38b37..900be1d542c8 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -95,6 +95,16 @@ ENDPROC(vmx_vmexit) ENTRY(__vmx_vcpu_run) push %_ASM_BP mov %_ASM_SP, %_ASM_BP +#ifdef CONFIG_X86_64 + push %r15 + push %r14 + push %r13 + push %r12 +#else + push %edi + push %esi +#endif + push %_ASM_BX /* * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and @@ -198,6 +208,17 @@ ENTRY(__vmx_vcpu_run) /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP + pop %_ASM_BX + +#ifdef CONFIG_X86_64 + pop %r12 + pop %r13 + pop %r14 + pop %r15 +#else + pop %esi + pop %edi +#endif pop %_ASM_BP ret diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 73a1e656b123..b9064b646a85 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6455,10 +6455,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) #endif : "cc", "memory" #ifdef CONFIG_X86_64 - , "rbx", "rcx" - , "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15" -#else - , "ebx", "edi", "esi" + , "rcx", "r8", "r9", "r10", "r11" #endif ); From patchwork Fri Jan 25 15:41:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781579 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CCF2E139A for ; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BC7922F750 for ; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B10312FA05; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 634BF2F750 for ; Fri, 25 Jan 2019 15:42:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728937AbfAYPmC (ORCPT ); Fri, 25 Jan 2019 10:42:02 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729081AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877899" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 
32/33] KVM: VMX: Call vCPU-run asm sub-routine from C and remove clobbering Date: Fri, 25 Jan 2019 07:41:19 -0800 Message-Id: <20190125154120.19385-33-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ...now that the sub-routine follows standard calling conventions. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 19 ++++--------------- 1 file changed, 4 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index b9064b646a85..fa75fb8bdaa0 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6370,6 +6370,8 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) } } +bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched); + static void vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); @@ -6443,21 +6445,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) if (vcpu->arch.cr2 != read_cr2()) write_cr2(vcpu->arch.cr2); - asm( - "call __vmx_vcpu_run \n\t" - : ASM_CALL_CONSTRAINT, "=a"(vmx->fail), -#ifdef CONFIG_X86_64 - "=D"((int){0}), "=S"((int){0}), "=d"((int){0}) - : "D"(vmx), "S"(&vcpu->arch.regs), "d"(vmx->loaded_vmcs->launched) -#else - "=d"((int){0}), "=c"((int){0}) - : "a"(vmx), "d"(&vcpu->arch.regs), "c"(vmx->loaded_vmcs->launched) -#endif - : "cc", "memory" -#ifdef CONFIG_X86_64 - , "rcx", "r8", "r9", "r10", "r11" -#endif - ); + vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, + vmx->loaded_vmcs->launched); vcpu->arch.cr2 = read_cr2(); From patchwork Fri Jan 25 15:41:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 10781571 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 13FA21515 for ; Fri, 25 Jan 2019 15:41:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 037232F750 for ; Fri, 25 Jan 2019 15:41:58 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EC60A2FA05; Fri, 25 Jan 2019 15:41:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A00E52F750 for ; Fri, 25 Jan 2019 15:41:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729135AbfAYPl4 (ORCPT ); Fri, 25 Jan 2019 10:41:56 -0500 Received: from mga09.intel.com ([134.134.136.24]:54653 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729091AbfAYPlz (ORCPT ); Fri, 25 Jan 2019 10:41:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 25 Jan 2019 07:41:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: 
E=Sophos;i="5.56,521,1539673200"; d="scan'208";a="128877900" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.14]) by orsmga002.jf.intel.com with ESMTP; 25 Jan 2019 07:41:47 -0800 From: Sean Christopherson To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, Jim Mattson , Konrad Rzeszutek Wilk Subject: [PATCH v3 33/33] KVM: VMX: Reorder clearing of registers in the vCPU-run assembly flow Date: Fri, 25 Jan 2019 07:41:20 -0800 Message-Id: <20190125154120.19385-34-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com> References: <20190125154120.19385-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Move the clearing of the common registers (not 64-bit-only) to the start of the flow that clears volatile registers holding guest state. This is purely a cosmetic change so that the label doesn't point at a blank line and a #define. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmenter.S | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 900be1d542c8..79e5a50edd10 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -188,7 +188,12 @@ ENTRY(__vmx_vcpu_run) * free. RSP and RAX are exempt as RSP is restored by hardware during * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. */ -1: +1: xor %ebx, %ebx + xor %ecx, %ecx + xor %edx, %edx + xor %esi, %esi + xor %edi, %edi + xor %ebp, %ebp #ifdef CONFIG_X86_64 xor %r8d, %r8d xor %r9d, %r9d @@ -199,12 +204,6 @@ ENTRY(__vmx_vcpu_run) xor %r14d, %r14d xor %r15d, %r15d #endif - xor %ebx, %ebx - xor %ecx, %ecx - xor %edx, %edx - xor %esi, %esi - xor %edi, %edi - xor %ebp, %ebp /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP