From patchwork Fri Jan 25 15:41:03 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10781589
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Jim Mattson, Konrad Rzeszutek Wilk
Subject: [PATCH v3 16/33] KVM: VMX: Load/save guest CR2 via C code in __vmx_vcpu_run()
Date: Fri, 25 Jan 2019 07:41:03 -0800
Message-Id: <20190125154120.19385-17-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190125154120.19385-1-sean.j.christopherson@intel.com>
References: <20190125154120.19385-1-sean.j.christopherson@intel.com>
X-Mailing-List: kvm@vger.kernel.org

...to eliminate its parameter and struct vcpu_vmx offset definition from
the assembly blob.  Accessing CR2 from C versus assembly doesn't change
the likelihood of taking a page fault (and modifying CR2) while it's
loaded with the guest's value, so long as we don't do anything silly
between accessing CR2 and VM-Enter/VM-Exit.
Signed-off-by: Sean Christopherson
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/vmx/vmx.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fff1e5b5febe..ef6b3fb2e98b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6374,6 +6374,9 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
 
+	if (vcpu->arch.cr2 != read_cr2())
+		write_cr2(vcpu->arch.cr2);
+
 	asm(
 		/* Store host registers */
 		"push %%" _ASM_BP " \n\t"
@@ -6394,13 +6397,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 		"1: \n\t"
 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
 
-		/* Reload cr2 if changed */
-		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
-		"mov %%cr2, %%" _ASM_DX " \n\t"
-		"cmp %%" _ASM_AX ", %%" _ASM_DX " \n\t"
-		"je 3f \n\t"
-		"mov %%" _ASM_AX", %%cr2 \n\t"
-		"3: \n\t"
 		/* Check if vmlaunch or vmresume is needed */
 		"cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
 		/* Load guest registers. Don't clobber flags. */
@@ -6470,9 +6466,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 		"xor %%r14d, %%r14d \n\t"
 		"xor %%r15d, %%r15d \n\t"
 #endif
-		"mov %%cr2, %%" _ASM_AX " \n\t"
-		"mov %%" _ASM_AX ", %c[cr2](%%" _ASM_CX ") \n\t"
-
 		"xor %%eax, %%eax \n\t"
 		"xor %%ebx, %%ebx \n\t"
 		"xor %%ecx, %%ecx \n\t"
@@ -6503,7 +6496,6 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 		[r14]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R14])),
 		[r15]"i"(offsetof(struct vcpu_vmx, vcpu.arch.regs[VCPU_REGS_R15])),
 #endif
-		[cr2]"i"(offsetof(struct vcpu_vmx, vcpu.arch.cr2)),
 		[wordsize]"i"(sizeof(ulong))
 	      : "cc", "memory"
 #ifdef CONFIG_X86_64
@@ -6513,6 +6505,8 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 		, "eax", "ebx", "edx"
 #endif
 	      );
+
+	vcpu->arch.cr2 = read_cr2();
 }
 
 STACK_FRAME_NON_STANDARD(__vmx_vcpu_run);
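
For readers skimming the diff, below is a minimal standalone sketch (not part of
the patch) of the C-side CR2 handling pattern the hunks above introduce.  The
struct demo_vcpu_arch and demo_vcpu_run() names are hypothetical stand-ins for
illustration only; just read_cr2(), write_cr2() and the vcpu->arch.cr2 field
come from the actual kernel code.

#include <asm/special_insns.h>	/* read_cr2(), write_cr2() */

/* Hypothetical stand-in for the relevant part of struct kvm_vcpu_arch. */
struct demo_vcpu_arch {
	unsigned long cr2;
};

static void demo_vcpu_run(struct demo_vcpu_arch *arch)
{
	/*
	 * Load the guest's CR2 only if it differs from the current value;
	 * writing CR2 isn't free and its value rarely changes between runs.
	 */
	if (arch->cr2 != read_cr2())
		write_cr2(arch->cr2);

	/*
	 * The inline-asm VM-Enter/VM-Exit sequence goes here.  Nothing in
	 * between may take a page fault, or the guest's CR2 value would be
	 * clobbered by the host fault address.
	 */

	/* Save whatever CR2 value the guest left behind at VM-Exit. */
	arch->cr2 = read_cr2();
}

The placement of the two accesses immediately before and after the asm blob is
what preserves the "don't do anything silly between accessing CR2 and
VM-Enter/VM-Exit" invariant from the commit message.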