From patchwork Tue Jul 31 15:32:10 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson <sean.j.christopherson@intel.com>
X-Patchwork-Id: 10550931
From: Sean Christopherson <sean.j.christopherson@intel.com>
To: kvm@vger.kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com
Cc: jmattson@google.com, Sean Christopherson <sean.j.christopherson@intel.com>
Subject: [PATCH 11/16] KVM: nVMX: do early preparation of vmcs02 before check_vmentry_postreqs()
Date: Tue, 31 Jul 2018 08:32:10 -0700
Message-Id: <20180731153215.31794-12-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180731153215.31794-1-sean.j.christopherson@intel.com>
References: <20180731153215.31794-1-sean.j.christopherson@intel.com>

In anticipation of using vmcs02 to do early consistency checks, move
the early preparation of vmcs02 so that it is done before the postreq
checks.  The downside of this approach is that we'll unnecessarily load
vmcs02 when check_vmentry_postreqs() fails, but that is essentially our
slow path anyway (not actually slow, but it's the path we don't really
care about optimizing).
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6e7ee3637f12..d5328518b1c6 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11808,6 +11808,15 @@ static int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu)
 	u32 msr_entry_idx;
 	u32 exit_qual;
 
+	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
+
+	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+
+	prepare_vmcs02_early(vmx, vmcs12);
+
+	nested_get_vmcs12_pages(vcpu, vmcs12);
+
 	if (!is_vmentry)
 		goto enter_non_root_mode;
 
@@ -11817,22 +11826,14 @@
 enter_non_root_mode:
 	enter_guest_mode(vcpu);
 
-	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
-		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
-
-	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
 	vmx_segment_cache_clear(vmx);
 
 	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
 		vcpu->arch.tsc_offset += vmcs12->tsc_offset;
 
-	prepare_vmcs02_early(vmx, vmcs12);
-
 	if (prepare_vmcs02(vcpu, vmcs12, &exit_qual))
 		goto fail;
 
-	nested_get_vmcs12_pages(vcpu, vmcs12);
-
 	exit_reason = EXIT_REASON_MSR_LOAD_FAIL;
 	msr_entry_idx = nested_vmx_load_msr(vcpu,
 					    vmcs12->vm_entry_msr_load_addr,
@@ -11852,7 +11853,6 @@ static int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu)
 	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
 		vcpu->arch.tsc_offset -= vmcs12->tsc_offset;
 	leave_guest_mode(vcpu);
-	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
 
 	/*
 	 * A consistency check VMExit during L1's VMEnter to L2 is a subset
@@ -11861,6 +11861,7 @@ static int nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu)
 	 * reason and exit-qualification parameters).
 	 */
 consistency_check_vmexit:
+	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
 	vm_entry_controls_reset_shadow(vmx);
 	vm_exit_controls_reset_shadow(vmx);
 	vmx_segment_cache_clear(vmx);
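
For readers following the reordering, below is a rough, standalone C model of
the resulting control flow in nested_vmx_enter_non_root_mode().  It is not
kernel code: every helper is a stub, and the assumption that a failed postreq
check funnels into the consistency-check exit is mine, not something stated in
the diff.  The only point it illustrates is the ordering: vmcs02 is loaded and
partially prepared up front, and the switch back to vmcs01 happens only on the
consistency-check path.

#include <stdbool.h>
#include <stdio.h>

/* Stub helpers; each stands in for the corresponding KVM function. */
static void load_vmcs02(void)          { puts("vmx_switch_vmcs -> vmcs02"); }
static void prepare_vmcs02_early(void) { puts("prepare_vmcs02_early"); }
static void get_vmcs12_pages(void)     { puts("nested_get_vmcs12_pages"); }
static void load_vmcs01(void)          { puts("vmx_switch_vmcs -> vmcs01"); }
static bool postreqs_ok(void)          { return true; } /* pretend checks pass */
static bool prepare_vmcs02_ok(void)    { return true; }

static int enter_non_root_mode(bool is_vmentry)
{
	/* With this patch, early vmcs02 setup happens before any postreq check. */
	load_vmcs02();
	prepare_vmcs02_early();
	get_vmcs12_pages();

	/*
	 * Postreq checks now run with vmcs02 already loaded; if they fail we
	 * loaded vmcs02 for nothing, which the changelog accepts as the slow
	 * path.  (Assumption: the failure path takes the same consistency-check
	 * exit below.)
	 */
	if (is_vmentry && !postreqs_ok())
		goto consistency_check_vmexit;

	if (!prepare_vmcs02_ok())
		goto consistency_check_vmexit;

	return 0;	/* entered L2 */

consistency_check_vmexit:
	/* The switch back to vmcs01 now lives on this path only. */
	load_vmcs01();
	return 1;
}

int main(void)
{
	return enter_non_root_mode(true);
}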