From patchwork Mon Jul 23 19:32:46 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10540801
From: Sean Christopherson
To: kvm@vger.kernel.org, pbonzini@redhat.com, rkrcmar@redhat.com
Cc: sean.j.christopherson@intel.com
Subject: [PATCH 07/11] KVM: vmx: compute need to reload FS/GS/LDT on demand
Date: Mon, 23 Jul 2018 12:32:46 -0700
Message-Id: <20180723193250.13555-8-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180723193250.13555-1-sean.j.christopherson@intel.com>
References: <20180723193250.13555-1-sean.j.christopherson@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Remove fs_reload_needed and gs_ldt_reload_needed from host_state and
instead compute whether we need to reload various state at the time we
actually do the reload.  The state that is tracked by the
*_reload_needed variables is not any more volatile than the trackers
themselves.

Signed-off-by: Sean Christopherson
Reviewed-by: Peter Shier
Tested-by: Peter Shier
---
 arch/x86/kvm/vmx.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 5e884ad3ec51..2f070f192906 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -804,8 +804,6 @@ struct vcpu_vmx {
 #ifdef CONFIG_X86_64
                 u16 ds_sel, es_sel;
 #endif
-                int gs_ldt_reload_needed;
-                int fs_reload_needed;
         } host_state;
         struct {
                 int vm86_active;
@@ -2590,7 +2588,6 @@ static void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
          * allow segment selectors with cpl > 0 or ti == 1.
          */
         vmx->host_state.ldt_sel = kvm_read_ldt();
-        vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel;
 
 #ifdef CONFIG_X86_64
         savesegment(ds, vmx->host_state.ds_sel);
@@ -2613,20 +2610,15 @@ static void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 #endif
 
         vmx->host_state.fs_sel = fs_sel;
-        if (!(fs_sel & 7)) {
+        if (!(fs_sel & 7))
                 vmcs_write16(HOST_FS_SELECTOR, fs_sel);
-                vmx->host_state.fs_reload_needed = 0;
-        } else {
+        else
                 vmcs_write16(HOST_FS_SELECTOR, 0);
-                vmx->host_state.fs_reload_needed = 1;
-        }
         vmx->host_state.gs_sel = gs_sel;
         if (!(gs_sel & 7))
                 vmcs_write16(HOST_GS_SELECTOR, gs_sel);
-        else {
+        else
                 vmcs_write16(HOST_GS_SELECTOR, 0);
-                vmx->host_state.gs_ldt_reload_needed = 1;
-        }
 
         vmcs_writel(HOST_FS_BASE, fs_base);
         vmcs_writel(HOST_GS_BASE, gs_base);
@@ -2653,7 +2645,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
         if (is_long_mode(&vmx->vcpu))
                 rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
 #endif
-        if (vmx->host_state.gs_ldt_reload_needed) {
+        if (vmx->host_state.ldt_sel || (vmx->host_state.gs_sel & 7)) {
                 kvm_load_ldt(vmx->host_state.ldt_sel);
 #ifdef CONFIG_X86_64
                 load_gs_index(vmx->host_state.gs_sel);
@@ -2661,7 +2653,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
                 loadsegment(gs, vmx->host_state.gs_sel);
 #endif
         }
-        if (vmx->host_state.fs_reload_needed)
+        if (vmx->host_state.fs_sel & 7)
                 loadsegment(fs, vmx->host_state.fs_sel);
 #ifdef CONFIG_X86_64
         if (unlikely(vmx->host_state.ds_sel | vmx->host_state.es_sel)) {
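
For readers new to this code, the "& 7" tests above encode the hardware rule
the quoted comment alludes to: bits 1:0 of a segment selector hold the RPL and
bit 2 is the TI (table indicator) flag, and the VMCS host selector fields do
not allow a selector with RPL > 0 or TI == 1. Such a selector cannot be
restored by the CPU on VM-exit and must be reloaded by software, which is the
condition vmx_prepare_switch_to_host() now derives on demand from the saved
fs_sel/gs_sel/ldt_sel instead of caching it in fs_reload_needed and
gs_ldt_reload_needed. The standalone sketch below is purely illustrative and
not part of the patch; the mask names and helper are made up:

/* Illustrative only: spells out the "sel & 7" test used in the patch. */
#include <stdbool.h>
#include <stdint.h>

#define SEL_RPL_MASK 0x3u   /* bits 1:0 - requested privilege level */
#define SEL_TI_MASK  0x4u   /* bit 2   - table indicator, 1 = LDT   */

/*
 * A host selector can be restored by hardware on VM-exit only if it
 * refers to the GDT (TI == 0) with RPL == 0; otherwise software must
 * reload it after the exit, e.g. via loadsegment()/load_gs_index().
 */
static inline bool host_sel_needs_manual_reload(uint16_t sel)
{
        return (sel & (SEL_TI_MASK | SEL_RPL_MASK)) != 0;   /* i.e. sel & 7 */
}

The LDT half of the old gs_ldt_reload_needed flag follows the same on-demand
logic: a nonzero ldt_sel means the host was actually using an LDT, which
VM-exit does not restore (the VMCS host-state area has no LDTR field), so
kvm_load_ldt() is needed exactly when host_state.ldt_sel != 0.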