From patchwork Fri Mar 11 10:26:42 2022
X-Patchwork-Submitter: "Duan, Zhenzhong"
X-Patchwork-Id: 12777758
From: Zhenzhong Duan
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Subject: [PATCH v2 1/2] KVM: x86: Remove unnecessary assignment to uret->data
Date: Fri, 11 Mar 2022 18:26:42 +0800
Message-Id: <20220311102643.807507-2-zhenzhong.duan@intel.com>
In-Reply-To: <20220311102643.807507-1-zhenzhong.duan@intel.com>
References: <20220311102643.807507-1-zhenzhong.duan@intel.com>
X-Mailing-List: kvm@vger.kernel.org

update_transition_efer() doesn't unconditionally set uret->data, which on the surface makes this look suspect, but it is safe: uret->data is consumed if and only if uret->load_into_hardware is true, and load_into_hardware is (a) set to false whenever uret->data isn't updated and (b) only set to true after uret->data is guaranteed to have been updated.

Drop the local "msr" and use "vmx" directly instead of redoing to_vmx().

Signed-off-by: Zhenzhong Duan
---
 arch/x86/kvm/vmx/vmx.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b730d799c26e..cadb3769031c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2871,21 +2871,17 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 int vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct vmx_uret_msr *msr = vmx_find_uret_msr(vmx, MSR_EFER);
 
 	/* Nothing to do if hardware doesn't support EFER. */
-	if (!msr)
+	if (!vmx_find_uret_msr(vmx, MSR_EFER))
 		return 0;
 
 	vcpu->arch.efer = efer;
-	if (efer & EFER_LMA) {
-		vm_entry_controls_setbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
-		msr->data = efer;
-	} else {
-		vm_entry_controls_clearbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
+	if (efer & EFER_LMA)
+		vm_entry_controls_setbit(vmx, VM_ENTRY_IA32E_MODE);
+	else
+		vm_entry_controls_clearbit(vmx, VM_ENTRY_IA32E_MODE);
 
-		msr->data = efer & ~EFER_LME;
-	}
 	vmx_setup_uret_msrs(vmx);
 	return 0;
 }

From patchwork Fri Mar 11 10:26:43 2022
X-Patchwork-Submitter: "Duan, Zhenzhong"
X-Patchwork-Id: 12777759
From: Zhenzhong Duan
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Subject: [PATCH v2 2/2] KVM: x86: Remove redundant vm_entry_controls_clearbit() call
Date: Fri, 11 Mar 2022 18:26:43 +0800
Message-Id: <20220311102643.807507-3-zhenzhong.duan@intel.com>
In-Reply-To: <20220311102643.807507-1-zhenzhong.duan@intel.com>
References: <20220311102643.807507-1-zhenzhong.duan@intel.com>
X-Mailing-List: kvm@vger.kernel.org

When emulating an exit from long mode, EFER_LMA is cleared, which leads to an emulated EFER write that clears the VM_ENTRY_IA32E_MODE control bit as required by the SDM. There is therefore no need to clear VM_ENTRY_IA32E_MODE again in exit_lmode() explicitly. In fact, thanks to the shadow controls mechanism, this change doesn't eliminate any vmread or vmwrite. If hardware doesn't support EFER, long mode isn't supported either, so this change breaks nothing.
Signed-off-by: Zhenzhong Duan
---
 arch/x86/kvm/vmx/vmx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cadb3769031c..70717f56a2a2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2907,7 +2907,6 @@ static void enter_lmode(struct kvm_vcpu *vcpu)
 
 static void exit_lmode(struct kvm_vcpu *vcpu)
 {
-	vm_entry_controls_clearbit(to_vmx(vcpu), VM_ENTRY_IA32E_MODE);
 	vmx_set_efer(vcpu, vcpu->arch.efer & ~EFER_LMA);
 }