From patchwork Fri Jul 2 22:04:08 2021
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 12356597
From: isaku.yamahata@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, erdemaktas@google.com, Connor Kuehl,
    Sean Christopherson, x86@kernel.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com
Subject: [RFC PATCH v2 02/69] KVM: X86: move kvm_cpu_vmxon() from vmx.c to virtext.h
Date: Fri, 2 Jul 2021 15:04:08 -0700
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata

This is a preparatory cleanup for TDX support. Move kvm_cpu_vmxon()
out of vmx.c into virtext.h, renaming it to cpu_vmxon(), so that it
can also be used outside of KVM: SEAMLDR, which loads the TDX module
during the early kernel boot phase, needs it. As a bonus, this also
increases the symmetry with cpu_vmxoff().

No functional change is intended.
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/virtext.h | 25 +++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c         | 22 +---------------------
 2 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 8757078d4442..9234b85dac24 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -30,6 +30,31 @@ static inline int cpu_has_vmx(void)
 }
 
+/**
+ * cpu_vmxon() - Enable VMX on the current CPU
+ *
+ * Set CR4.VMXE and enable VMX
+ */
+static inline int cpu_vmxon(u64 vmxon_pointer)
+{
+	u64 msr;
+
+	cr4_set_bits(X86_CR4_VMXE);
+
+	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
+			  _ASM_EXTABLE(1b, %l[fault])
+			  : : [vmxon_pointer] "m"(vmxon_pointer)
+			  : : fault);
+	return 0;
+
+fault:
+	WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
+		  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
+	cr4_clear_bits(X86_CR4_VMXE);
+
+	return -EFAULT;
+}
+
 /**
  * cpu_vmxoff() - Disable VMX on the current CPU
  *
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c2a779b688e6..d73ba7a6ff8d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2376,26 +2376,6 @@ static __init int vmx_disabled_by_bios(void)
 	       !boot_cpu_has(X86_FEATURE_VMX);
 }
 
-static int kvm_cpu_vmxon(u64 vmxon_pointer)
-{
-	u64 msr;
-
-	cr4_set_bits(X86_CR4_VMXE);
-
-	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
-			  _ASM_EXTABLE(1b, %l[fault])
-			  : : [vmxon_pointer] "m"(vmxon_pointer)
-			  : : fault);
-	return 0;
-
-fault:
-	WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
-		  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
-	cr4_clear_bits(X86_CR4_VMXE);
-
-	return -EFAULT;
-}
-
 static int hardware_enable(void)
 {
 	int cpu = raw_smp_processor_id();
@@ -2415,7 +2395,7 @@ static int hardware_enable(void)
 
 	intel_pt_handle_vmx(1);
 
-	r = kvm_cpu_vmxon(phys_addr);
+	r = cpu_vmxon(phys_addr);
 	if (r) {
 		intel_pt_handle_vmx(0);
 		return r;
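
For reference, a minimal usage sketch of the relocated helper (not part
of this patch): a hypothetical out-of-KVM caller entering and leaving
VMX operation on the current CPU, roughly mirroring what KVM's
hardware_enable() does. The function name example_enable_vmx() and the
vmxon_region_pa parameter are illustrative only; the sketch assumes a
page-aligned VMXON region whose VMCS revision ID has already been
written, which the caller must arrange.

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/virtext.h>

/* Hypothetical caller: enter VMX operation, do work, then leave it. */
static int example_enable_vmx(u64 vmxon_region_pa)
{
	int r;

	/* CPUID must advertise VMX before VMXON can be attempted. */
	if (!cpu_has_vmx())
		return -EOPNOTSUPP;

	/* Sets CR4.VMXE and executes VMXON; returns -EFAULT on fault. */
	r = cpu_vmxon(vmxon_region_pa);
	if (r)
		return r;

	/* ... in VMX root operation: e.g. a loader could run SEAMLDR ... */

	/* Executes VMXOFF and clears CR4.VMXE. */
	cpu_vmxoff();
	return 0;
}

Because cpu_vmxon() clears CR4.VMXE again on a faulting VMXON, the
caller does not need its own unwind path for the enable step.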