From patchwork Fri Aug  9 15:59:50 2019
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11087181
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, virtualization@lists.linux-foundation.org,
    Paolo Bonzini, Radim Krčmář, Konrad Rzeszutek Wilk, Tamas K Lengyel,
    Mathieu Tarral, Samuel Laurén, Patrick Colp, Jan Kiszka,
    Stefan Hajnoczi, Weijiang Yang, Yu C Zhang, Mihai Donțu,
    Adalbert Lazăr, He Chen, Zhang Yi
Subject: [RFC PATCH v6 35/92] KVM: VMX: Add control flags for SPP enabling
Date: Fri, 9 Aug 2019 18:59:50 +0300
Message-Id: <20190809160047.8319-36-alazar@bitdefender.com>
In-Reply-To: <20190809160047.8319-1-alazar@bitdefender.com>
References: <20190809160047.8319-1-alazar@bitdefender.com>
X-Mailing-List: kvm@vger.kernel.org

From: Yang Weijiang

Check the SPP capability in MSR_IA32_VMX_PROCBASED_CTLS2: bit 23 of its
high dword (the allowed-1 settings) indicates SPP support. Set the SPP
bit in the CPU capabilities bitmap if it is supported.
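[Editor's illustration, not part of the patch: a minimal user-space sketch of
the same capability check, reading IA32_VMX_PROCBASED_CTLS2 (MSR 0x48b)
through the msr driver (modprobe msr, run as root). The SPP_ALLOWED_1 macro
name and the fixed /dev/cpu/0/msr path are assumptions made for this example.]

/*
 * Probe SPP support from user space: bit 23 of the high dword of
 * IA32_VMX_PROCBASED_CTLS2 is the allowed-1 setting of the
 * "sub-page write permissions for EPT" secondary execution control.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_VMX_PROCBASED_CTLS2	0x48b
#define SPP_ALLOWED_1			(1ULL << (32 + 23))	/* example name */

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	/* The msr driver uses the MSR index as the file offset. */
	if (pread(fd, &val, sizeof(val), MSR_IA32_VMX_PROCBASED_CTLS2) != sizeof(val)) {
		perror("pread");
		close(fd);
		return 1;
	}
	close(fd);

	printf("SPP %ssupported by this CPU\n",
	       (val & SPP_ALLOWED_1) ? "" : "not ");
	return 0;
}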
Co-developed-by: He Chen
Signed-off-by: He Chen
Co-developed-by: Zhang Yi
Signed-off-by: Zhang Yi
Co-developed-by: Yang Weijiang
Signed-off-by: Yang Weijiang
Message-Id: <20190717133751.12910-3-weijiang.yang@intel.com>
Signed-off-by: Adalbert Lazăr
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/vmx.h         |  1 +
 arch/x86/kernel/cpu/intel.c        |  4 ++++
 arch/x86/kvm/vmx/capabilities.h    |  5 +++++
 arch/x86/kvm/vmx/vmx.c             | 10 ++++++++++
 5 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 6d6122524711..183b4fd864c6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -228,6 +228,7 @@
 #define X86_FEATURE_FLEXPRIORITY	( 8*32+ 2) /* Intel FlexPriority */
 #define X86_FEATURE_EPT			( 8*32+ 3) /* Intel Extended Page Table */
 #define X86_FEATURE_VPID		( 8*32+ 4) /* Intel Virtual Processor ID */
+#define X86_FEATURE_SPP			( 8*32+ 5) /* Intel EPT-based Sub-Page Write Protection */
 
 #define X86_FEATURE_VMMCALL		( 8*32+15) /* Prefer VMMCALL to VMCALL */
 #define X86_FEATURE_XENPV		( 8*32+16) /* "" Xen paravirtual guest */
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 4e4133e86484..a2c9e18e0ad7 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -81,6 +81,7 @@
 #define SECONDARY_EXEC_XSAVES			0x00100000
 #define SECONDARY_EXEC_PT_USE_GPA		0x01000000
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	0x00400000
+#define SECONDARY_EXEC_ENABLE_SPP		0x00800000
 #define SECONDARY_EXEC_TSC_SCALING		0x02000000
 
 #define PIN_BASED_EXT_INTR_MASK			0x00000001
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index fc3c07fe7df5..b55156ce16da 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -476,6 +476,7 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 #define X86_VMX_FEATURE_PROC_CTLS2_EPT		0x00000002
 #define X86_VMX_FEATURE_PROC_CTLS2_VPID		0x00000020
 #define x86_VMX_FEATURE_EPT_CAP_AD		0x00200000
+#define X86_VMX_FEATURE_PROC_CTLS2_SPP		0x00800000
 
 	u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
 	u32 msr_vpid_cap, msr_ept_cap;
@@ -486,6 +487,7 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 	clear_cpu_cap(c, X86_FEATURE_EPT);
 	clear_cpu_cap(c, X86_FEATURE_VPID);
 	clear_cpu_cap(c, X86_FEATURE_EPT_AD);
+	clear_cpu_cap(c, X86_FEATURE_SPP);
 
 	rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
 	msr_ctl = vmx_msr_high | vmx_msr_low;
@@ -509,6 +511,8 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 		}
 		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
 			set_cpu_cap(c, X86_FEATURE_VPID);
+		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_SPP)
+			set_cpu_cap(c, X86_FEATURE_SPP);
 	}
 }
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 854e144131c6..8221ecbf6516 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -239,6 +239,11 @@ static inline bool cpu_has_vmx_pml(void)
 	return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_ENABLE_PML;
 }
 
+static inline bool cpu_has_vmx_ept_spp(void)
+{
+	return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_ENABLE_SPP;
+}
+
 static inline bool vmx_xsaves_supported(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 97cfd5a316f3..f94e3defd9cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -114,6 +114,8 @@ static u64 __read_mostly host_xss;
 bool __read_mostly enable_pml = 1;
 module_param_named(pml, enable_pml, bool, S_IRUGO);
 
+static bool __read_mostly spp_supported = 0;
+
 #define MSR_BITMAP_MODE_X2APIC		1
 #define MSR_BITMAP_MODE_X2APIC_APICV	2
 
@@ -2247,6 +2249,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			SECONDARY_EXEC_RDSEED_EXITING |
 			SECONDARY_EXEC_RDRAND_EXITING |
 			SECONDARY_EXEC_ENABLE_PML |
+			SECONDARY_EXEC_ENABLE_SPP |
 			SECONDARY_EXEC_TSC_SCALING |
 			SECONDARY_EXEC_PT_USE_GPA |
 			SECONDARY_EXEC_PT_CONCEAL_VMX |
@@ -3901,6 +3904,9 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!enable_pml)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
+	if (!spp_supported)
+		exec_control &= ~SECONDARY_EXEC_ENABLE_SPP;
+
 	if (vmx_xsaves_supported()) {
 		/* Exposing XSAVES only when XSAVE is exposed */
 		bool xsaves_enabled =
@@ -7570,6 +7576,10 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_flexpriority())
 		flexpriority_enabled = 0;
 
+	if (cpu_has_vmx_ept_spp() && enable_ept &&
+	    boot_cpu_has(X86_FEATURE_SPP))
+		spp_supported = 1;
+
 	if (!cpu_has_virtual_nmis())
 		enable_vnmi = 0;