From patchwork Tue Nov 19 03:12:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250701 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 73B7513A4 for ; Tue, 19 Nov 2019 03:13:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 53B7822319 for ; Tue, 19 Nov 2019 03:13:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727675AbfKSDNW (ORCPT ); Mon, 18 Nov 2019 22:13:22 -0500 Received: from mga02.intel.com ([134.134.136.20]:21265 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727656AbfKSDNA (ORCPT ); Mon, 18 Nov 2019 22:13:00 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:41 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105701" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:41 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 01/19] x86/msr-index: Clean up bit defines for IA32_FEATURE_CONTROL MSR Date: Mon, 18 Nov 2019 19:12:22 -0800 Message-Id: <20191119031240.7779-2-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org As pointed out by Boris, the defines for bits in IA32_FEATURE_CONTROL are quite a mouthful, especially the VMX bits which must differentiate between enabling VMX inside and outside SMX (TXT) operation. Rename the bit defines to abbreviate FEATURE_CONTROL as FEAT_CTL so that they're a little friendlier on the eyes. Keep the full name for the MSR itself to help even the most obtuse reader decipher the abbreviation, and to match the name used by the Intel SDM. Opportunistically fix a few other annoyances with the defines: - Relocate the bit defines so that they immediately follow the MSR define, e.g. aren't mistaken as belonging to MISC_FEATURE_CONTROL. - Add whitespace around the block of feature control defines to make it clear that FEAT_CTL is indeed short for FEATURE_CONTROL. - Use BIT() instead of manually encoding the bit shift. - Use "VMX" instead of "VMXON" to match the SDM. - Append "_ENABLED" to the LMCE bit to be consistent with the verbiage used for all other feature control bits. (LMCE is an acronym for Local Machine Check Exception, i.e. LMCE_ENABLED is not redundant).
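
To make the rename concrete, a minimal, hypothetical consumer of the new names could look like the sketch below (not part of this patch; it assumes the usual rdmsrl() helper from <asm/msr.h>):

#include <linux/types.h>
#include <asm/msr.h>
#include <asm/msr-index.h>

/*
 * Hypothetical example: VMXON #GPs unless IA32_FEATURE_CONTROL is locked
 * and the enable bit matching the current SMX mode is set. Outside SMX,
 * that means both FEAT_CTL_LOCKED and FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX.
 */
static bool vmx_usable_outside_smx(void)
{
	u64 ctl;

	rdmsrl(MSR_IA32_FEATURE_CONTROL, ctl);
	return (ctl & (FEAT_CTL_LOCKED | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX)) ==
	       (FEAT_CTL_LOCKED | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
}
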
Cc: Borislav Petkov Signed-off-by: Sean Christopherson Acked-by: Jarkko Sakkinen --- arch/x86/include/asm/msr-index.h | 11 ++++++----- arch/x86/kernel/cpu/mce/intel.c | 4 ++-- arch/x86/kvm/vmx/nested.c | 4 ++-- arch/x86/kvm/vmx/vmx.c | 32 ++++++++++++++++---------------- arch/x86/kvm/vmx/vmx.h | 2 +- 5 files changed, 27 insertions(+), 26 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 6a3124664289..4c80d530f751 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -556,7 +556,13 @@ #define MSR_IA32_EBL_CR_POWERON 0x0000002a #define MSR_EBC_FREQUENCY_ID 0x0000002c #define MSR_SMI_COUNT 0x00000034 + #define MSR_IA32_FEATURE_CONTROL 0x0000003a +#define FEAT_CTL_LOCKED BIT(0) +#define FEAT_CTL_VMX_ENABLED_INSIDE_SMX BIT(1) +#define FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX BIT(2) +#define FEAT_CTL_LMCE_ENABLED BIT(20) + #define MSR_IA32_TSC_ADJUST 0x0000003b #define MSR_IA32_BNDCFGS 0x00000d90 @@ -564,11 +570,6 @@ #define MSR_IA32_XSS 0x00000da0 -#define FEATURE_CONTROL_LOCKED (1<<0) -#define FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX (1<<1) -#define FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX (1<<2) -#define FEATURE_CONTROL_LMCE (1<<20) - #define MSR_IA32_APICBASE 0x0000001b #define MSR_IA32_APICBASE_BSP (1<<8) #define MSR_IA32_APICBASE_ENABLE (1<<11) diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index e270d0770134..3e5b29acd301 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -119,8 +119,8 @@ static bool lmce_supported(void) * generate a #GP fault. */ rdmsrl(MSR_IA32_FEATURE_CONTROL, tmp); - if ((tmp & (FEATURE_CONTROL_LOCKED | FEATURE_CONTROL_LMCE)) == - (FEATURE_CONTROL_LOCKED | FEATURE_CONTROL_LMCE)) + if ((tmp & (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED)) == + (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED)) return true; return false; diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 0e7c9301fe86..5737a94a4305 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4434,8 +4434,8 @@ static int handle_vmon(struct kvm_vcpu *vcpu) gpa_t vmptr; uint32_t revision; struct vcpu_vmx *vmx = to_vmx(vcpu); - const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED - | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; + const u64 VMXON_NEEDED_FEATURES = FEAT_CTL_LOCKED + | FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; /* * The Intel VMX Instruction Reference lists a bunch of bits that are diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index b83ff2030adc..a8e2c3b74daa 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1806,7 +1806,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_MCG_EXT_CTL: if (!msr_info->host_initiated && !(vmx->msr_ia32_feature_control & - FEATURE_CONTROL_LMCE)) + FEAT_CTL_LMCE_ENABLED)) return 1; msr_info->data = vcpu->arch.mcg_ext_ctl; break; @@ -2041,7 +2041,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_MCG_EXT_CTL: if ((!msr_info->host_initiated && !(to_vmx(vcpu)->msr_ia32_feature_control & - FEATURE_CONTROL_LMCE)) || + FEAT_CTL_LMCE_ENABLED)) || (data & ~MCG_EXT_CTL_LMCE_EN)) return 1; vcpu->arch.mcg_ext_ctl = data; @@ -2049,7 +2049,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_FEATURE_CONTROL: if (!vmx_feature_control_msr_valid(vcpu, data) || (to_vmx(vcpu)->msr_ia32_feature_control & - FEATURE_CONTROL_LOCKED && !msr_info->host_initiated)) + 
FEAT_CTL_LOCKED && !msr_info->host_initiated)) return 1; vmx->msr_ia32_feature_control = data; if (msr_info->host_initiated && data == 0) @@ -2195,21 +2195,21 @@ static __init int vmx_disabled_by_bios(void) u64 msr; rdmsrl(MSR_IA32_FEATURE_CONTROL, msr); - if (msr & FEATURE_CONTROL_LOCKED) { + if (msr & FEAT_CTL_LOCKED) { /* launched w/ TXT and VMX disabled */ - if (!(msr & FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX) + if (!(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && tboot_enabled()) return 1; /* launched w/o TXT and VMX only enabled w/ TXT */ - if (!(msr & FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX) - && (msr & FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX) + if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) + && (msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && !tboot_enabled()) { printk(KERN_WARNING "kvm: disable TXT in the BIOS or " "activate TXT before enabling KVM\n"); return 1; } /* launched w/o TXT and VMX disabled */ - if (!(msr & FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX) + if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) && !tboot_enabled()) return 1; } @@ -2259,10 +2259,10 @@ static int hardware_enable(void) rdmsrl(MSR_IA32_FEATURE_CONTROL, old); - test_bits = FEATURE_CONTROL_LOCKED; - test_bits |= FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; + test_bits = FEAT_CTL_LOCKED; + test_bits |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; if (tboot_enabled()) - test_bits |= FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX; + test_bits |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX; if ((old & test_bits) != test_bits) { /* enable and lock */ @@ -6793,7 +6793,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id) vmx->nested.posted_intr_nv = -1; vmx->nested.current_vmptr = -1ull; - vmx->msr_ia32_feature_control_valid_bits = FEATURE_CONTROL_LOCKED; + vmx->msr_ia32_feature_control_valid_bits = FEAT_CTL_LOCKED; /* * Enforce invariant: pi_desc.nv is always either POSTED_INTR_VECTOR @@ -7089,10 +7089,10 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu) if (nested_vmx_allowed(vcpu)) to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |= - FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; + FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; else to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &= - ~FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; + ~FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; if (nested_vmx_allowed(vcpu)) { nested_vmx_cr_fixed1_bits_update(vcpu); @@ -7502,10 +7502,10 @@ static void vmx_setup_mce(struct kvm_vcpu *vcpu) { if (vcpu->arch.mcg_cap & MCG_LMCE_P) to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |= - FEATURE_CONTROL_LMCE; + FEAT_CTL_LMCE_ENABLED; else to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &= - ~FEATURE_CONTROL_LMCE; + ~FEAT_CTL_LMCE_ENABLED; } static int vmx_smi_allowed(struct kvm_vcpu *vcpu) diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 5a0f34b1e226..2b138fe7951a 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -276,7 +276,7 @@ struct vcpu_vmx { /* * Only bits masked by msr_ia32_feature_control_valid_bits can be set in - * msr_ia32_feature_control. FEATURE_CONTROL_LOCKED is always included + * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included * in msr_ia32_feature_control_valid_bits. 
*/ u64 msr_ia32_feature_control; From patchwork Tue Nov 19 03:12:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250711 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9B7D914ED for ; Tue, 19 Nov 2019 03:13:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 67F292235E for ; Tue, 19 Nov 2019 03:13:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727781AbfKSDNf (ORCPT ); Mon, 18 Nov 2019 22:13:35 -0500 Received: from mga02.intel.com ([134.134.136.20]:21265 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727621AbfKSDM6 (ORCPT ); Mon, 18 Nov 2019 22:12:58 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:41 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105705" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:41 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 02/19] selftests: kvm: Replace manual MSR defs with common msr-index.h Date: Mon, 18 Nov 2019 19:12:23 -0800 Message-Id: <20191119031240.7779-3-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org The kernel's version of msr-index.h was pulled wholesale into tools by commit 444e2ff34df8 ("tools arch x86: Grab a copy of the file containing the MSR numbers"), use the common msr-index.h instead of manually redefining everything in a KVM-only header. Note, a few MSR related definitions remain in processor.h because they are not covered by msr-index.h, including the awesomely named APIC_BASE_MSR, which refers to starting index of the x2APIC MSRs, not the actual MSR_IA32_APICBASE, which *is* defined by msr-index.h. 
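
To make the APIC_BASE_MSR vs. MSR_IA32_APICBASE distinction concrete, here is a small sketch (hypothetical helper, not from the patch) assuming the selftests' rdmsr() helper and APIC_BASE_MSR being 0x800, the first MSR of the x2APIC register block per the SDM:

#include <stdint.h>
#include "processor.h"	/* selftest header: rdmsr(), APIC_BASE_MSR */

/*
 * MSR_IA32_APICBASE (0x1b) is the APIC base-address MSR and comes from
 * msr-index.h; the x2APIC registers live at MSR APIC_BASE_MSR plus the
 * xAPIC MMIO offset shifted down by 4.
 */
static inline uint64_t x2apic_read(uint32_t xapic_mmio_offset)
{
	return rdmsr(APIC_BASE_MSR + (xapic_mmio_offset >> 4));
}
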
Signed-off-by: Sean Christopherson --- tools/testing/selftests/kvm/Makefile | 4 +- .../selftests/kvm/include/x86_64/processor.h | 726 +----------------- 2 files changed, 6 insertions(+), 724 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c5ec868fa1e5..8cda0205ebbb 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -44,9 +44,11 @@ LIBKVM += $(LIBKVM_$(UNAME_M)) INSTALL_HDR_PATH = $(top_srcdir)/usr LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/ LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include +LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \ - -I$(LINUX_HDR_PATH) -Iinclude -I$( #include +#include + #define X86_EFLAGS_FIXED (1u << 1) #define X86_CR4_VME (1ul << 0) @@ -343,444 +345,6 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits); #define X86_CR0_CD (1UL<<30) /* Cache Disable */ #define X86_CR0_PG (1UL<<31) /* Paging */ -/* - * CPU model specific register (MSR) numbers. - */ - -/* x86-64 specific MSRs */ -#define MSR_EFER 0xc0000080 /* extended feature register */ -#define MSR_STAR 0xc0000081 /* legacy mode SYSCALL target */ -#define MSR_LSTAR 0xc0000082 /* long mode SYSCALL target */ -#define MSR_CSTAR 0xc0000083 /* compat mode SYSCALL target */ -#define MSR_SYSCALL_MASK 0xc0000084 /* EFLAGS mask for syscall */ -#define MSR_FS_BASE 0xc0000100 /* 64bit FS base */ -#define MSR_GS_BASE 0xc0000101 /* 64bit GS base */ -#define MSR_KERNEL_GS_BASE 0xc0000102 /* SwapGS GS shadow */ -#define MSR_TSC_AUX 0xc0000103 /* Auxiliary TSC */ - -/* EFER bits: */ -#define EFER_SCE (1<<0) /* SYSCALL/SYSRET */ -#define EFER_LME (1<<8) /* Long mode enable */ -#define EFER_LMA (1<<10) /* Long mode active (read-only) */ -#define EFER_NX (1<<11) /* No execute enable */ -#define EFER_SVME (1<<12) /* Enable virtualization */ -#define EFER_LMSLE (1<<13) /* Long Mode Segment Limit Enable */ -#define EFER_FFXSR (1<<14) /* Enable Fast FXSAVE/FXRSTOR */ - -/* Intel MSRs. 
Some also available on other CPUs */ - -#define MSR_PPIN_CTL 0x0000004e -#define MSR_PPIN 0x0000004f - -#define MSR_IA32_PERFCTR0 0x000000c1 -#define MSR_IA32_PERFCTR1 0x000000c2 -#define MSR_FSB_FREQ 0x000000cd -#define MSR_PLATFORM_INFO 0x000000ce -#define MSR_PLATFORM_INFO_CPUID_FAULT_BIT 31 -#define MSR_PLATFORM_INFO_CPUID_FAULT BIT_ULL(MSR_PLATFORM_INFO_CPUID_FAULT_BIT) - -#define MSR_PKG_CST_CONFIG_CONTROL 0x000000e2 -#define NHM_C3_AUTO_DEMOTE (1UL << 25) -#define NHM_C1_AUTO_DEMOTE (1UL << 26) -#define ATM_LNC_C6_AUTO_DEMOTE (1UL << 25) -#define SNB_C1_AUTO_UNDEMOTE (1UL << 27) -#define SNB_C3_AUTO_UNDEMOTE (1UL << 28) - -#define MSR_MTRRcap 0x000000fe -#define MSR_IA32_BBL_CR_CTL 0x00000119 -#define MSR_IA32_BBL_CR_CTL3 0x0000011e - -#define MSR_IA32_SYSENTER_CS 0x00000174 -#define MSR_IA32_SYSENTER_ESP 0x00000175 -#define MSR_IA32_SYSENTER_EIP 0x00000176 - -#define MSR_IA32_MCG_CAP 0x00000179 -#define MSR_IA32_MCG_STATUS 0x0000017a -#define MSR_IA32_MCG_CTL 0x0000017b -#define MSR_IA32_MCG_EXT_CTL 0x000004d0 - -#define MSR_OFFCORE_RSP_0 0x000001a6 -#define MSR_OFFCORE_RSP_1 0x000001a7 -#define MSR_TURBO_RATIO_LIMIT 0x000001ad -#define MSR_TURBO_RATIO_LIMIT1 0x000001ae -#define MSR_TURBO_RATIO_LIMIT2 0x000001af - -#define MSR_LBR_SELECT 0x000001c8 -#define MSR_LBR_TOS 0x000001c9 -#define MSR_LBR_NHM_FROM 0x00000680 -#define MSR_LBR_NHM_TO 0x000006c0 -#define MSR_LBR_CORE_FROM 0x00000040 -#define MSR_LBR_CORE_TO 0x00000060 - -#define MSR_LBR_INFO_0 0x00000dc0 /* ... 0xddf for _31 */ -#define LBR_INFO_MISPRED BIT_ULL(63) -#define LBR_INFO_IN_TX BIT_ULL(62) -#define LBR_INFO_ABORT BIT_ULL(61) -#define LBR_INFO_CYCLES 0xffff - -#define MSR_IA32_PEBS_ENABLE 0x000003f1 -#define MSR_IA32_DS_AREA 0x00000600 -#define MSR_IA32_PERF_CAPABILITIES 0x00000345 -#define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6 - -#define MSR_IA32_RTIT_CTL 0x00000570 -#define MSR_IA32_RTIT_STATUS 0x00000571 -#define MSR_IA32_RTIT_ADDR0_A 0x00000580 -#define MSR_IA32_RTIT_ADDR0_B 0x00000581 -#define MSR_IA32_RTIT_ADDR1_A 0x00000582 -#define MSR_IA32_RTIT_ADDR1_B 0x00000583 -#define MSR_IA32_RTIT_ADDR2_A 0x00000584 -#define MSR_IA32_RTIT_ADDR2_B 0x00000585 -#define MSR_IA32_RTIT_ADDR3_A 0x00000586 -#define MSR_IA32_RTIT_ADDR3_B 0x00000587 -#define MSR_IA32_RTIT_CR3_MATCH 0x00000572 -#define MSR_IA32_RTIT_OUTPUT_BASE 0x00000560 -#define MSR_IA32_RTIT_OUTPUT_MASK 0x00000561 - -#define MSR_MTRRfix64K_00000 0x00000250 -#define MSR_MTRRfix16K_80000 0x00000258 -#define MSR_MTRRfix16K_A0000 0x00000259 -#define MSR_MTRRfix4K_C0000 0x00000268 -#define MSR_MTRRfix4K_C8000 0x00000269 -#define MSR_MTRRfix4K_D0000 0x0000026a -#define MSR_MTRRfix4K_D8000 0x0000026b -#define MSR_MTRRfix4K_E0000 0x0000026c -#define MSR_MTRRfix4K_E8000 0x0000026d -#define MSR_MTRRfix4K_F0000 0x0000026e -#define MSR_MTRRfix4K_F8000 0x0000026f -#define MSR_MTRRdefType 0x000002ff - -#define MSR_IA32_CR_PAT 0x00000277 - -#define MSR_IA32_DEBUGCTLMSR 0x000001d9 -#define MSR_IA32_LASTBRANCHFROMIP 0x000001db -#define MSR_IA32_LASTBRANCHTOIP 0x000001dc -#define MSR_IA32_LASTINTFROMIP 0x000001dd -#define MSR_IA32_LASTINTTOIP 0x000001de - -/* DEBUGCTLMSR bits (others vary by model): */ -#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */ -#define DEBUGCTLMSR_BTF_SHIFT 1 -#define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */ -#define DEBUGCTLMSR_TR (1UL << 6) -#define DEBUGCTLMSR_BTS (1UL << 7) -#define DEBUGCTLMSR_BTINT (1UL << 8) -#define DEBUGCTLMSR_BTS_OFF_OS (1UL << 9) -#define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10) -#define 
DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11) -#define DEBUGCTLMSR_FREEZE_IN_SMM_BIT 14 -#define DEBUGCTLMSR_FREEZE_IN_SMM (1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT) - -#define MSR_PEBS_FRONTEND 0x000003f7 - -#define MSR_IA32_POWER_CTL 0x000001fc - -#define MSR_IA32_MC0_CTL 0x00000400 -#define MSR_IA32_MC0_STATUS 0x00000401 -#define MSR_IA32_MC0_ADDR 0x00000402 -#define MSR_IA32_MC0_MISC 0x00000403 - -/* C-state Residency Counters */ -#define MSR_PKG_C3_RESIDENCY 0x000003f8 -#define MSR_PKG_C6_RESIDENCY 0x000003f9 -#define MSR_ATOM_PKG_C6_RESIDENCY 0x000003fa -#define MSR_PKG_C7_RESIDENCY 0x000003fa -#define MSR_CORE_C3_RESIDENCY 0x000003fc -#define MSR_CORE_C6_RESIDENCY 0x000003fd -#define MSR_CORE_C7_RESIDENCY 0x000003fe -#define MSR_KNL_CORE_C6_RESIDENCY 0x000003ff -#define MSR_PKG_C2_RESIDENCY 0x0000060d -#define MSR_PKG_C8_RESIDENCY 0x00000630 -#define MSR_PKG_C9_RESIDENCY 0x00000631 -#define MSR_PKG_C10_RESIDENCY 0x00000632 - -/* Interrupt Response Limit */ -#define MSR_PKGC3_IRTL 0x0000060a -#define MSR_PKGC6_IRTL 0x0000060b -#define MSR_PKGC7_IRTL 0x0000060c -#define MSR_PKGC8_IRTL 0x00000633 -#define MSR_PKGC9_IRTL 0x00000634 -#define MSR_PKGC10_IRTL 0x00000635 - -/* Run Time Average Power Limiting (RAPL) Interface */ - -#define MSR_RAPL_POWER_UNIT 0x00000606 - -#define MSR_PKG_POWER_LIMIT 0x00000610 -#define MSR_PKG_ENERGY_STATUS 0x00000611 -#define MSR_PKG_PERF_STATUS 0x00000613 -#define MSR_PKG_POWER_INFO 0x00000614 - -#define MSR_DRAM_POWER_LIMIT 0x00000618 -#define MSR_DRAM_ENERGY_STATUS 0x00000619 -#define MSR_DRAM_PERF_STATUS 0x0000061b -#define MSR_DRAM_POWER_INFO 0x0000061c - -#define MSR_PP0_POWER_LIMIT 0x00000638 -#define MSR_PP0_ENERGY_STATUS 0x00000639 -#define MSR_PP0_POLICY 0x0000063a -#define MSR_PP0_PERF_STATUS 0x0000063b - -#define MSR_PP1_POWER_LIMIT 0x00000640 -#define MSR_PP1_ENERGY_STATUS 0x00000641 -#define MSR_PP1_POLICY 0x00000642 - -/* Config TDP MSRs */ -#define MSR_CONFIG_TDP_NOMINAL 0x00000648 -#define MSR_CONFIG_TDP_LEVEL_1 0x00000649 -#define MSR_CONFIG_TDP_LEVEL_2 0x0000064A -#define MSR_CONFIG_TDP_CONTROL 0x0000064B -#define MSR_TURBO_ACTIVATION_RATIO 0x0000064C - -#define MSR_PLATFORM_ENERGY_STATUS 0x0000064D - -#define MSR_PKG_WEIGHTED_CORE_C0_RES 0x00000658 -#define MSR_PKG_ANY_CORE_C0_RES 0x00000659 -#define MSR_PKG_ANY_GFXE_C0_RES 0x0000065A -#define MSR_PKG_BOTH_CORE_GFXE_C0_RES 0x0000065B - -#define MSR_CORE_C1_RES 0x00000660 -#define MSR_MODULE_C6_RES_MS 0x00000664 - -#define MSR_CC6_DEMOTION_POLICY_CONFIG 0x00000668 -#define MSR_MC6_DEMOTION_POLICY_CONFIG 0x00000669 - -#define MSR_ATOM_CORE_RATIOS 0x0000066a -#define MSR_ATOM_CORE_VIDS 0x0000066b -#define MSR_ATOM_CORE_TURBO_RATIOS 0x0000066c -#define MSR_ATOM_CORE_TURBO_VIDS 0x0000066d - - -#define MSR_CORE_PERF_LIMIT_REASONS 0x00000690 -#define MSR_GFX_PERF_LIMIT_REASONS 0x000006B0 -#define MSR_RING_PERF_LIMIT_REASONS 0x000006B1 - -/* Hardware P state interface */ -#define MSR_PPERF 0x0000064e -#define MSR_PERF_LIMIT_REASONS 0x0000064f -#define MSR_PM_ENABLE 0x00000770 -#define MSR_HWP_CAPABILITIES 0x00000771 -#define MSR_HWP_REQUEST_PKG 0x00000772 -#define MSR_HWP_INTERRUPT 0x00000773 -#define MSR_HWP_REQUEST 0x00000774 -#define MSR_HWP_STATUS 0x00000777 - -/* CPUID.6.EAX */ -#define HWP_BASE_BIT (1<<7) -#define HWP_NOTIFICATIONS_BIT (1<<8) -#define HWP_ACTIVITY_WINDOW_BIT (1<<9) -#define HWP_ENERGY_PERF_PREFERENCE_BIT (1<<10) -#define HWP_PACKAGE_LEVEL_REQUEST_BIT (1<<11) - -/* IA32_HWP_CAPABILITIES */ -#define HWP_HIGHEST_PERF(x) (((x) >> 0) & 0xff) -#define HWP_GUARANTEED_PERF(x) (((x) >> 
8) & 0xff) -#define HWP_MOSTEFFICIENT_PERF(x) (((x) >> 16) & 0xff) -#define HWP_LOWEST_PERF(x) (((x) >> 24) & 0xff) - -/* IA32_HWP_REQUEST */ -#define HWP_MIN_PERF(x) (x & 0xff) -#define HWP_MAX_PERF(x) ((x & 0xff) << 8) -#define HWP_DESIRED_PERF(x) ((x & 0xff) << 16) -#define HWP_ENERGY_PERF_PREFERENCE(x) (((unsigned long long) x & 0xff) << 24) -#define HWP_EPP_PERFORMANCE 0x00 -#define HWP_EPP_BALANCE_PERFORMANCE 0x80 -#define HWP_EPP_BALANCE_POWERSAVE 0xC0 -#define HWP_EPP_POWERSAVE 0xFF -#define HWP_ACTIVITY_WINDOW(x) ((unsigned long long)(x & 0xff3) << 32) -#define HWP_PACKAGE_CONTROL(x) ((unsigned long long)(x & 0x1) << 42) - -/* IA32_HWP_STATUS */ -#define HWP_GUARANTEED_CHANGE(x) (x & 0x1) -#define HWP_EXCURSION_TO_MINIMUM(x) (x & 0x4) - -/* IA32_HWP_INTERRUPT */ -#define HWP_CHANGE_TO_GUARANTEED_INT(x) (x & 0x1) -#define HWP_EXCURSION_TO_MINIMUM_INT(x) (x & 0x2) - -#define MSR_AMD64_MC0_MASK 0xc0010044 - -#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x)) -#define MSR_IA32_MCx_STATUS(x) (MSR_IA32_MC0_STATUS + 4*(x)) -#define MSR_IA32_MCx_ADDR(x) (MSR_IA32_MC0_ADDR + 4*(x)) -#define MSR_IA32_MCx_MISC(x) (MSR_IA32_MC0_MISC + 4*(x)) - -#define MSR_AMD64_MCx_MASK(x) (MSR_AMD64_MC0_MASK + (x)) - -/* These are consecutive and not in the normal 4er MCE bank block */ -#define MSR_IA32_MC0_CTL2 0x00000280 -#define MSR_IA32_MCx_CTL2(x) (MSR_IA32_MC0_CTL2 + (x)) - -#define MSR_P6_PERFCTR0 0x000000c1 -#define MSR_P6_PERFCTR1 0x000000c2 -#define MSR_P6_EVNTSEL0 0x00000186 -#define MSR_P6_EVNTSEL1 0x00000187 - -#define MSR_KNC_PERFCTR0 0x00000020 -#define MSR_KNC_PERFCTR1 0x00000021 -#define MSR_KNC_EVNTSEL0 0x00000028 -#define MSR_KNC_EVNTSEL1 0x00000029 - -/* Alternative perfctr range with full access. */ -#define MSR_IA32_PMC0 0x000004c1 - -/* AMD64 MSRs. Not complete. See the architecture manual for a more - complete list. 
*/ - -#define MSR_AMD64_PATCH_LEVEL 0x0000008b -#define MSR_AMD64_TSC_RATIO 0xc0000104 -#define MSR_AMD64_NB_CFG 0xc001001f -#define MSR_AMD64_PATCH_LOADER 0xc0010020 -#define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140 -#define MSR_AMD64_OSVW_STATUS 0xc0010141 -#define MSR_AMD64_LS_CFG 0xc0011020 -#define MSR_AMD64_DC_CFG 0xc0011022 -#define MSR_AMD64_BU_CFG2 0xc001102a -#define MSR_AMD64_IBSFETCHCTL 0xc0011030 -#define MSR_AMD64_IBSFETCHLINAD 0xc0011031 -#define MSR_AMD64_IBSFETCHPHYSAD 0xc0011032 -#define MSR_AMD64_IBSFETCH_REG_COUNT 3 -#define MSR_AMD64_IBSFETCH_REG_MASK ((1UL< X-Patchwork-Id: 11250729 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9B428109A for ; Tue, 19 Nov 2019 03:13:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 84D4321906 for ; Tue, 19 Nov 2019 03:13:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727820AbfKSDNw (ORCPT ); Mon, 18 Nov 2019 22:13:52 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727619AbfKSDM5 (ORCPT ); Mon, 18 Nov 2019 22:12:57 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105707" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:41 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 03/19] tools arch x86: Sync msr-index.h from kernel sources Date: Mon, 18 Nov 2019 19:12:24 -0800 Message-Id: <20191119031240.7779-4-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Sync msr-index.h to pull in recent renames of the IA32_FEATURE_CONTROL MSR bit definitions. Update KVM's VMX selftest accordingly. While using the renamed defines is by no means necessary, do the sync now to avoid leaving a landmine that will get stepped on the next time msr-index.h needs to be refreshed for some other reason. No functional change intended. 
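
Beyond the FEAT_CTL_* renames, the sync also pulls in the ARCH_CAP_* and TSX_CTRL_* additions visible in the hunk below. A hypothetical tools-side consumer of one of those bits (assuming the caller has already read MSR_IA32_ARCH_CAPABILITIES) might look like:

#include <stdbool.h>
#include <stdint.h>
#include <asm/msr-index.h>	/* the tools copy, after the sync */

/*
 * ARCH_CAP_TAA_NO set means the CPU enumerates itself as not susceptible
 * to TSX Async Abort (TAA).
 */
static bool cpu_enumerates_taa_no(uint64_t ia32_arch_capabilities)
{
	return ia32_arch_capabilities & ARCH_CAP_TAA_NO;
}
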
Signed-off-by: Sean Christopherson --- tools/arch/x86/include/asm/msr-index.h | 27 ++++++++++++++++---- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 4 +-- 2 files changed, 24 insertions(+), 7 deletions(-) diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 20ce682a2540..4c80d530f751 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -93,6 +93,18 @@ * Microarchitectural Data * Sampling (MDS) vulnerabilities. */ +#define ARCH_CAP_PSCHANGE_MC_NO BIT(6) /* + * The processor is not susceptible to a + * machine check error due to modifying the + * code page size along with either the + * physical address or cache type + * without TLB invalidation. + */ +#define ARCH_CAP_TSX_CTRL_MSR BIT(7) /* MSR for TSX control is available. */ +#define ARCH_CAP_TAA_NO BIT(8) /* + * Not susceptible to + * TSX Async Abort (TAA) vulnerabilities. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* @@ -103,6 +115,10 @@ #define MSR_IA32_BBL_CR_CTL 0x00000119 #define MSR_IA32_BBL_CR_CTL3 0x0000011e +#define MSR_IA32_TSX_CTRL 0x00000122 +#define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */ +#define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */ + #define MSR_IA32_SYSENTER_CS 0x00000174 #define MSR_IA32_SYSENTER_ESP 0x00000175 #define MSR_IA32_SYSENTER_EIP 0x00000176 @@ -540,7 +556,13 @@ #define MSR_IA32_EBL_CR_POWERON 0x0000002a #define MSR_EBC_FREQUENCY_ID 0x0000002c #define MSR_SMI_COUNT 0x00000034 + #define MSR_IA32_FEATURE_CONTROL 0x0000003a +#define FEAT_CTL_LOCKED BIT(0) +#define FEAT_CTL_VMX_ENABLED_INSIDE_SMX BIT(1) +#define FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX BIT(2) +#define FEAT_CTL_LMCE_ENABLED BIT(20) + #define MSR_IA32_TSC_ADJUST 0x0000003b #define MSR_IA32_BNDCFGS 0x00000d90 @@ -548,11 +570,6 @@ #define MSR_IA32_XSS 0x00000da0 -#define FEATURE_CONTROL_LOCKED (1<<0) -#define FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX (1<<1) -#define FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX (1<<2) -#define FEATURE_CONTROL_LMCE (1<<20) - #define MSR_IA32_APICBASE 0x0000001b #define MSR_IA32_APICBASE_BSP (1<<8) #define MSR_IA32_APICBASE_ENABLE (1<<11) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index f6ec97b7eaef..32dfa4f5bbd3 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -157,8 +157,8 @@ bool prepare_for_vmx_operation(struct vmx_pages *vmx) * Bit 2: Enables VMXON outside of SMX operation. If clear, VMXON * outside of SMX causes a #GP. 
*/ - required = FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; - required |= FEATURE_CONTROL_LOCKED; + required = FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; + required |= FEAT_CTL_LOCKED; feature_control = rdmsr(MSR_IA32_FEATURE_CONTROL); if ((feature_control & required) != required) wrmsr(MSR_IA32_FEATURE_CONTROL, feature_control | required); From patchwork Tue Nov 19 03:12:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250681 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4C61B14ED for ; Tue, 19 Nov 2019 03:12:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 377852070E for ; Tue, 19 Nov 2019 03:12:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727617AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727589AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105711" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 04/19] x86/intel: Initialize IA32_FEATURE_CONTROL MSR at boot Date: Mon, 18 Nov 2019 19:12:25 -0800 Message-Id: <20191119031240.7779-5-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Opportunistically initialize IA32_FEATURE_CONTROL MSR to enable VMX when the MSR is left unlocked by BIOS. Configuring IA32_FEATURE_CONTROL at boot time paves the way for similar enabling of other features, e.g. Software Guard Extensions (SGX). Temporarily leave equivalent KVM code in place in order to avoid introducing a regression on Centaur and Zhaoxin CPUs, e.g. removing KVM's code would leave the MSR unlocked on those CPUs and would break existing functionality if people are loading kvm_intel on Centaur and/or Zhaoxin. Defer enablement of the boot-time configuration on Centaur and Zhaoxin to future patches to aid bisection. 
Note, Local Machine Check Exceptions (LMCE) are also supported by the kernel and enabled via IA32_FEATURE_CONTROL, but the kernel currently uses LMCE if and only if the feature is explicitly enabled by BIOS. Keep the current behavior to avoid introducing bugs; future patches can opt in to opportunistic enabling if it's deemed desirable to do so. Always lock IA32_FEATURE_CONTROL if it exists, even if the CPU doesn't support VMX, so that other existing and future kernel code that queries IA32_FEATURE_CONTROL can assume it's locked. Start from a clean slate when constructing the value to write to IA32_FEATURE_CONTROL, i.e. ignore whatever value BIOS left in the MSR so as not to enable random features or fault on the WRMSR. Suggested-by: Borislav Petkov Cc: Jarkko Sakkinen Signed-off-by: Sean Christopherson --- arch/x86/Kconfig.cpu | 4 +++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/kernel/cpu/cpu.h | 4 +++ arch/x86/kernel/cpu/feature_control.c | 35 +++++++++++++++++++++++++++ arch/x86/kernel/cpu/intel.c | 2 ++ 5 files changed, 46 insertions(+) create mode 100644 arch/x86/kernel/cpu/feature_control.c diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index af9c967782f6..aafc14a0abf7 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -387,6 +387,10 @@ config X86_DEBUGCTLMSR def_bool y depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486SX || M486) && !UML +config X86_FEATURE_CONTROL_MSR + def_bool y + depends on CPU_SUP_INTEL + menuconfig PROCESSOR_SELECT bool "Supported processor vendors" if EXPERT ---help--- diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 890f60083eca..84e35e762f76 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -29,6 +29,7 @@ obj-y += umwait.o obj-$(CONFIG_PROC_FS) += proc.o obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o +obj-$(CONFIG_X86_FEATURE_CONTROL_MSR) += feature_control.o ifdef CONFIG_CPU_SUP_INTEL obj-y += intel.o intel_pconfig.o tsx.o obj-$(CONFIG_PM) += intel_epb.o diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h index 38ab6e115eac..a58e80866a3f 100644 --- a/arch/x86/kernel/cpu/cpu.h +++ b/arch/x86/kernel/cpu/cpu.h @@ -80,4 +80,8 @@ extern void x86_spec_ctrl_setup_ap(void); extern u64 x86_read_arch_cap_msr(void); +#ifdef CONFIG_X86_FEATURE_CONTROL_MSR +void init_feature_control_msr(struct cpuinfo_x86 *c); +#endif + #endif /* ARCH_X86_CPU_H */ diff --git a/arch/x86/kernel/cpu/feature_control.c b/arch/x86/kernel/cpu/feature_control.c new file mode 100644 index 000000000000..33c9444dda52 --- /dev/null +++ b/arch/x86/kernel/cpu/feature_control.c @@ -0,0 +1,35 @@ +// SPDX-License-Identifier: GPL-2.0 +#include + +#include +#include +#include + +void init_feature_control_msr(struct cpuinfo_x86 *c) +{ + u64 msr; + + if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr)) + return; + + if (msr & FEAT_CTL_LOCKED) + return; + + /* + * Ignore whatever value BIOS left in the MSR to avoid enabling random + * features or faulting on the WRMSR. + */ + msr = FEAT_CTL_LOCKED; + + /* + * Enable VMX if and only if the kernel may do VMXON at some point, + * i.e. KVM is enabled, to avoid unnecessarily adding an attack vector + * for the kernel, e.g. using VMX to hide malicious code.
+ */ + if (cpu_has(c, X86_FEATURE_VMX) && IS_ENABLED(CONFIG_KVM)) { + msr |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; + if (tboot_enabled()) + msr |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX; + } + wrmsrl(MSR_IA32_FEATURE_CONTROL, msr); +} diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index 4a900804a023..b7c6ed0b40b6 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -755,6 +755,8 @@ static void init_intel(struct cpuinfo_x86 *c) /* Work around errata */ srat_detect_node(c); + init_feature_control_msr(c); + if (cpu_has(c, X86_FEATURE_VMX)) detect_vmx_virtcap(c); From patchwork Tue Nov 19 03:12:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250765 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D9FF6109A for ; Tue, 19 Nov 2019 03:14:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C317E22323 for ; Tue, 19 Nov 2019 03:14:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727591AbfKSDOS (ORCPT ); Mon, 18 Nov 2019 22:14:18 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727351AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105714" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 05/19] x86/mce: WARN once if IA32_FEATURE_CONTROL MSR is left unlocked Date: Mon, 18 Nov 2019 19:12:26 -0800 Message-Id: <20191119031240.7779-6-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org WARN if the IA32_FEATURE_CONTROL MSR is somehow left unlocked now that CPU initialization unconditionally locks the MSR. 
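
Expressed as a standalone sketch (hypothetical helper, not part of the patch), the invariant the WARN enforces is that once init_feature_control_msr() has run, readers of the MSR can expect the lock bit to be set:

#include <linux/types.h>
#include <linux/bug.h>
#include <asm/msr.h>
#include <asm/msr-index.h>

/*
 * Returns true if IA32_FEATURE_CONTROL is locked as expected; warns once
 * (and returns false) if boot-time initialization somehow skipped it.
 */
static bool feature_control_locked(void)
{
	u64 ctl;

	rdmsrl(MSR_IA32_FEATURE_CONTROL, ctl);
	return !WARN_ON_ONCE(!(ctl & FEAT_CTL_LOCKED));
}
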
Reviewed-by: Borislav Petkov Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/mce/intel.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c index 3e5b29acd301..5abc55a67fce 100644 --- a/arch/x86/kernel/cpu/mce/intel.c +++ b/arch/x86/kernel/cpu/mce/intel.c @@ -119,11 +119,10 @@ static bool lmce_supported(void) * generate a #GP fault. */ rdmsrl(MSR_IA32_FEATURE_CONTROL, tmp); - if ((tmp & (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED)) == - (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED)) - return true; + if (WARN_ON_ONCE(!(tmp & FEAT_CTL_LOCKED))) + return false; - return false; + return tmp & FEAT_CTL_LMCE_ENABLED; } bool mce_intel_cmci_poll(void) From patchwork Tue Nov 19 03:12:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250749 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 62C3E14ED for ; Tue, 19 Nov 2019 03:14:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4409322362 for ; Tue, 19 Nov 2019 03:14:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727862AbfKSDOG (ORCPT ); Mon, 18 Nov 2019 22:14:06 -0500 Received: from mga02.intel.com ([134.134.136.20]:21265 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727591AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105717" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 06/19] x86/centaur: Use common IA32_FEATURE_CONTROL MSR initialization Date: Mon, 18 Nov 2019 19:12:27 -0800 Message-Id: <20191119031240.7779-7-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Use the recently added IA32_FEATURE_CONTROL MSR initialization sequence to opportunistically enable VMX support when running on a Centaur CPU.
Signed-off-by: Sean Christopherson --- arch/x86/Kconfig.cpu | 2 +- arch/x86/kernel/cpu/centaur.c | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index aafc14a0abf7..9e4e41424dc2 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -389,7 +389,7 @@ config X86_DEBUGCTLMSR config X86_FEATURE_CONTROL_MSR def_bool y - depends on CPU_SUP_INTEL + depends on CPU_SUP_INTEL || CPU_SUP_CENTAUR menuconfig PROCESSOR_SELECT bool "Supported processor vendors" if EXPERT diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c index 14433ff5b828..a6ca4c31c1b6 100644 --- a/arch/x86/kernel/cpu/centaur.c +++ b/arch/x86/kernel/cpu/centaur.c @@ -250,6 +250,8 @@ static void init_centaur(struct cpuinfo_x86 *c) set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC); #endif + init_feature_control_msr(c); + if (cpu_has(c, X86_FEATURE_VMX)) centaur_detect_vmx_virtcap(c); } From patchwork Tue Nov 19 03:12:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250745 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0EC57109A for ; Tue, 19 Nov 2019 03:14:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E129C214DE for ; Tue, 19 Nov 2019 03:14:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727505AbfKSDOG (ORCPT ); Mon, 18 Nov 2019 22:14:06 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727597AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105720" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 07/19] x86/zhaoxin: Use common IA32_FEATURE_CONTROL MSR initialization Date: Mon, 18 Nov 2019 19:12:28 -0800 Message-Id: <20191119031240.7779-8-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Use the recently added IA32_FEATURE_CONTROL MSR initialization sequence to opportunistically enable VMX support when running on a Zhaoxin CPU.
Signed-off-by: Sean Christopherson --- arch/x86/Kconfig.cpu | 2 +- arch/x86/kernel/cpu/zhaoxin.c | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index 9e4e41424dc2..e78f39adae7b 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -389,7 +389,7 @@ config X86_DEBUGCTLMSR config X86_FEATURE_CONTROL_MSR def_bool y - depends on CPU_SUP_INTEL || CPU_SUP_CENTAUR + depends on CPU_SUP_INTEL || CPU_SUP_CENTAUR || CPU_SUP_ZHAOXIN menuconfig PROCESSOR_SELECT bool "Supported processor vendors" if EXPERT diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c index 8e6f2f4b4afe..01b05a4a5a85 100644 --- a/arch/x86/kernel/cpu/zhaoxin.c +++ b/arch/x86/kernel/cpu/zhaoxin.c @@ -141,6 +141,8 @@ static void init_zhaoxin(struct cpuinfo_x86 *c) set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC); #endif + init_feature_control_msr(c); + if (cpu_has(c, X86_FEATURE_VMX)) zhaoxin_detect_vmx_virtcap(c); } From patchwork Tue Nov 19 03:12:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250769 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E0A90109A for ; Tue, 19 Nov 2019 03:14:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C9C552235D for ; Tue, 19 Nov 2019 03:14:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727606AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 Received: from mga02.intel.com ([134.134.136.20]:21265 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727585AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105724" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 08/19] KVM: VMX: Drop initialization of IA32_FEATURE_CONTROL MSR Date: Mon, 18 Nov 2019 19:12:29 -0800 Message-Id: <20191119031240.7779-9-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Remove the code to initialize IA32_FEATURE_CONTROL MSR when KVM is loaded now that the MSR is initialized during boot on all CPUs that support VMX, i.e. can possibly load kvm_intel. 
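
A rough before/after sketch of the enable path (abridged and hypothetical, with unrelated setup elided; it assumes kvm_cpu_vmxon() continues to set CR4.VMXE and execute VMXON as in the existing code):

/*
 * After this patch, hardware_enable() trusts boot-time initialization:
 * IA32_FEATURE_CONTROL was configured and locked by the core kernel, so
 * there is no rdmsrl()/wrmsrl() of the MSR in KVM's per-CPU path anymore.
 */
static int hardware_enable_sketch(u64 vmxon_region_pa)
{
	if (cr4_read_shadow() & X86_CR4_VMXE)
		return -EBUSY;		/* VMX already enabled on this CPU */

	kvm_cpu_vmxon(vmxon_region_pa);	/* CR4.VMXE + VMXON */
	return 0;
}
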
Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 48 +++++++++++++++++------------------------- 1 file changed, 19 insertions(+), 29 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a8e2c3b74daa..e9681e3fcb63 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2195,24 +2195,26 @@ static __init int vmx_disabled_by_bios(void) u64 msr; rdmsrl(MSR_IA32_FEATURE_CONTROL, msr); - if (msr & FEAT_CTL_LOCKED) { - /* launched w/ TXT and VMX disabled */ - if (!(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) - && tboot_enabled()) - return 1; - /* launched w/o TXT and VMX only enabled w/ TXT */ - if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) - && (msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) - && !tboot_enabled()) { - printk(KERN_WARNING "kvm: disable TXT in the BIOS or " - "activate TXT before enabling KVM\n"); - return 1; - } - /* launched w/o TXT and VMX disabled */ - if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) - && !tboot_enabled()) - return 1; + + if (WARN_ON_ONCE(!(msr & FEAT_CTL_LOCKED))) + return 1; + + /* launched w/ TXT and VMX disabled */ + if (!(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && + tboot_enabled()) + return 1; + /* launched w/o TXT and VMX only enabled w/ TXT */ + if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) && + (msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && + !tboot_enabled()) { + pr_warn("kvm: disable TXT in the BIOS or " + "activate TXT before enabling KVM\n"); + return 1; } + /* launched w/o TXT and VMX disabled */ + if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) && + !tboot_enabled()) + return 1; return 0; } @@ -2229,7 +2231,6 @@ static int hardware_enable(void) { int cpu = raw_smp_processor_id(); u64 phys_addr = __pa(per_cpu(vmxarea, cpu)); - u64 old, test_bits; if (cr4_read_shadow() & X86_CR4_VMXE) return -EBUSY; @@ -2257,17 +2258,6 @@ static int hardware_enable(void) */ crash_enable_local_vmclear(cpu); - rdmsrl(MSR_IA32_FEATURE_CONTROL, old); - - test_bits = FEAT_CTL_LOCKED; - test_bits |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; - if (tboot_enabled()) - test_bits |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX; - - if ((old & test_bits) != test_bits) { - /* enable and lock */ - wrmsrl(MSR_IA32_FEATURE_CONTROL, old | test_bits); - } kvm_cpu_vmxon(phys_addr); if (enable_ept) ept_sync_global(); From patchwork Tue Nov 19 03:12:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250679 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2F08B13A4 for ; Tue, 19 Nov 2019 03:12:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 18E2F2070E for ; Tue, 19 Nov 2019 03:12:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727598AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727590AbfKSDM4 (ORCPT ); Mon, 18 Nov 2019 22:12:56 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105727" Received: from 
sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 09/19] x86/cpu: Clear VMX feature flag if VMX is not fully enabled Date: Mon, 18 Nov 2019 19:12:30 -0800 Message-Id: <20191119031240.7779-10-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Now that the IA32_FEATURE_CONTROL MSR is guaranteed to be configured and locked, clear the VMX capability flag if the IA32_FEATURE_CONTROL MSR is not supported or if BIOS disabled VMX, i.e. locked IA32_FEATURE_CONTROL and did not set the appropriate VMX enable bit. Signed-off-by: Sean Christopherson --- arch/x86/kernel/cpu/feature_control.c | 28 ++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/arch/x86/kernel/cpu/feature_control.c b/arch/x86/kernel/cpu/feature_control.c index 33c9444dda52..2bd1a9e6021a 100644 --- a/arch/x86/kernel/cpu/feature_control.c +++ b/arch/x86/kernel/cpu/feature_control.c @@ -5,15 +5,26 @@ #include #include +#undef pr_fmt +#define pr_fmt(fmt) "x86/cpu: " fmt + +#define FEAT_CTL_UNSUPPORTED_MSG "IA32_FEATURE_CONTROL MSR unsupported on VMX capable CPU, suspected hardware or hypervisor issue.\n" + void init_feature_control_msr(struct cpuinfo_x86 *c) { + bool tboot = tboot_enabled(); u64 msr; - if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr)) + if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr)) { + if (cpu_has(c, X86_FEATURE_VMX)) { + pr_err_once(FEAT_CTL_UNSUPPORTED_MSG); + clear_cpu_cap(c, X86_FEATURE_VMX); + } return; + } if (msr & FEAT_CTL_LOCKED) - return; + goto update_caps; /* * Ignore whatever value BIOS left in the MSR to avoid enabling random @@ -28,8 +39,19 @@ void init_feature_control_msr(struct cpuinfo_x86 *c) */ if (cpu_has(c, X86_FEATURE_VMX) && IS_ENABLED(CONFIG_KVM)) { msr |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX; - if (tboot_enabled()) + if (tboot) msr |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX; } wrmsrl(MSR_IA32_FEATURE_CONTROL, msr); + +update_caps: + if (!cpu_has(c, X86_FEATURE_VMX)) + return; + + if ((tboot && !(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX)) || + (!tboot && !(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX))) { + pr_err_once("VMX (%s TXT) disabled by BIOS\n", + tboot ? 
"inside" : "outside"); + clear_cpu_cap(c, X86_FEATURE_VMX); + } } From patchwork Tue Nov 19 03:12:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250795 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0730113A4 for ; Tue, 19 Nov 2019 03:14:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E42C722316 for ; Tue, 19 Nov 2019 03:14:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727519AbfKSDMz (ORCPT ); Mon, 18 Nov 2019 22:12:55 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727351AbfKSDMy (ORCPT ); Mon, 18 Nov 2019 22:12:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105730" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 10/19] KVM: VMX: Use VMX feature flag to query BIOS enabling Date: Mon, 18 Nov 2019 19:12:31 -0800 Message-Id: <20191119031240.7779-11-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Replace KVM's manual checks on IA32_FEATURE_CONTROL with a query on the boot CPU's VMX feature flag. The VMX flag is now cleared during boot if VMX isn't fully enabled via IA32_FEATURE_CONTROL, including the case where IA32_FEATURE_CONTROL isn't supported. 
Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 26 +------------------------- 1 file changed, 1 insertion(+), 25 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index e9681e3fcb63..eff28130cb54 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -2192,31 +2192,7 @@ static __init int cpu_has_kvm_support(void) static __init int vmx_disabled_by_bios(void) { - u64 msr; - - rdmsrl(MSR_IA32_FEATURE_CONTROL, msr); - - if (WARN_ON_ONCE(!(msr & FEAT_CTL_LOCKED))) - return 1; - - /* launched w/ TXT and VMX disabled */ - if (!(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && - tboot_enabled()) - return 1; - /* launched w/o TXT and VMX only enabled w/ TXT */ - if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) && - (msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX) && - !tboot_enabled()) { - pr_warn("kvm: disable TXT in the BIOS or " - "activate TXT before enabling KVM\n"); - return 1; - } - /* launched w/o TXT and VMX disabled */ - if (!(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX) && - !tboot_enabled()) - return 1; - - return 0; + return !boot_cpu_has(X86_FEATURE_VMX); } static void kvm_cpu_vmxon(u64 addr) From patchwork Tue Nov 19 03:12:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250677 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AB393109A for ; Tue, 19 Nov 2019 03:12:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 853C82231D for ; Tue, 19 Nov 2019 03:12:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727242AbfKSDMy (ORCPT ); Mon, 18 Nov 2019 22:12:54 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727014AbfKSDMy (ORCPT ); Mon, 18 Nov 2019 22:12:54 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105733" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. 
Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 11/19] KVM: VMX: Check for full VMX support when verifying CPU compatibility Date: Mon, 18 Nov 2019 19:12:32 -0800 Message-Id: <20191119031240.7779-12-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Explicitly check the current CPU's VMX feature flag when verifying compatibility across physical CPUs. This effectively adds a check on IA32_FEATURE_CONTROL to ensure that VMX is fully enabled on all CPUs. Reviewed-by: Jim Mattson Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index eff28130cb54..d9bd99977464 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6831,6 +6831,11 @@ static int __init vmx_check_processor_compat(void) struct vmcs_config vmcs_conf; struct vmx_capability vmx_cap; + if (!this_cpu_has(X86_FEATURE_VMX)) { + pr_err("kvm: VMX is disabled on CPU %d\n", smp_processor_id()); + return -EIO; + } + if (setup_vmcs_config(&vmcs_conf, &vmx_cap) < 0) return -EIO; if (nested) From patchwork Tue Nov 19 03:12:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250779 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2F409109A for ; Tue, 19 Nov 2019 03:14:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 04FCF2235E for ; Tue, 19 Nov 2019 03:14:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727460AbfKSDO0 (ORCPT ); Mon, 18 Nov 2019 22:14:26 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727014AbfKSDMz (ORCPT ); Mon, 18 Nov 2019 22:12:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105736" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. 
Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 12/19] x86/vmx: Introduce VMX_FEATURES_* Date: Mon, 18 Nov 2019 19:12:33 -0800 Message-Id: <20191119031240.7779-13-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add a VMX-specific variant of X86_FEATURE_* flags, which will eventually supplant the synthetic VMX flags defined in cpufeatures word 8. Use the Intel-defined layouts for the major VMX execution controls so that their word entries can be directly populated from their respective MSRs, and so that the VMX_FEATURE_* flags can be used to define the existing bit definitions in asm/vmx.h, i.e. force developers to define a VMX_FEATURE flag when adding support for a new hardware feature. The majority of Intel's (and compatible CPUs') VMX capabilities are enumerated via MSRs and not CPUID, i.e. querying /proc/cpuinfo doesn't naturally provide any insight into the virtualization capabilities of VMX-enabled CPUs. Commit e38e05a85828d ("x86: extended "flags" to show virtualization HW feature in /proc/cpuinfo") attempted to address the issue by synthesizing select VMX features into a Linux-defined word in cpufeatures. The synthetic cpufeatures approach has several flaws: - The set of synthesized VMX flags has become extremely stale with respect to the full set of VMX features, e.g. only one new flag (EPT A/D) has been added in the decade since the introduction of the synthetic VMX features. Failure to keep the VMX flags up to date is likely due to the lack of a mechanism that forces developers to consider whether or not a new feature is worth reporting. - The synthetic flags may be misinterpreted as affecting kernel behavior, i.e. KVM, the kernel's sole consumer of VMX, completely ignores the synthetic flags. - New CPU vendors that support VMX have duplicated the hideous code that propagates VMX features from MSRs to cpufeatures. Bringing the synthetic VMX flags up to date would exacerbate the copy+paste trainwreck. Define separate VMX_FEATURE flags to set the stage for enumerating VMX capabilities outside of the cpu_has() framework, and for adding functional usage of VMX_FEATURE_* to help ensure the features reported via /proc/cpuinfo are up to date with respect to kernel recognition of VMX capabilities. Note, the displayed names 'vnmi', 'tpr_shadow' and 'flexpriority' are retained for backwards compatibility with the existing ABI.
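To make the encoding concrete, a minimal sketch (illustration only, not part of the patch; the helper name is made up): like X86_FEATURE_*, each VMX_FEATURE_* value packs a (word, bit) pair as "word * 32 + bit", e.g. VMX_FEATURE_EPT == 2*32 + 1 lands in word 2 (the Secondary Processor-Based VM-Execution Controls) at bit 1.

  /* Hypothetical helper, for illustration only. */
  static inline void vmx_feature_decode(unsigned int feature,
					unsigned int *word, unsigned int *bit)
  {
	*word = feature / 32;	/* which 32-bit capability word */
	*bit  = feature & 0x1f;	/* bit position within that word */
  }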
Signed-off-by: Sean Christopherson --- MAINTAINERS | 2 +- arch/x86/include/asm/processor.h | 1 + arch/x86/include/asm/vmxfeatures.h | 81 ++++++++++++++++++++++++++++++ 3 files changed, 83 insertions(+), 1 deletion(-) create mode 100644 arch/x86/include/asm/vmxfeatures.h diff --git a/MAINTAINERS b/MAINTAINERS index df711965c377..6b736e78ee9e 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9009,7 +9009,7 @@ F: arch/x86/include/uapi/asm/svm.h F: arch/x86/include/asm/kvm* F: arch/x86/include/asm/pvclock-abi.h F: arch/x86/include/asm/svm.h -F: arch/x86/include/asm/vmx.h +F: arch/x86/include/asm/vmx*.h F: arch/x86/kernel/kvm.c F: arch/x86/kernel/kvmclock.c diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index b4e29d8b9e5a..772de8917430 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -25,6 +25,7 @@ struct vm86; #include #include #include +#include #include #include diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h new file mode 100644 index 000000000000..aea39b9f1587 --- /dev/null +++ b/arch/x86/include/asm/vmxfeatures.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_VMXFEATURES_H +#define _ASM_X86_VMXFEATURES_H + +/* + * Note: If the comment begins with a quoted string, that string is used + * in /proc/cpuinfo instead of the macro name. If the string is "", + * this feature bit is not displayed in /proc/cpuinfo at all. + */ + +/* Pin-Based VM-Execution Controls, EPT/VPID, APIC and VM-Functions, word 0 */ +#define VMX_FEATURE_INTR_EXITING ( 0*32+ 0) /* "" VM-Exit on vectored interrupts */ +#define VMX_FEATURE_NMI_EXITING ( 0*32+ 3) /* "" VM-Exit on NMIs */ +#define VMX_FEATURE_VIRTUAL_NMIS ( 0*32+ 5) /* "vnmi" NMI virtualization */ +#define VMX_FEATURE_PREEMPTION_TIMER ( 0*32+ 6) /* VMX Preemption Timer */ +#define VMX_FEATURE_POSTED_INTR ( 0*32+ 7) /* Posted Interrupts */ + +/* EPT/VPID features, scattered to bits 16-23 */ +#define VMX_FEATURE_INVVPID ( 0*32+ 16) /* INVVPID is supported */ +#define VMX_FEATURE_EPT_EXECUTE_ONLY ( 0*32+ 17) /* "ept_x_only" EPT entries can be execute only */ +#define VMX_FEATURE_EPT_AD ( 0*32+ 18) /* EPT Accessed/Dirty bits */ +#define VMX_FEATURE_EPT_1GB ( 0*32+ 19) /* 1GB EPT pages */ + +/* Aggregated APIC features 24-27 */ +#define VMX_FEATURE_FLEXPRIORITY ( 0*32+ 24) /* TPR shadow + virt APIC */ +#define VMX_FEATURE_APICV ( 0*32+ 25) /* TPR shadow + APIC reg virt + virt intr delivery + posted interrupts */ + +/* VM-Functions, shifted to bits 28-31 */ +#define VMX_FEATURE_EPTP_SWITCHING ( 0*32+ 28) /* EPTP switching (in guest) */ + +/* Primary Processor-Based VM-Execution Controls, word 1 */ +#define VMX_FEATURE_VIRTUAL_INTR_PENDING ( 1*32+ 2) /* "" VM-Exit if INTRs are unblocked in guest */ +#define VMX_FEATURE_TSC_OFFSETTING ( 1*32+ 3) /* Offset hardware TSC when read in guest */ +#define VMX_FEATURE_HLT_EXITING ( 1*32+ 7) /* "" VM-Exit on HLT */ +#define VMX_FEATURE_INVLPG_EXITING ( 1*32+ 9) /* "" VM-Exit on INVLPG */ +#define VMX_FEATURE_MWAIT_EXITING ( 1*32+ 10) /* "" VM-Exit on MWAIT */ +#define VMX_FEATURE_RDPMC_EXITING ( 1*32+ 11) /* "" VM-Exit on RDPMC */ +#define VMX_FEATURE_RDTSC_EXITING ( 1*32+ 12) /* "" VM-Exit on RDTSC */ +#define VMX_FEATURE_CR3_LOAD_EXITING ( 1*32+ 15) /* "" VM-Exit on writes to CR3 */ +#define VMX_FEATURE_CR3_STORE_EXITING ( 1*32+ 16) /* "" VM-Exit on reads from CR3 */ +#define VMX_FEATURE_CR8_LOAD_EXITING ( 1*32+ 19) /* "" VM-Exit on writes to CR8 */ +#define VMX_FEATURE_CR8_STORE_EXITING ( 
1*32+ 20) /* "" VM-Exit on reads from CR8 */ +#define VMX_FEATURE_VIRTUAL_TPR ( 1*32+ 21) /* "tpr_shadow" TPR virtualization */ +#define VMX_FEATURE_VIRTUAL_NMI_PENDING ( 1*32+ 22) /* "" VM-Exit if NMIs are unblocked in guest */ +#define VMX_FEATURE_MOV_DR_EXITING ( 1*32+ 23) /* "" VM-Exit on accesses to debug registers */ +#define VMX_FEATURE_UNCOND_IO_EXITING ( 1*32+ 24) /* "" VM-Exit on *all* IN{S} and OUT{S}*/ +#define VMX_FEATURE_USE_IO_BITMAPS ( 1*32+ 25) /* "" VM-Exit based on I/O port */ +#define VMX_FEATURE_MONITOR_TRAP_FLAG ( 1*32+ 27) /* "mtf" VMX single-step VM-Exits */ +#define VMX_FEATURE_USE_MSR_BITMAPS ( 1*32+ 28) /* "" VM-Exit based on MSR index */ +#define VMX_FEATURE_MONITOR_EXITING ( 1*32+ 29) /* "" VM-Exit on MONITOR (MWAIT's accomplice) */ +#define VMX_FEATURE_PAUSE_EXITING ( 1*32+ 30) /* "" VM-Exit on PAUSE (unconditionally) */ +#define VMX_FEATURE_SEC_CONTROLS ( 1*32+ 31) /* "" Enable Secondary VM-Execution Controls */ + +/* Secondary Processor-Based VM-Execution Controls, word 2 */ +#define VMX_FEATURE_VIRT_APIC_ACCESSES ( 2*32+ 0) /* Virtualize memory mapped APIC accesses */ +#define VMX_FEATURE_EPT ( 2*32+ 1) /* Extended Page Tables, a.k.a. Two-Dimensional Paging */ +#define VMX_FEATURE_DESC_EXITING ( 2*32+ 2) /* "" VM-Exit on {S,L}*DT instructions */ +#define VMX_FEATURE_RDTSCP ( 2*32+ 3) /* "" Enable RDTSCP in guest */ +#define VMX_FEATURE_VIRTUAL_X2APIC ( 2*32+ 4) /* "" Virtualize X2APIC for the guest */ +#define VMX_FEATURE_VPID ( 2*32+ 5) /* Virtual Processor ID (TLB ASID modifier) */ +#define VMX_FEATURE_WBINVD_EXITING ( 2*32+ 6) /* "" VM-Exit on WBINVD */ +#define VMX_FEATURE_UNRESTRICTED_GUEST ( 2*32+ 7) /* Allow Big Real Mode and other "invalid" states */ +#define VMX_FEATURE_APIC_REGISTER_VIRT ( 2*32+ 8) /* Hardware emulation of reads to the virtual-APIC */ +#define VMX_FEATURE_VIRT_INTR_DELIVERY ( 2*32+ 9) /* Evaluation and delivery of pending virtual interrupts */ +#define VMX_FEATURE_PAUSE_LOOP_EXITING ( 2*32+ 10) /* "ple" Conditionally VM-Exit on PAUSE at CPL0 */ +#define VMX_FEATURE_RDRAND_EXITING ( 2*32+ 11) /* "" VM-Exit on RDRAND*/ +#define VMX_FEATURE_INVPCID ( 2*32+ 12) /* "" Enable INVPCID in guest */ +#define VMX_FEATURE_VMFUNC ( 2*32+ 13) /* "" Enable VM-Functions (leaf dependent) */ +#define VMX_FEATURE_SHADOW_VMCS ( 2*32+ 14) /* VMREAD/VMWRITE in guest can access shadow VMCS */ +#define VMX_FEATURE_ENCLS_EXITING ( 2*32+ 15) /* "" VM-Exit on ENCLS (leaf dependent) */ +#define VMX_FEATURE_RDSEED_EXITING ( 2*32+ 16) /* "" VM-Exit on RDSEED */ +#define VMX_FEATURE_PAGE_MOD_LOGGING ( 2*32+ 17) /* "pml" Log dirty pages into buffer */ +#define VMX_FEATURE_EPT_VIOLATION_VE ( 2*32+ 18) /* "" Conditionally reflect EPT violations as #VE exceptions */ +#define VMX_FEATURE_PT_CONCEAL_VMX ( 2*32+ 19) /* "" Suppress VMX indicators in Processor Trace */ +#define VMX_FEATURE_XSAVES ( 2*32+ 20) /* "" Enable XSAVES and XRSTORS in guest */ +#define VMX_FEATURE_MODE_BASED_EPT_EXEC ( 2*32+ 22) /* Enable separate EPT EXEC bits for supervisor vs. 
user */ +#define VMX_FEATURE_PT_USE_GPA ( 2*32+ 24) /* "" Processor Trace logs GPAs */ +#define VMX_FEATURE_TSC_SCALING ( 2*32+ 25) /* Scale hardware TSC when read in guest */ +#define VMX_FEATURE_ENCLV_EXITING ( 2*32+ 28) /* "" VM-Exit on ENCLV (leaf dependent) */ + +#endif /* _ASM_X86_VMXFEATURES_H */ From patchwork Tue Nov 19 03:12:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250781 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4FB421871 for ; Tue, 19 Nov 2019 03:14:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 390682231A for ; Tue, 19 Nov 2019 03:14:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727908AbfKSDO0 (ORCPT ); Mon, 18 Nov 2019 22:14:26 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727578AbfKSDMz (ORCPT ); Mon, 18 Nov 2019 22:12:55 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105739" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 13/19] x86/cpu: Detect VMX features on Intel, Centaur and Zhaoxin CPUs Date: Mon, 18 Nov 2019 19:12:34 -0800 Message-Id: <20191119031240.7779-14-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add an entry in struct cpuinfo_x86 to track VMX capabilities and fill the capabilities during IA32_FEATURE_CONTROL MSR initialization. Make the VMX capabilities dependent on X86_FEATURE_CONTROL_MSR and X86_FEATURE_NAMES so as to avoid unnecessary overhead on CPUs that can't possibly support VMX, or when /proc/cpuinfo is not available. 
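As a rough usage sketch (an assumption, not code from this series; cpu_has_vmx_ept() is a hypothetical name, and the vmx_capability field only exists when CONFIG_X86_VMX_FEATURE_NAMES=y), a consumer outside the cpu_has() framework could test a decoded control such as EPT along these lines:

  #include <linux/bits.h>
  #include <asm/processor.h>
  #include <asm/vmxfeatures.h>

  /* Hypothetical helper, for illustration only. */
  static inline bool cpu_has_vmx_ept(const struct cpuinfo_x86 *c)
  {
	/* VMX_FEATURE_EPT == 2*32 + 1: word 2 of vmx_capability[], bit 1 */
	return c->vmx_capability[VMX_FEATURE_EPT / 32] &
	       BIT(VMX_FEATURE_EPT & 0x1f);
  }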
Signed-off-by: Sean Christopherson --- arch/x86/Kconfig.cpu | 4 ++ arch/x86/include/asm/processor.h | 3 ++ arch/x86/include/asm/vmxfeatures.h | 5 ++ arch/x86/kernel/cpu/common.c | 3 ++ arch/x86/kernel/cpu/feature_control.c | 70 +++++++++++++++++++++++++++ 5 files changed, 85 insertions(+) diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index e78f39adae7b..e7571bd0f515 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -391,6 +391,10 @@ config X86_FEATURE_CONTROL_MSR def_bool y depends on CPU_SUP_INTEL || CPU_SUP_CENTAUR || CPU_SUP_ZHAOXIN +config X86_VMX_FEATURE_NAMES + def_bool y + depends on X86_FEATURE_CONTROL_MSR && X86_FEATURE_NAMES + menuconfig PROCESSOR_SELECT bool "Supported processor vendors" if EXPERT ---help--- diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 772de8917430..5b27877c4477 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -85,6 +85,9 @@ struct cpuinfo_x86 { #ifdef CONFIG_X86_64 /* Number of 4K pages in DTLB/ITLB combined(in pages): */ int x86_tlbsize; +#endif +#ifdef CONFIG_X86_VMX_FEATURE_NAMES + __u32 vmx_capability[NVMXINTS]; #endif __u8 x86_virt_bits; __u8 x86_phys_bits; diff --git a/arch/x86/include/asm/vmxfeatures.h b/arch/x86/include/asm/vmxfeatures.h index aea39b9f1587..bfc96d4049a7 100644 --- a/arch/x86/include/asm/vmxfeatures.h +++ b/arch/x86/include/asm/vmxfeatures.h @@ -2,6 +2,11 @@ #ifndef _ASM_X86_VMXFEATURES_H #define _ASM_X86_VMXFEATURES_H +/* + * Defines VMX CPU feature bits + */ +#define NVMXINTS 3 /* N 32-bit words worth of info */ + /* * Note: If the comment begins with a quoted string, that string is used * in /proc/cpuinfo instead of the macro name. If the string is "", diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index baa2fed8deb6..f0aff0ab4ef5 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1448,6 +1448,9 @@ static void identify_cpu(struct cpuinfo_x86 *c) #endif c->x86_cache_alignment = c->x86_clflush_size; memset(&c->x86_capability, 0, sizeof(c->x86_capability)); +#ifdef CONFIG_X86_VMX_FEATURE_NAMES + memset(&c->vmx_capability, 0, sizeof(c->vmx_capability)); +#endif generic_identify(c); diff --git a/arch/x86/kernel/cpu/feature_control.c b/arch/x86/kernel/cpu/feature_control.c index 2bd1a9e6021a..d49c42e1166c 100644 --- a/arch/x86/kernel/cpu/feature_control.c +++ b/arch/x86/kernel/cpu/feature_control.c @@ -4,6 +4,72 @@ #include #include #include +#include + +#ifdef CONFIG_X86_VMX_FEATURE_NAMES +enum vmx_feature_leafs { + MISC_FEATURES = 0, + PRIMARY_PROC_CTLS, + SECONDARY_PROC_CTLS, + NR_VMX_FEATURE_WORDS, +}; + +#define F(x) BIT(VMX_FEATURE_##x & 0x1f) + +static void init_vmx_capabilities(struct cpuinfo_x86 *c) +{ + u32 supported, funcs, ept, vpid, ign; + + BUILD_BUG_ON(NVMXINTS != NR_VMX_FEATURE_WORDS); + + /* + * The high bits contain the allowed-1 settings, i.e. features that can + * be turned on. The low bits contain the allowed-0 settings, i.e. + * features that can be turned off. Ignore the allowed-0 settings, + * if a feature can be turned on then it's supported. 
+ */ + rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, ign, supported); + c->vmx_capability[PRIMARY_PROC_CTLS] = supported; + + rdmsr_safe(MSR_IA32_VMX_PROCBASED_CTLS2, &ign, &supported); + c->vmx_capability[SECONDARY_PROC_CTLS] = supported; + + rdmsr(MSR_IA32_VMX_PINBASED_CTLS, ign, supported); + rdmsr_safe(MSR_IA32_VMX_VMFUNC, &ign, &funcs); + + /* + * Except for EPT+VPID, which enumerates support for both in a single + * MSR, low for EPT, high for VPID. + */ + rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, &ept, &vpid); + + /* Pin, EPT, VPID and VM-Func are merged into a single word. */ + WARN_ON_ONCE(supported >> 16); + WARN_ON_ONCE(funcs >> 4); + c->vmx_capability[MISC_FEATURES] = (supported & 0xffff) | + ((vpid & 0x1) << 16) | + ((funcs & 0xf) << 28); + + /* EPT bits are full on scattered and must be manually handled. */ + if (ept & VMX_EPT_EXECUTE_ONLY_BIT) + c->vmx_capability[MISC_FEATURES] |= F(EPT_EXECUTE_ONLY); + if (ept & VMX_EPT_AD_BIT) + c->vmx_capability[MISC_FEATURES] |= F(EPT_AD); + if (ept & VMX_EPT_1GB_PAGE_BIT) + c->vmx_capability[MISC_FEATURES] |= F(EPT_1GB); + + /* Synthetic APIC features that are aggregates of multiple features. */ + if ((c->vmx_capability[PRIMARY_PROC_CTLS] & F(VIRTUAL_TPR)) && + (c->vmx_capability[SECONDARY_PROC_CTLS] & F(VIRT_APIC_ACCESSES))) + c->vmx_capability[MISC_FEATURES] |= F(FLEXPRIORITY); + + if ((c->vmx_capability[PRIMARY_PROC_CTLS] & F(VIRTUAL_TPR)) && + (c->vmx_capability[SECONDARY_PROC_CTLS] & F(APIC_REGISTER_VIRT)) && + (c->vmx_capability[SECONDARY_PROC_CTLS] & F(VIRT_INTR_DELIVERY)) && + (c->vmx_capability[MISC_FEATURES] & F(POSTED_INTR))) + c->vmx_capability[MISC_FEATURES] |= F(APICV); +} +#endif /* CONFIG_X86_VMX_FEATURE_NAMES */ #undef pr_fmt #define pr_fmt(fmt) "x86/cpu: " fmt @@ -53,5 +119,9 @@ void init_feature_control_msr(struct cpuinfo_x86 *c) pr_err_once("VMX (%s TXT) disabled by BIOS\n", tboot ? "inside" : "outside"); clear_cpu_cap(c, X86_FEATURE_VMX); + } else { +#ifdef CONFIG_X86_VMX_FEATURE_NAMES + init_vmx_capabilities(c); +#endif } } From patchwork Tue Nov 19 03:12:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250725 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E4F9A13A4 for ; Tue, 19 Nov 2019 03:13:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id BAA58214DE for ; Tue, 19 Nov 2019 03:13:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727218AbfKSDNp (ORCPT ); Mon, 18 Nov 2019 22:13:45 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727635AbfKSDM6 (ORCPT ); Mon, 18 Nov 2019 22:12:58 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105742" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. 
Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 14/19] x86/cpu: Print VMX flags in /proc/cpuinfo using VMX_FEATURES_* Date: Mon, 18 Nov 2019 19:12:35 -0800 Message-Id: <20191119031240.7779-15-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add support for generating VMX feature names in capflags.c and use the resulting x86_vmx_flags to print the VMX flags in /proc/cpuinfo. Remove all code which sets the synthetic VMX flags in cpufeatures so that "flags" doesn't contain duplicate VMX features. The synthetic flags themselves will be removed shortly. Do not print VMX flags if no bits are set in word 0, which includes Pin Controls. INTR and NMI exiting are fundamental pillars of VMX, if they are not supported then the CPU is broken, it does not actually support VMX, or the kernel wasn't built with support for the target CPU. Note, ideally VMX flags would be a separate line in /proc/cpuinfo, e.g. as "vmx_flags", but doing so would break the ABI with respect to the existing synthetic VMX flags in cpufeatures. Signed-off-by: Sean Christopherson --- arch/x86/boot/mkcpustr.c | 1 + arch/x86/kernel/cpu/Makefile | 5 ++-- arch/x86/kernel/cpu/centaur.c | 35 ---------------------- arch/x86/kernel/cpu/intel.c | 49 ------------------------------- arch/x86/kernel/cpu/mkcapflags.sh | 15 +++++++--- arch/x86/kernel/cpu/proc.c | 14 +++++++++ arch/x86/kernel/cpu/zhaoxin.c | 35 ---------------------- 7 files changed, 29 insertions(+), 125 deletions(-) diff --git a/arch/x86/boot/mkcpustr.c b/arch/x86/boot/mkcpustr.c index 9caa10e82217..da0ccc5de538 100644 --- a/arch/x86/boot/mkcpustr.c +++ b/arch/x86/boot/mkcpustr.c @@ -15,6 +15,7 @@ #include "../include/asm/required-features.h" #include "../include/asm/disabled-features.h" #include "../include/asm/cpufeatures.h" +#include "../include/asm/vmxfeatures.h" #include "../kernel/cpu/capflags.c" int main(void) diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 84e35e762f76..fae9448678f3 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -54,11 +54,12 @@ obj-$(CONFIG_ACRN_GUEST) += acrn.o ifdef CONFIG_X86_FEATURE_NAMES quiet_cmd_mkcapflags = MKCAP $@ - cmd_mkcapflags = $(CONFIG_SHELL) $(srctree)/$(src)/mkcapflags.sh $< $@ + cmd_mkcapflags = $(CONFIG_SHELL) $(srctree)/$(src)/mkcapflags.sh $@ $^ cpufeature = $(src)/../../include/asm/cpufeatures.h +vmxfeature = $(src)/../../include/asm/vmxfeatures.h -$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.sh FORCE +$(obj)/capflags.c: $(cpufeature) $(vmxfeature) $(src)/mkcapflags.sh FORCE $(call if_changed,mkcapflags) endif targets += capflags.c diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c index a6ca4c31c1b6..be11c796926b 100644 --- a/arch/x86/kernel/cpu/centaur.c +++ b/arch/x86/kernel/cpu/centaur.c @@ -18,13 +18,6 @@ #define RNG_ENABLED (1 << 3) 
#define RNG_ENABLE (1 << 6) /* MSR_VIA_RNG */ -#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000 -#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000 -#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000 -#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001 -#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002 -#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020 - static void init_c3(struct cpuinfo_x86 *c) { u32 lo, hi; @@ -119,31 +112,6 @@ static void early_init_centaur(struct cpuinfo_x86 *c) } } -static void centaur_detect_vmx_virtcap(struct cpuinfo_x86 *c) -{ - u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2; - - rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high); - msr_ctl = vmx_msr_high | vmx_msr_low; - - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW) - set_cpu_cap(c, X86_FEATURE_TPR_SHADOW); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI) - set_cpu_cap(c, X86_FEATURE_VNMI); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) { - rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, - vmx_msr_low, vmx_msr_high); - msr_ctl2 = vmx_msr_high | vmx_msr_low; - if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) && - (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)) - set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) - set_cpu_cap(c, X86_FEATURE_EPT); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID) - set_cpu_cap(c, X86_FEATURE_VPID); - } -} - static void init_centaur(struct cpuinfo_x86 *c) { #ifdef CONFIG_X86_32 @@ -251,9 +219,6 @@ static void init_centaur(struct cpuinfo_x86 *c) #endif init_feature_control_msr(c); - - if (cpu_has(c, X86_FEATURE_VMX)) - centaur_detect_vmx_virtcap(c); } #ifdef CONFIG_X86_32 diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index b7c6ed0b40b6..3d4b3fc6bd4b 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -494,52 +494,6 @@ static void srat_detect_node(struct cpuinfo_x86 *c) #endif } -static void detect_vmx_virtcap(struct cpuinfo_x86 *c) -{ - /* Intel VMX MSR indicated features */ -#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000 -#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000 -#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000 -#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001 -#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002 -#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020 -#define x86_VMX_FEATURE_EPT_CAP_AD 0x00200000 - - u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2; - u32 msr_vpid_cap, msr_ept_cap; - - clear_cpu_cap(c, X86_FEATURE_TPR_SHADOW); - clear_cpu_cap(c, X86_FEATURE_VNMI); - clear_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); - clear_cpu_cap(c, X86_FEATURE_EPT); - clear_cpu_cap(c, X86_FEATURE_VPID); - clear_cpu_cap(c, X86_FEATURE_EPT_AD); - - rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high); - msr_ctl = vmx_msr_high | vmx_msr_low; - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW) - set_cpu_cap(c, X86_FEATURE_TPR_SHADOW); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI) - set_cpu_cap(c, X86_FEATURE_VNMI); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) { - rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, - vmx_msr_low, vmx_msr_high); - msr_ctl2 = vmx_msr_high | vmx_msr_low; - if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) && - (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)) - set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) { - set_cpu_cap(c, X86_FEATURE_EPT); - rdmsr(MSR_IA32_VMX_EPT_VPID_CAP, - msr_ept_cap, msr_vpid_cap); - if (msr_ept_cap & x86_VMX_FEATURE_EPT_CAP_AD) - 
set_cpu_cap(c, X86_FEATURE_EPT_AD); - } - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID) - set_cpu_cap(c, X86_FEATURE_VPID); - } -} - #define MSR_IA32_TME_ACTIVATE 0x982 /* Helpers to access TME_ACTIVATE MSR */ @@ -757,9 +711,6 @@ static void init_intel(struct cpuinfo_x86 *c) init_feature_control_msr(c); - if (cpu_has(c, X86_FEATURE_VMX)) - detect_vmx_virtcap(c); - if (cpu_has(c, X86_FEATURE_TME)) detect_tme(c); diff --git a/arch/x86/kernel/cpu/mkcapflags.sh b/arch/x86/kernel/cpu/mkcapflags.sh index aed45b8895d5..1db560ed2ca3 100644 --- a/arch/x86/kernel/cpu/mkcapflags.sh +++ b/arch/x86/kernel/cpu/mkcapflags.sh @@ -6,8 +6,7 @@ set -e -IN=$1 -OUT=$2 +OUT=$1 dump_array() { @@ -15,6 +14,7 @@ dump_array() SIZE=$2 PFX=$3 POSTFIX=$4 + IN=$5 PFX_SZ=$(echo $PFX | wc -c) TABS="$(printf '\t\t\t\t\t')" @@ -57,11 +57,18 @@ trap 'rm "$OUT"' EXIT echo "#endif" echo "" - dump_array "x86_cap_flags" "NCAPINTS*32" "X86_FEATURE_" "" + dump_array "x86_cap_flags" "NCAPINTS*32" "X86_FEATURE_" "" $2 echo "" - dump_array "x86_bug_flags" "NBUGINTS*32" "X86_BUG_" "NCAPINTS*32" + dump_array "x86_bug_flags" "NBUGINTS*32" "X86_BUG_" "NCAPINTS*32" $2 + echo "" + echo "#ifdef CONFIG_X86_VMX_FEATURE_NAMES" + echo "#ifndef _ASM_X86_VMXFEATURES_H" + echo "#include " + echo "#endif" + dump_array "x86_vmx_flags" "NVMXINTS*32" "VMX_FEATURE_" "" $3 + echo "#endif /* CONFIG_X86_VMX_FEATURE_NAMES */" ) > $OUT trap - EXIT diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c index cb2e49810d68..8f118111279a 100644 --- a/arch/x86/kernel/cpu/proc.c +++ b/arch/x86/kernel/cpu/proc.c @@ -7,6 +7,10 @@ #include "cpu.h" +#ifdef CONFIG_X86_VMX_FEATURE_NAMES +extern const char * const x86_vmx_flags[NVMXINTS*32]; +#endif + /* * Get CPU information for use by the procfs. */ @@ -102,6 +106,16 @@ static int show_cpuinfo(struct seq_file *m, void *v) if (cpu_has(c, i) && x86_cap_flags[i] != NULL) seq_printf(m, " %s", x86_cap_flags[i]); +#ifdef CONFIG_X86_VMX_FEATURE_NAMES + if (cpu_has(c, X86_FEATURE_VMX) && c->vmx_capability[0]) { + for (i = 0; i < 32*NVMXINTS; i++) { + if (test_bit(i, (unsigned long *)c->vmx_capability) && + x86_vmx_flags[i] != NULL) + seq_printf(m, " %s", x86_vmx_flags[i]); + } + } +#endif + seq_puts(m, "\nbugs\t\t:"); for (i = 0; i < 32*NBUGINTS; i++) { unsigned int bug_bit = 32*NCAPINTS + i; diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c index 01b05a4a5a85..edfc7cc4ec33 100644 --- a/arch/x86/kernel/cpu/zhaoxin.c +++ b/arch/x86/kernel/cpu/zhaoxin.c @@ -16,13 +16,6 @@ #define RNG_ENABLED (1 << 3) #define RNG_ENABLE (1 << 8) /* MSR_ZHAOXIN_RNG */ -#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW 0x00200000 -#define X86_VMX_FEATURE_PROC_CTLS_VNMI 0x00400000 -#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS 0x80000000 -#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC 0x00000001 -#define X86_VMX_FEATURE_PROC_CTLS2_EPT 0x00000002 -#define X86_VMX_FEATURE_PROC_CTLS2_VPID 0x00000020 - static void init_zhaoxin_cap(struct cpuinfo_x86 *c) { u32 lo, hi; @@ -89,31 +82,6 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c) } -static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c) -{ - u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2; - - rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high); - msr_ctl = vmx_msr_high | vmx_msr_low; - - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW) - set_cpu_cap(c, X86_FEATURE_TPR_SHADOW); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI) - set_cpu_cap(c, X86_FEATURE_VNMI); - if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) { - 
rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, - vmx_msr_low, vmx_msr_high); - msr_ctl2 = vmx_msr_high | vmx_msr_low; - if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) && - (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)) - set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT) - set_cpu_cap(c, X86_FEATURE_EPT); - if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID) - set_cpu_cap(c, X86_FEATURE_VPID); - } -} - static void init_zhaoxin(struct cpuinfo_x86 *c) { early_init_zhaoxin(c); @@ -142,9 +110,6 @@ static void init_zhaoxin(struct cpuinfo_x86 *c) #endif init_feature_control_msr(c); - - if (cpu_has(c, X86_FEATURE_VMX)) - zhaoxin_detect_vmx_virtcap(c); } #ifdef CONFIG_X86_32 From patchwork Tue Nov 19 03:12:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250739 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52E1A109A for ; Tue, 19 Nov 2019 03:14:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 3C03A21906 for ; Tue, 19 Nov 2019 03:14:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727833AbfKSDNx (ORCPT ); Mon, 18 Nov 2019 22:13:53 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727625AbfKSDM5 (ORCPT ); Mon, 18 Nov 2019 22:12:57 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105745" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 15/19] x86/cpufeatures: Drop synthetic VMX feature flags Date: Mon, 18 Nov 2019 19:12:36 -0800 Message-Id: <20191119031240.7779-16-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Remove the synthetic VMX feature flags from word 8 as they have been superseded by VMX_FEATURE_*. No functional change intended. 
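For illustration only (hypothetical helper, not code from this series): anything that previously keyed off a synthetic word 8 flag such as X86_FEATURE_VPID would now consult the decoded VMX capability words, using the same bitmap treatment that /proc/cpuinfo uses when printing them.

  /* Hypothetical helper, for illustration only. */
  static inline bool boot_cpu_has_vmx_vpid(void)
  {
	/* Treat the u32 vmx_capability[] array as a bitmap, as proc.c does. */
	return test_bit(VMX_FEATURE_VPID,
			(unsigned long *)boot_cpu_data.vmx_capability);
  }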
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/cpufeatures.h | 7 ------- 1 file changed, 7 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index e9b62498fe75..8d6c3bc128e2 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -222,15 +222,8 @@ #define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */ /* Virtualization flags: Linux defined, word 8 */ -#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */ -#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */ -#define X86_FEATURE_FLEXPRIORITY ( 8*32+ 2) /* Intel FlexPriority */ -#define X86_FEATURE_EPT ( 8*32+ 3) /* Intel Extended Page Table */ -#define X86_FEATURE_VPID ( 8*32+ 4) /* Intel Virtual Processor ID */ - #define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */ #define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */ -#define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ From patchwork Tue Nov 19 03:12:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250693 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B974513A4 for ; Tue, 19 Nov 2019 03:13:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A3A4922360 for ; Tue, 19 Nov 2019 03:13:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727664AbfKSDM7 (ORCPT ); Mon, 18 Nov 2019 22:12:59 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727634AbfKSDM6 (ORCPT ); Mon, 18 Nov 2019 22:12:58 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105748" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. 
Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 16/19] KVM: VMX: Use VMX_FEATURE_* flags to define VMCS control bits Date: Mon, 18 Nov 2019 19:12:37 -0800 Message-Id: <20191119031240.7779-17-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Define the VMCS execution control flags (consumed by KVM) using their associated VMX_FEATURE_* to provide a strong hint that new VMX features are expected to be added to VMX_FEATURE and considered for reporting via /proc/cpuinfo. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/vmx.h | 105 +++++++++++++++++++------------------ 1 file changed, 55 insertions(+), 50 deletions(-) diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h index 1835767aa335..9fbba31be825 100644 --- a/arch/x86/include/asm/vmx.h +++ b/arch/x86/include/asm/vmx.h @@ -15,67 +15,70 @@ #include #include #include +#include + +#define VMCS_CONTROL_BIT(x) BIT(VMX_FEATURE_##x & 0x1f) /* * Definitions of Primary Processor-Based VM-Execution Controls. */ -#define CPU_BASED_VIRTUAL_INTR_PENDING 0x00000004 -#define CPU_BASED_USE_TSC_OFFSETING 0x00000008 -#define CPU_BASED_HLT_EXITING 0x00000080 -#define CPU_BASED_INVLPG_EXITING 0x00000200 -#define CPU_BASED_MWAIT_EXITING 0x00000400 -#define CPU_BASED_RDPMC_EXITING 0x00000800 -#define CPU_BASED_RDTSC_EXITING 0x00001000 -#define CPU_BASED_CR3_LOAD_EXITING 0x00008000 -#define CPU_BASED_CR3_STORE_EXITING 0x00010000 -#define CPU_BASED_CR8_LOAD_EXITING 0x00080000 -#define CPU_BASED_CR8_STORE_EXITING 0x00100000 -#define CPU_BASED_TPR_SHADOW 0x00200000 -#define CPU_BASED_VIRTUAL_NMI_PENDING 0x00400000 -#define CPU_BASED_MOV_DR_EXITING 0x00800000 -#define CPU_BASED_UNCOND_IO_EXITING 0x01000000 -#define CPU_BASED_USE_IO_BITMAPS 0x02000000 -#define CPU_BASED_MONITOR_TRAP_FLAG 0x08000000 -#define CPU_BASED_USE_MSR_BITMAPS 0x10000000 -#define CPU_BASED_MONITOR_EXITING 0x20000000 -#define CPU_BASED_PAUSE_EXITING 0x40000000 -#define CPU_BASED_ACTIVATE_SECONDARY_CONTROLS 0x80000000 +#define CPU_BASED_VIRTUAL_INTR_PENDING VMCS_CONTROL_BIT(VIRTUAL_INTR_PENDING) +#define CPU_BASED_USE_TSC_OFFSETING VMCS_CONTROL_BIT(TSC_OFFSETTING) +#define CPU_BASED_HLT_EXITING VMCS_CONTROL_BIT(HLT_EXITING) +#define CPU_BASED_INVLPG_EXITING VMCS_CONTROL_BIT(INVLPG_EXITING) +#define CPU_BASED_MWAIT_EXITING VMCS_CONTROL_BIT(MWAIT_EXITING) +#define CPU_BASED_RDPMC_EXITING VMCS_CONTROL_BIT(RDPMC_EXITING) +#define CPU_BASED_RDTSC_EXITING VMCS_CONTROL_BIT(RDTSC_EXITING) +#define CPU_BASED_CR3_LOAD_EXITING VMCS_CONTROL_BIT(CR3_LOAD_EXITING) +#define CPU_BASED_CR3_STORE_EXITING VMCS_CONTROL_BIT(CR3_STORE_EXITING) +#define CPU_BASED_CR8_LOAD_EXITING VMCS_CONTROL_BIT(CR8_LOAD_EXITING) +#define CPU_BASED_CR8_STORE_EXITING VMCS_CONTROL_BIT(CR8_STORE_EXITING) +#define CPU_BASED_TPR_SHADOW VMCS_CONTROL_BIT(VIRTUAL_TPR) +#define 
CPU_BASED_VIRTUAL_NMI_PENDING VMCS_CONTROL_BIT(VIRTUAL_NMI_PENDING) +#define CPU_BASED_MOV_DR_EXITING VMCS_CONTROL_BIT(MOV_DR_EXITING) +#define CPU_BASED_UNCOND_IO_EXITING VMCS_CONTROL_BIT(UNCOND_IO_EXITING) +#define CPU_BASED_USE_IO_BITMAPS VMCS_CONTROL_BIT(USE_IO_BITMAPS) +#define CPU_BASED_MONITOR_TRAP_FLAG VMCS_CONTROL_BIT(MONITOR_TRAP_FLAG) +#define CPU_BASED_USE_MSR_BITMAPS VMCS_CONTROL_BIT(USE_MSR_BITMAPS) +#define CPU_BASED_MONITOR_EXITING VMCS_CONTROL_BIT(MONITOR_EXITING) +#define CPU_BASED_PAUSE_EXITING VMCS_CONTROL_BIT(PAUSE_EXITING) +#define CPU_BASED_ACTIVATE_SECONDARY_CONTROLS VMCS_CONTROL_BIT(SEC_CONTROLS) #define CPU_BASED_ALWAYSON_WITHOUT_TRUE_MSR 0x0401e172 /* * Definitions of Secondary Processor-Based VM-Execution Controls. */ -#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001 -#define SECONDARY_EXEC_ENABLE_EPT 0x00000002 -#define SECONDARY_EXEC_DESC 0x00000004 -#define SECONDARY_EXEC_RDTSCP 0x00000008 -#define SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE 0x00000010 -#define SECONDARY_EXEC_ENABLE_VPID 0x00000020 -#define SECONDARY_EXEC_WBINVD_EXITING 0x00000040 -#define SECONDARY_EXEC_UNRESTRICTED_GUEST 0x00000080 -#define SECONDARY_EXEC_APIC_REGISTER_VIRT 0x00000100 -#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY 0x00000200 -#define SECONDARY_EXEC_PAUSE_LOOP_EXITING 0x00000400 -#define SECONDARY_EXEC_RDRAND_EXITING 0x00000800 -#define SECONDARY_EXEC_ENABLE_INVPCID 0x00001000 -#define SECONDARY_EXEC_ENABLE_VMFUNC 0x00002000 -#define SECONDARY_EXEC_SHADOW_VMCS 0x00004000 -#define SECONDARY_EXEC_ENCLS_EXITING 0x00008000 -#define SECONDARY_EXEC_RDSEED_EXITING 0x00010000 -#define SECONDARY_EXEC_ENABLE_PML 0x00020000 -#define SECONDARY_EXEC_PT_CONCEAL_VMX 0x00080000 -#define SECONDARY_EXEC_XSAVES 0x00100000 -#define SECONDARY_EXEC_PT_USE_GPA 0x01000000 -#define SECONDARY_EXEC_MODE_BASED_EPT_EXEC 0x00400000 -#define SECONDARY_EXEC_TSC_SCALING 0x02000000 +#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES VMCS_CONTROL_BIT(VIRT_APIC_ACCESSES) +#define SECONDARY_EXEC_ENABLE_EPT VMCS_CONTROL_BIT(EPT) +#define SECONDARY_EXEC_DESC VMCS_CONTROL_BIT(DESC_EXITING) +#define SECONDARY_EXEC_RDTSCP VMCS_CONTROL_BIT(RDTSCP) +#define SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE VMCS_CONTROL_BIT(VIRTUAL_X2APIC) +#define SECONDARY_EXEC_ENABLE_VPID VMCS_CONTROL_BIT(VPID) +#define SECONDARY_EXEC_WBINVD_EXITING VMCS_CONTROL_BIT(WBINVD_EXITING) +#define SECONDARY_EXEC_UNRESTRICTED_GUEST VMCS_CONTROL_BIT(UNRESTRICTED_GUEST) +#define SECONDARY_EXEC_APIC_REGISTER_VIRT VMCS_CONTROL_BIT(APIC_REGISTER_VIRT) +#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY VMCS_CONTROL_BIT(VIRT_INTR_DELIVERY) +#define SECONDARY_EXEC_PAUSE_LOOP_EXITING VMCS_CONTROL_BIT(PAUSE_LOOP_EXITING) +#define SECONDARY_EXEC_RDRAND_EXITING VMCS_CONTROL_BIT(RDRAND_EXITING) +#define SECONDARY_EXEC_ENABLE_INVPCID VMCS_CONTROL_BIT(INVPCID) +#define SECONDARY_EXEC_ENABLE_VMFUNC VMCS_CONTROL_BIT(VMFUNC) +#define SECONDARY_EXEC_SHADOW_VMCS VMCS_CONTROL_BIT(SHADOW_VMCS) +#define SECONDARY_EXEC_ENCLS_EXITING VMCS_CONTROL_BIT(ENCLS_EXITING) +#define SECONDARY_EXEC_RDSEED_EXITING VMCS_CONTROL_BIT(RDSEED_EXITING) +#define SECONDARY_EXEC_ENABLE_PML VMCS_CONTROL_BIT(PAGE_MOD_LOGGING) +#define SECONDARY_EXEC_PT_CONCEAL_VMX VMCS_CONTROL_BIT(PT_CONCEAL_VMX) +#define SECONDARY_EXEC_XSAVES VMCS_CONTROL_BIT(XSAVES) +#define SECONDARY_EXEC_MODE_BASED_EPT_EXEC VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC) +#define SECONDARY_EXEC_PT_USE_GPA VMCS_CONTROL_BIT(PT_USE_GPA) +#define SECONDARY_EXEC_TSC_SCALING VMCS_CONTROL_BIT(TSC_SCALING) #define SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE 
0x04000000 -#define PIN_BASED_EXT_INTR_MASK 0x00000001 -#define PIN_BASED_NMI_EXITING 0x00000008 -#define PIN_BASED_VIRTUAL_NMIS 0x00000020 -#define PIN_BASED_VMX_PREEMPTION_TIMER 0x00000040 -#define PIN_BASED_POSTED_INTR 0x00000080 +#define PIN_BASED_EXT_INTR_MASK VMCS_CONTROL_BIT(INTR_EXITING) +#define PIN_BASED_NMI_EXITING VMCS_CONTROL_BIT(NMI_EXITING) +#define PIN_BASED_VIRTUAL_NMIS VMCS_CONTROL_BIT(VIRTUAL_NMIS) +#define PIN_BASED_VMX_PREEMPTION_TIMER VMCS_CONTROL_BIT(PREEMPTION_TIMER) +#define PIN_BASED_POSTED_INTR VMCS_CONTROL_BIT(POSTED_INTR) #define PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR 0x00000016 @@ -114,7 +117,9 @@ #define VMX_MISC_MSR_LIST_MULTIPLIER 512 /* VMFUNC functions */ -#define VMX_VMFUNC_EPTP_SWITCHING 0x00000001 +#define VMFUNC_CONTROL_BIT(x) BIT((VMX_FEATURE_##x & 0x1f) - 28) + +#define VMX_VMFUNC_EPTP_SWITCHING VMFUNC_CONTROL_BIT(EPTP_SWITCHING) #define VMFUNC_EPTP_ENTRIES 512 static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic) From patchwork Tue Nov 19 03:12:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250703 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9603914ED for ; Tue, 19 Nov 2019 03:13:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7F0E322363 for ; Tue, 19 Nov 2019 03:13:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727653AbfKSDNW (ORCPT ); Mon, 18 Nov 2019 22:13:22 -0500 Received: from mga02.intel.com ([134.134.136.20]:21257 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727658AbfKSDNA (ORCPT ); Mon, 18 Nov 2019 22:13:00 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105751" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:42 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 17/19] x86/cpufeatures: Clean up synthetic virtualization flags Date: Mon, 18 Nov 2019 19:12:38 -0800 Message-Id: <20191119031240.7779-18-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Shift the remaining synthetic virtualization flags so that the flags are contiguous starting from bit 0. No functional change intended. 
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/cpufeatures.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 8d6c3bc128e2..24ba7ea06e25 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -222,10 +222,10 @@ #define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */ /* Virtualization flags: Linux defined, word 8 */ -#define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */ -#define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */ -#define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ -#define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ +#define X86_FEATURE_VMMCALL ( 8*32+ 0) /* Prefer VMMCALL to VMCALL */ +#define X86_FEATURE_XENPV ( 8*32+ 1) /* "" Xen paravirtual guest */ +#define X86_FEATURE_VMCALL ( 8*32+ 2) /* "" Hypervisor supports the VMCALL instruction */ +#define X86_FEATURE_VMW_VMMCALL ( 8*32+ 3) /* "" VMware prefers VMMCALL hypercall instruction */ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ From patchwork Tue Nov 19 03:12:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250687 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9055713A4 for ; Tue, 19 Nov 2019 03:13:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7A7132235E for ; Tue, 19 Nov 2019 03:13:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727671AbfKSDNA (ORCPT ); Mon, 18 Nov 2019 22:13:00 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727659AbfKSDNA (ORCPT ); Mon, 18 Nov 2019 22:13:00 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105754" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:43 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. 
Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 18/19] perf/x86: Provide stubs of KVM helpers for non-Intel CPUs Date: Mon, 18 Nov 2019 19:12:39 -0800 Message-Id: <20191119031240.7779-19-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Provide stubs for perf_guest_get_msrs() and intel_pt_handle_vmx() when building without support for Intel CPUs, i.e. CPU_SUP_INTEL=n. Lack of stubs is not currently a problem as the only user, KVM_INTEL, takes a dependency on CPU_SUP_INTEL=y. Provide the stubs for all CPUs so that KVM_INTEL can be built for any CPU with compatible hardware support, e.g. Centuar and Zhaoxin CPUs. Note, the existing stub for perf_guest_get_msrs() is essentially dead code as KVM selects CONFIG_PERF_EVENTS, i.e. the only user guarantees the full implementation is built. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/perf_event.h | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index ee26e9215f18..29964b0e1075 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -322,17 +322,10 @@ struct perf_guest_switch_msr { u64 host, guest; }; -extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr); extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap); extern void perf_check_microcode(void); extern int x86_perf_rdpmc_index(struct perf_event *event); #else -static inline struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr) -{ - *nr = 0; - return NULL; -} - static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap) { memset(cap, 0, sizeof(*cap)); @@ -342,8 +335,23 @@ static inline void perf_events_lapic_init(void) { } static inline void perf_check_microcode(void) { } #endif +#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL) +extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr); +#else +static inline struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr) +{ + *nr = 0; + return NULL; +} +#endif + #ifdef CONFIG_CPU_SUP_INTEL extern void intel_pt_handle_vmx(int on); +#else +static inline void intel_pt_handle_vmx(int on) +{ + +} #endif #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) From patchwork Tue Nov 19 03:12:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11250719 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 91D2B109A for ; Tue, 19 Nov 2019 03:13:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7A81320862 for ; Tue, 19 Nov 2019 03:13:44 +0000 
(UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727777AbfKSDNe (ORCPT ); Mon, 18 Nov 2019 22:13:34 -0500 Received: from mga02.intel.com ([134.134.136.20]:21246 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727645AbfKSDM6 (ORCPT ); Mon, 18 Nov 2019 22:12:58 -0500 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Nov 2019 19:12:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.68,322,1569308400"; d="scan'208";a="218105757" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.41]) by orsmga002.jf.intel.com with ESMTP; 18 Nov 2019 19:12:43 -0800 From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org Cc: "H. Peter Anvin" , Peter Zijlstra , Arnaldo Carvalho de Melo , Mark Rutland , Alexander Shishkin , Jiri Olsa , Namhyung Kim , Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Tony Luck , Tony W Wang-oc , Shuah Khan , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-edac@vger.kernel.org, linux-kselftest@vger.kernel.org, Borislav Petkov , Jarkko Sakkinen Subject: [PATCH v3 19/19] KVM: VMX: Allow KVM_INTEL when building for Centaur and/or Zhaoxin CPUs Date: Mon, 18 Nov 2019 19:12:40 -0800 Message-Id: <20191119031240.7779-20-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191119031240.7779-1-sean.j.christopherson@intel.com> References: <20191119031240.7779-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Change the dependency for KVM_INTEL, i.e. KVM w/ VMX, from Intel CPUs to any CPU that has IA32_FEATURE_CONTROL MSR and thus VMX functionality. This effectively allows building KVM_INTEL for Centaur and Zhaoxin CPUs. Signed-off-by: Sean Christopherson --- arch/x86/kvm/Kconfig | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 840e12583b85..f364efe324ce 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -60,13 +60,11 @@ config KVM If unsure, say N. config KVM_INTEL - tristate "KVM for Intel processors support" - depends on KVM - # for perf_guest_get_msrs(): - depends on CPU_SUP_INTEL + tristate "KVM for Intel (and compatible) processors support" + depends on KVM && X86_FEATURE_CONTROL_MSR ---help--- - Provides support for KVM on Intel processors equipped with the VT - extensions. + Provides support for KVM on processors equipped with Intel's VT + extensions, a.k.a. Virtual Machine Extensions (VMX). To compile this as a module, choose M here: the module will be called kvm-intel.