Message ID | 20191004215615.5479-2-sean.j.christopherson@intel.com (mailing list archive)
---|---
State | New, archived
Series | x86/cpu: Clean up handling of VMX features
On 04/10/19 23:56, Sean Christopherson wrote:
> Always lock IA32_FEATURE_CONTROL if it exists, even if the CPU doesn't
> support VMX, so that other existing and future kernel code that queries
> IA32_FEATURE_CONTROL can assume it's locked.

Possibly stupid question: why bother locking it?  It makes sense to lock
the MSR bits to _off_ in the firmware, but if the BIOS hasn't locked it,
why should the OS?

It seems to me that locking introduces a lot of complication.

Paolo
On Mon, Oct 07, 2019 at 07:05:32PM +0200, Paolo Bonzini wrote:
> On 04/10/19 23:56, Sean Christopherson wrote:
> > Always lock IA32_FEATURE_CONTROL if it exists, even if the CPU doesn't
> > support VMX, so that other existing and future kernel code that queries
> > IA32_FEATURE_CONTROL can assume it's locked.
>
> Possibly stupid question: why bother locking it?  It makes sense to lock
> the MSR bits to _off_ in the firmware, but if the BIOS hasn't locked it,
> why should the OS?
>
> It seems to me that locking introduces a lot of complication.

None of the enable bits take effect until the MSR is locked.  If I had to
guess, ucode likely goes and pokes the enabled features during the WRMSR
with the lock bit set, as opposed to the relevant features querying the
MSR value as needed (querying the MSR is likely slow).
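For context, a minimal hypothetical sketch of the kind of consumer-side check
that the "always lock" guarantee enables.  This is not code from the patch or
from KVM; the helper name vmx_usable_outside_smx() is made up, but the MSR and
bit definitions are the ones the patch itself uses.

```c
#include <linux/types.h>

#include <asm/msr.h>
#include <asm/msr-index.h>

/*
 * Hypothetical consumer-side helper (not part of this patch).  Once the
 * kernel locks IA32_FEATURE_CONTROL at boot, the enable bits become
 * authoritative: a consumer only needs to test the bit it cares about.
 */
static bool vmx_usable_outside_smx(void)
{
	u64 msr;

	/* The MSR may not exist at all, e.g. on non-Intel CPUs. */
	if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr))
		return false;

	/*
	 * Checking the lock bit as well keeps the helper correct on CPUs
	 * that the boot-time configuration doesn't cover (yet).
	 */
	return (msr & FEATURE_CONTROL_LOCKED) &&
	       (msr & FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX);
}
```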
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index af9c967782f6..aafc14a0abf7 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -387,6 +387,10 @@ config X86_DEBUGCTLMSR
 	def_bool y
 	depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486SX || M486) && !UML
 
+config X86_FEATURE_CONTROL_MSR
+	def_bool y
+	depends on CPU_SUP_INTEL
+
 menuconfig PROCESSOR_SELECT
 	bool "Supported processor vendors" if EXPERT
 	---help---
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index d7a1e5a9331c..df5ad0cfe3e9 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -29,6 +29,7 @@ obj-y += umwait.o
 obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
 
+obj-$(CONFIG_X86_FEATURE_CONTROL_MSR) += feature_control.o
 ifdef CONFIG_CPU_SUP_INTEL
 obj-y += intel.o intel_pconfig.o
 obj-$(CONFIG_PM) += intel_epb.o
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index c0e2407abdd6..d2750f53a0cb 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -62,4 +62,8 @@ unsigned int aperfmperf_get_khz(int cpu);
 
 extern void x86_spec_ctrl_setup_ap(void);
 
+#ifdef CONFIG_X86_FEATURE_CONTROL_MSR
+void init_feature_control_msr(struct cpuinfo_x86 *c);
+#endif
+
 #endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/feature_control.c b/arch/x86/kernel/cpu/feature_control.c
new file mode 100644
index 000000000000..57b928e64cf5
--- /dev/null
+++ b/arch/x86/kernel/cpu/feature_control.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/tboot.h>
+
+#include <asm/cpufeature.h>
+#include <asm/msr-index.h>
+#include <asm/processor.h>
+
+void init_feature_control_msr(struct cpuinfo_x86 *c)
+{
+	u64 msr;
+
+	if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr))
+		return;
+
+	if (msr & FEATURE_CONTROL_LOCKED)
+		return;
+
+	/*
+	 * Ignore whatever value BIOS left in the MSR to avoid enabling random
+	 * features or faulting on the WRMSR.
+	 */
+	msr = FEATURE_CONTROL_LOCKED;
+
+	if (cpu_has(c, X86_FEATURE_VMX)) {
+		msr |= FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
+		if (tboot_enabled())
+			msr |= FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX;
+	}
+	wrmsrl(MSR_IA32_FEATURE_CONTROL, msr);
+}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index c2fdc00df163..15d59224e2f8 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -755,6 +755,8 @@ static void init_intel(struct cpuinfo_x86 *c)
 	/* Work around errata */
 	srat_detect_node(c);
 
+	init_feature_control_msr(c);
+
 	if (cpu_has(c, X86_FEATURE_VMX))
 		detect_vmx_virtcap(c);
Opportunistically initialize IA32_FEATURE_CONTROL MSR to enable VMX when
the MSR is left unlocked by BIOS.  Configuring IA32_FEATURE_CONTROL at
boot time paves the way for similar enabling of other features, e.g.
Software Guard Extensions (SGX).

Temporarily leave equivalent KVM code in place in order to avoid
introducing a regression on Centaur and Zhaoxin CPUs, e.g. removing KVM's
code would leave the MSR unlocked on those CPUs and would break existing
functionality if people are loading kvm_intel on Centaur and/or Zhaoxin.
Defer enablement of the boot-time configuration on Centaur and Zhaoxin to
future patches to aid bisection.

Note, Local Machine Check Exceptions (LMCE) are also supported by the
kernel and enabled via IA32_FEATURE_CONTROL, but the kernel currently
uses LMCE if and only if the feature is explicitly enabled by BIOS.  Keep
the current behavior to avoid introducing bugs; future patches can opt in
to opportunistic enabling if it's deemed desirable to do so.

Always lock IA32_FEATURE_CONTROL if it exists, even if the CPU doesn't
support VMX, so that other existing and future kernel code that queries
IA32_FEATURE_CONTROL can assume it's locked.

Start from a clean slate when constructing the value to write to
IA32_FEATURE_CONTROL, i.e. ignore whatever value BIOS left in the MSR so
as not to enable random features or fault on the WRMSR.

Suggested-by: Borislav Petkov <bp@suse.de>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/Kconfig.cpu                  |  4 ++++
 arch/x86/kernel/cpu/Makefile          |  1 +
 arch/x86/kernel/cpu/cpu.h             |  4 ++++
 arch/x86/kernel/cpu/feature_control.c | 30 +++++++++++++++++++++++++++
 arch/x86/kernel/cpu/intel.c           |  2 ++
 5 files changed, 41 insertions(+)
 create mode 100644 arch/x86/kernel/cpu/feature_control.c
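As a purely illustrative sketch of the opportunistic enabling the changelog
defers to future patches, LMCE could be folded into the same path.  This is
not part of the patch, which deliberately leaves LMCE behavior unchanged; the
helper name sketch_opt_in_lmce() is made up, and it assumes the MCG_CAP-based
capability check used by the MCE code (MCG_LMCE_P) is the right gate.

```c
#include <asm/mce.h>
#include <asm/msr.h>
#include <asm/msr-index.h>
#include <asm/processor.h>

/*
 * Hypothetical sketch only: a future change could OR in FEATURE_CONTROL_LMCE
 * while init_feature_control_msr() builds the value to write, gated on the
 * CPU actually advertising local machine check support in MCG_CAP.
 */
static void sketch_opt_in_lmce(struct cpuinfo_x86 *c, u64 *feat_ctl)
{
	u64 mcg_cap;

	if (!cpu_has(c, X86_FEATURE_MCE))
		return;

	rdmsrl(MSR_IA32_MCG_CAP, mcg_cap);

	/* Only enable LMCE if the CPU reports it as supported. */
	if (mcg_cap & MCG_LMCE_P)
		*feat_ctl |= FEATURE_CONTROL_LMCE;
}
```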