Message ID: ab118fb9bd39b200feb843660a9b10421943aa70.1649219184.git.kai.huang@intel.com (mailing list archive)
State: New, archived
Series: TDX host kernel support
On 4/5/22 9:49 PM, Kai Huang wrote: > +/* BIOS must configure SEAMRR registers for all cores consistently */ > +static u64 seamrr_base, seamrr_mask; > + > +static bool __seamrr_enabled(void) > +{ > + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; > +} > + > +static void detect_seam_bsp(struct cpuinfo_x86 *c) > +{ > + u64 mtrrcap, base, mask; > + > + /* SEAMRR is reported via MTRRcap */ > + if (!boot_cpu_has(X86_FEATURE_MTRR)) > + return; > + > + rdmsrl(MSR_MTRRcap, mtrrcap); > + if (!(mtrrcap & MTRR_CAP_SEAMRR)) > + return; > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > + if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) { > + pr_info("SEAMRR base is not configured by BIOS\n"); > + return; > + } > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > + if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) { > + pr_info("SEAMRR is not enabled by BIOS\n"); > + return; > + } > + > + seamrr_base = base; > + seamrr_mask = mask; > +} > + > +static void detect_seam_ap(struct cpuinfo_x86 *c) > +{ > + u64 base, mask; > + > + /* > + * Don't bother to detect this AP if SEAMRR is not > + * enabled after earlier detections. > + */ > + if (!__seamrr_enabled()) > + return; > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > + > + if (base == seamrr_base && mask == seamrr_mask) > + return; > + > + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); Do we need to panic for SEAM config issue (for security)? > + /* Mark SEAMRR as disabled. */ > + seamrr_base = 0; > + seamrr_mask = 0 > +} > + > +static void detect_seam(struct cpuinfo_x86 *c) > +{ why not do this check directly in tdx_detect_cpu()? > + if (c == &boot_cpu_data) > + detect_seam_bsp(c); > + else > + detect_seam_ap(c); > +} > + > +void tdx_detect_cpu(struct cpuinfo_x86 *c) > +{ > + detect_seam(c); > +}
On Mon, Apr 18, 2022, Sathyanarayanan Kuppuswamy wrote: > > +static void detect_seam_ap(struct cpuinfo_x86 *c) > > +{ > > + u64 base, mask; > > + > > + /* > > + * Don't bother to detect this AP if SEAMRR is not > > + * enabled after earlier detections. > > + */ > > + if (!__seamrr_enabled()) > > + return; > > + > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > > + > > + if (base == seamrr_base && mask == seamrr_mask) > > + return; > > + > > + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); > > Do we need to panic for SEAM config issue (for security)? No, clearing seamrr_mask will effectively prevent the kernel from attempting to use TDX or any other feature that might depend on SEAM. Panicking because the user's BIOS is crappy would be akin to kicking them while they're down. As for security, it's the TDX Module's responsibility to validate the security properties of the system, the kernel only cares about not dying/crashing. > > + /* Mark SEAMRR as disabled. */ > > + seamrr_base = 0; > > + seamrr_mask = 0; > > +}
> > + > > +static void detect_seam(struct cpuinfo_x86 *c) > > +{ > > why not do this check directly in tdx_detect_cpu()? The second patch will detect TDX KeyID too. I suppose you are saying the below is better?

void tdx_detect_cpu(struct cpuinfo_x86 *c)
{
	if (c == &boot_cpu_data) {
		detect_seam_bsp(c);
		detect_tdx_keyids_bsp(c);
	} else {
		detect_seam_ap(c);
		detect_tdx_keyids_ap(c);
	}
}

I personally don't see how the above is better than the current way. Instead, I think having SEAM and TDX KeyID detection code in a single function respectively is more flexible for future extension (if needed). > > > + if (c == &boot_cpu_data) > > + detect_seam_bsp(c); > > + else > > + detect_seam_ap(c); > > +} > > + > > +void tdx_detect_cpu(struct cpuinfo_x86 *c) > > +{ > > + detect_seam(c); > > +} >
> +config INTEL_TDX_HOST > + bool "Intel Trust Domain Extensions (TDX) host support" > + default n > + depends on CPU_SUP_INTEL > + depends on X86_64 > + help > + Intel Trust Domain Extensions (TDX) protects guest VMs from malicious > + host and certain physical attacks. This option enables necessary TDX > + support in host kernel to run protected VMs. > + > + If unsure, say N. Nothing about KVM? ... > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c > new file mode 100644 > index 000000000000..03f35c75f439 > --- /dev/null > +++ b/arch/x86/virt/vmx/tdx/tdx.c > @@ -0,0 +1,102 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Copyright(c) 2022 Intel Corporation. > + * > + * Intel Trusted Domain Extensions (TDX) support > + */ > + > +#define pr_fmt(fmt) "tdx: " fmt > + > +#include <linux/types.h> > +#include <linux/cpumask.h> > +#include <asm/msr-index.h> > +#include <asm/msr.h> > +#include <asm/cpufeature.h> > +#include <asm/cpufeatures.h> > +#include <asm/tdx.h> > + > +/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ > +#define MTRR_CAP_SEAMRR BIT(15) > + > +/* Core-scope Intel SEAMRR base and mask registers. */ > +#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400 > +#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401 > + > +#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3) > +#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11) > +#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10) > + > +#define SEAMRR_ENABLED_BITS \ > + (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED) > + > +/* BIOS must configure SEAMRR registers for all cores consistently */ > +static u64 seamrr_base, seamrr_mask; > + > +static bool __seamrr_enabled(void) > +{ > + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; > +} But there's no case where seamrr_mask is non-zero and where _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? > +static void detect_seam_bsp(struct cpuinfo_x86 *c) > +{ > + u64 mtrrcap, base, mask; > + > + /* SEAMRR is reported via MTRRcap */ > + if (!boot_cpu_has(X86_FEATURE_MTRR)) > + return; > + > + rdmsrl(MSR_MTRRcap, mtrrcap); > + if (!(mtrrcap & MTRR_CAP_SEAMRR)) > + return; > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > + if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) { > + pr_info("SEAMRR base is not configured by BIOS\n"); > + return; > + } > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > + if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) { > + pr_info("SEAMRR is not enabled by BIOS\n"); > + return; > + } > + > + seamrr_base = base; > + seamrr_mask = mask; > +} Comment, please. /* * Stash the boot CPU's MSR values so that AP values * can can be checked for consistency. */ > +static void detect_seam_ap(struct cpuinfo_x86 *c) > +{ > + u64 base, mask; > + > + /* > + * Don't bother to detect this AP if SEAMRR is not > + * enabled after earlier detections. > + */ > + if (!__seamrr_enabled()) > + return; > + > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > + This is the place for a comment about why the values have to be equal. > + if (base == seamrr_base && mask == seamrr_mask) > + return; > + > + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); > + /* Mark SEAMRR as disabled. 
*/ > + seamrr_base = 0; > + seamrr_mask = 0; > +} > + > +static void detect_seam(struct cpuinfo_x86 *c) > +{ > + if (c == &boot_cpu_data) > + detect_seam_bsp(c); > + else > + detect_seam_ap(c); > +} > + > +void tdx_detect_cpu(struct cpuinfo_x86 *c) > +{ > + detect_seam(c); > +} The extra function looks a bit silly here now. Maybe this gets filled out later, but it's goofy-looking here.
Hi Dave, Thanks for review! On Tue, 2022-04-26 at 13:21 -0700, Dave Hansen wrote: > > +config INTEL_TDX_HOST > > + bool "Intel Trust Domain Extensions (TDX) host support" > > + default n > > + depends on CPU_SUP_INTEL > > + depends on X86_64 > > + help > > + Intel Trust Domain Extensions (TDX) protects guest VMs from > > malicious > > + host and certain physical attacks. This option enables necessary > > TDX > > + support in host kernel to run protected VMs. > > + > > + If unsure, say N. > > Nothing about KVM? I'll add KVM into the context. How about below? "Intel Trust Domain Extensions (TDX) protects guest VMs from malicious host and certain physical attacks. This option enables necessary TDX support in host kernel to allow KVM to run protected VMs called Trust Domains (TD)." > > ... > > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c > > new file mode 100644 > > index 000000000000..03f35c75f439 > > --- /dev/null > > +++ b/arch/x86/virt/vmx/tdx/tdx.c > > @@ -0,0 +1,102 @@ > > +// SPDX-License-Identifier: GPL-2.0 > > +/* > > + * Copyright(c) 2022 Intel Corporation. > > + * > > + * Intel Trusted Domain Extensions (TDX) support > > + */ > > + > > +#define pr_fmt(fmt) "tdx: " fmt > > + > > +#include <linux/types.h> > > +#include <linux/cpumask.h> > > +#include <asm/msr-index.h> > > +#include <asm/msr.h> > > +#include <asm/cpufeature.h> > > +#include <asm/cpufeatures.h> > > +#include <asm/tdx.h> > > + > > +/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ > > +#define MTRR_CAP_SEAMRR BIT(15) > > + > > +/* Core-scope Intel SEAMRR base and mask registers. */ > > +#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400 > > +#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401 > > + > > +#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3) > > +#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11) > > +#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10) > > + > > +#define SEAMRR_ENABLED_BITS \ > > + (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED) > > + > > +/* BIOS must configure SEAMRR registers for all cores consistently */ > > +static u64 seamrr_base, seamrr_mask; > > + > > +static bool __seamrr_enabled(void) > > +{ > > + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; > > +} > > But there's no case where seamrr_mask is non-zero and where > _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? seamrr_mask will only be non-zero when SEAMRR is enabled by BIOS, otherwise it is 0. It will also be cleared when BIOS mis-configuration is detected on any AP. SEAMRR_ENABLED_BITS is used to check whether SEAMRR is enabled. > > > +static void detect_seam_bsp(struct cpuinfo_x86 *c) > > +{ > > + u64 mtrrcap, base, mask; > > + > > + /* SEAMRR is reported via MTRRcap */ > > + if (!boot_cpu_has(X86_FEATURE_MTRR)) > > + return; > > + > > + rdmsrl(MSR_MTRRcap, mtrrcap); > > + if (!(mtrrcap & MTRR_CAP_SEAMRR)) > > + return; > > + > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > > + if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) { > > + pr_info("SEAMRR base is not configured by BIOS\n"); > > + return; > > + } > > + > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > > + if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) { > > + pr_info("SEAMRR is not enabled by BIOS\n"); > > + return; > > + } > > + > > + seamrr_base = base; > > + seamrr_mask = mask; > > +} > > Comment, please. > > /* > * Stash the boot CPU's MSR values so that AP values > * can can be checked for consistency. > */ > Thanks. Will add. 
> > > +static void detect_seam_ap(struct cpuinfo_x86 *c) > > +{ > > + u64 base, mask; > > + > > + /* > > + * Don't bother to detect this AP if SEAMRR is not > > + * enabled after earlier detections. > > + */ > > + if (!__seamrr_enabled()) > > + return; > > + > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > > + > > This is the place for a comment about why the values have to be equal. I'll add below: /* BIOS must configure SEAMRR consistently across all cores */ > > > + if (base == seamrr_base && mask == seamrr_mask) > > + return; > > + > > + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); > > + /* Mark SEAMRR as disabled. */ > > + seamrr_base = 0; > > + seamrr_mask = 0; > > +} > > + > > +static void detect_seam(struct cpuinfo_x86 *c) > > +{ > > + if (c == &boot_cpu_data) > > + detect_seam_bsp(c); > > + else > > + detect_seam_ap(c); > > +} > > + > > +void tdx_detect_cpu(struct cpuinfo_x86 *c) > > +{ > > + detect_seam(c); > > +} > > The extra function looks a bit silly here now. Maybe this gets filled > out later, but it's goofy-looking here. Thomas suggested to put all TDX detection related in one function call, so I added tdx_detect_cpu(). I'll move this to the next patch when detecting TDX KeyIDs.
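For reference, a minimal sketch of where those two comments would sit, assuming the function bodies stay exactly as in this patch (the tail of detect_seam_bsp() and all of detect_seam_ap(); only the comments are new):

	/* ... earlier MTRRcap/base/mask checks in detect_seam_bsp() unchanged ... */

	/*
	 * Stash the boot CPU's MSR values so that AP values
	 * can be checked for consistency.
	 */
	seamrr_base = base;
	seamrr_mask = mask;
}

static void detect_seam_ap(struct cpuinfo_x86 *c)
{
	u64 base, mask;

	/*
	 * Don't bother to detect this AP if SEAMRR is not
	 * enabled after earlier detections.
	 */
	if (!__seamrr_enabled())
		return;

	rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
	rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);

	/* BIOS must configure SEAMRR consistently across all cores */
	if (base == seamrr_base && mask == seamrr_mask)
		return;

	pr_err("Inconsistent SEAMRR configuration by BIOS\n");
	/* Mark SEAMRR as disabled. */
	seamrr_base = 0;
	seamrr_mask = 0;
}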
On 4/26/22 16:12, Kai Huang wrote: > Hi Dave, > > Thanks for review! > > On Tue, 2022-04-26 at 13:21 -0700, Dave Hansen wrote: >>> +config INTEL_TDX_HOST >>> + bool "Intel Trust Domain Extensions (TDX) host support" >>> + default n >>> + depends on CPU_SUP_INTEL >>> + depends on X86_64 >>> + help >>> + Intel Trust Domain Extensions (TDX) protects guest VMs from >>> malicious >>> + host and certain physical attacks. This option enables necessary >>> TDX >>> + support in host kernel to run protected VMs. >>> + >>> + If unsure, say N. >> >> Nothing about KVM? > > I'll add KVM into the context. How about below? > > "Intel Trust Domain Extensions (TDX) protects guest VMs from malicious > host and certain physical attacks. This option enables necessary TDX > support in host kernel to allow KVM to run protected VMs called Trust > Domains (TD)." What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? >>> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c >>> new file mode 100644 >>> index 000000000000..03f35c75f439 >>> --- /dev/null >>> +++ b/arch/x86/virt/vmx/tdx/tdx.c >>> @@ -0,0 +1,102 @@ >>> +// SPDX-License-Identifier: GPL-2.0 >>> +/* >>> + * Copyright(c) 2022 Intel Corporation. >>> + * >>> + * Intel Trusted Domain Extensions (TDX) support >>> + */ >>> + >>> +#define pr_fmt(fmt) "tdx: " fmt >>> + >>> +#include <linux/types.h> >>> +#include <linux/cpumask.h> >>> +#include <asm/msr-index.h> >>> +#include <asm/msr.h> >>> +#include <asm/cpufeature.h> >>> +#include <asm/cpufeatures.h> >>> +#include <asm/tdx.h> >>> + >>> +/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ >>> +#define MTRR_CAP_SEAMRR BIT(15) >>> + >>> +/* Core-scope Intel SEAMRR base and mask registers. */ >>> +#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400 >>> +#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401 >>> + >>> +#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3) >>> +#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11) >>> +#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10) >>> + >>> +#define SEAMRR_ENABLED_BITS \ >>> + (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED) >>> + >>> +/* BIOS must configure SEAMRR registers for all cores consistently */ >>> +static u64 seamrr_base, seamrr_mask; >>> + >>> +static bool __seamrr_enabled(void) >>> +{ >>> + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; >>> +} >> >> But there's no case where seamrr_mask is non-zero and where >> _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? > > seamrr_mask will only be non-zero when SEAMRR is enabled by BIOS, otherwise it > is 0. It will also be cleared when BIOS mis-configuration is detected on any > AP. SEAMRR_ENABLED_BITS is used to check whether SEAMRR is enabled. The point is that this could be: return !!seamrr_mask; >>> +static void detect_seam_ap(struct cpuinfo_x86 *c) >>> +{ >>> + u64 base, mask; >>> + >>> + /* >>> + * Don't bother to detect this AP if SEAMRR is not >>> + * enabled after earlier detections. >>> + */ >>> + if (!__seamrr_enabled()) >>> + return; >>> + >>> + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); >>> + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); >>> + >> >> This is the place for a comment about why the values have to be equal. > > I'll add below: > > /* BIOS must configure SEAMRR consistently across all cores */ What happens if the BIOS doesn't do this? What actually breaks? In other words, do we *NEED* error checking here? 
>>> + if (base == seamrr_base && mask == seamrr_mask) >>> + return; >>> + >>> + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); >>> + /* Mark SEAMRR as disabled. */ >>> + seamrr_base = 0; >>> + seamrr_mask = 0; >>> +} >>> + >>> +static void detect_seam(struct cpuinfo_x86 *c) >>> +{ >>> + if (c == &boot_cpu_data) >>> + detect_seam_bsp(c); >>> + else >>> + detect_seam_ap(c); >>> +} >>> + >>> +void tdx_detect_cpu(struct cpuinfo_x86 *c) >>> +{ >>> + detect_seam(c); >>> +} >> >> The extra function looks a bit silly here now. Maybe this gets filled >> out later, but it's goofy-looking here. > > Thomas suggested to put all TDX detection related in one function call, so I > added tdx_detect_cpu(). I'll move this to the next patch when detecting TDX > KeyIDs. That's fine, or just add a comment or a changelog sentence about this being filled out later.
On Tue, 2022-04-26 at 16:28 -0700, Dave Hansen wrote: > On 4/26/22 16:12, Kai Huang wrote: > > Hi Dave, > > > > Thanks for review! > > > > On Tue, 2022-04-26 at 13:21 -0700, Dave Hansen wrote: > > > > +config INTEL_TDX_HOST > > > > + bool "Intel Trust Domain Extensions (TDX) host support" > > > > + default n > > > > + depends on CPU_SUP_INTEL > > > > + depends on X86_64 > > > > + help > > > > + Intel Trust Domain Extensions (TDX) protects guest VMs from > > > > malicious > > > > + host and certain physical attacks. This option enables necessary > > > > TDX > > > > + support in host kernel to run protected VMs. > > > > + > > > > + If unsure, say N. > > > > > > Nothing about KVM? > > > > I'll add KVM into the context. How about below? > > > > "Intel Trust Domain Extensions (TDX) protects guest VMs from malicious > > host and certain physical attacks. This option enables necessary TDX > > support in host kernel to allow KVM to run protected VMs called Trust > > Domains (TD)." > > What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? Conceptually, KVM is one user of the TDX module, so it doesn't seem correct to make CONFIG_INTEL_TDX_HOST depend on CONFIG_KVM. But so far KVM is the only user of TDX, so in practice the code is dead w/o KVM. What's your opinion? > > > > > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c > > > > new file mode 100644 > > > > index 000000000000..03f35c75f439 > > > > --- /dev/null > > > > +++ b/arch/x86/virt/vmx/tdx/tdx.c > > > > @@ -0,0 +1,102 @@ > > > > +// SPDX-License-Identifier: GPL-2.0 > > > > +/* > > > > + * Copyright(c) 2022 Intel Corporation. > > > > + * > > > > + * Intel Trusted Domain Extensions (TDX) support > > > > + */ > > > > + > > > > +#define pr_fmt(fmt) "tdx: " fmt > > > > + > > > > +#include <linux/types.h> > > > > +#include <linux/cpumask.h> > > > > +#include <asm/msr-index.h> > > > > +#include <asm/msr.h> > > > > +#include <asm/cpufeature.h> > > > > +#include <asm/cpufeatures.h> > > > > +#include <asm/tdx.h> > > > > + > > > > +/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ > > > > +#define MTRR_CAP_SEAMRR BIT(15) > > > > + > > > > +/* Core-scope Intel SEAMRR base and mask registers. */ > > > > +#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400 > > > > +#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401 > > > > + > > > > +#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3) > > > > +#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11) > > > > +#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10) > > > > + > > > > +#define SEAMRR_ENABLED_BITS \ > > > > + (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED) > > > > + > > > > +/* BIOS must configure SEAMRR registers for all cores consistently */ > > > > +static u64 seamrr_base, seamrr_mask; > > > > + > > > > +static bool __seamrr_enabled(void) > > > > +{ > > > > + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; > > > > +} > > > > > > But there's no case where seamrr_mask is non-zero and where > > > _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? > > > > seamrr_mask will only be non-zero when SEAMRR is enabled by BIOS, otherwise it > > is 0. It will also be cleared when BIOS mis-configuration is detected on any > > AP. SEAMRR_ENABLED_BITS is used to check whether SEAMRR is enabled. > > The point is that this could be: > > return !!seamrr_mask; The definition of this SEAMRR_MASK MSR defines "ENABLED" and "LOCKED" bits. Explicitly checking the two bits, instead of !!seamrr_mask roles out other incorrect configurations. 
For instance, we should not treat SEAMRR being enabled if we only have "ENABLED" bit set or "LOCKED" bit set. > > > > > > +static void detect_seam_ap(struct cpuinfo_x86 *c) > > > > +{ > > > > + u64 base, mask; > > > > + > > > > + /* > > > > + * Don't bother to detect this AP if SEAMRR is not > > > > + * enabled after earlier detections. > > > > + */ > > > > + if (!__seamrr_enabled()) > > > > + return; > > > > + > > > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > > > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > > > > + > > > > > > This is the place for a comment about why the values have to be equal. > > > > I'll add below: > > > > /* BIOS must configure SEAMRR consistently across all cores */ > > What happens if the BIOS doesn't do this? What actually breaks? In > other words, do we *NEED* error checking here? AFAICT the spec doesn't explicitly mention what will happen if BIOS doesn't configure them consistently among cores. But for safety I think it's better to detect. > > > > > + if (base == seamrr_base && mask == seamrr_mask) > > > > + return; > > > > + > > > > + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); > > > > + /* Mark SEAMRR as disabled. */ > > > > + seamrr_base = 0; > > > > + seamrr_mask = 0; > > > > +} > > > > + > > > > +static void detect_seam(struct cpuinfo_x86 *c) > > > > +{ > > > > + if (c == &boot_cpu_data) > > > > + detect_seam_bsp(c); > > > > + else > > > > + detect_seam_ap(c); > > > > +} > > > > + > > > > +void tdx_detect_cpu(struct cpuinfo_x86 *c) > > > > +{ > > > > + detect_seam(c); > > > > +} > > > > > > The extra function looks a bit silly here now. Maybe this gets filled > > > out later, but it's goofy-looking here. > > > > Thomas suggested to put all TDX detection related in one function call, so I > > added tdx_detect_cpu(). I'll move this to the next patch when detecting TDX > > KeyIDs. > > That's fine, or just add a comment or a changelog sentence about this > being filled out later. There's already one sentence in the changelog: "......Add a function to detect all TDX preliminaries (SEAMRR, TDX private KeyIDs) for a given cpu when it is brought up. As the first step, detect the validity of SEAMRR." Does this look good to you?
On Wed, Apr 27, 2022, Kai Huang wrote: > On Tue, 2022-04-26 at 16:28 -0700, Dave Hansen wrote: > > On 4/26/22 16:12, Kai Huang wrote: > > > Hi Dave, > > > > > > Thanks for review! > > > > > > On Tue, 2022-04-26 at 13:21 -0700, Dave Hansen wrote: > > > > > +config INTEL_TDX_HOST > > > > > + bool "Intel Trust Domain Extensions (TDX) host support" > > > > > + default n > > > > > + depends on CPU_SUP_INTEL > > > > > + depends on X86_64 > > > > > + help > > > > > + Intel Trust Domain Extensions (TDX) protects guest VMs from > > > > > malicious > > > > > + host and certain physical attacks. This option enables necessary > > > > > TDX > > > > > + support in host kernel to run protected VMs. > > > > > + > > > > > + If unsure, say N. > > > > > > > > Nothing about KVM? > > > > > > I'll add KVM into the context. How about below? > > > > > > "Intel Trust Domain Extensions (TDX) protects guest VMs from malicious > > > host and certain physical attacks. This option enables necessary TDX > > > support in host kernel to allow KVM to run protected VMs called Trust > > > Domains (TD)." > > > > What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? > > Conceptually, KVM is one user of the TDX module, so it doesn't seem correct to > make CONFIG_INTEL_TDX_HOST depend on CONFIG_KVM. But so far KVM is the only > user of TDX, so in practice the code is dead w/o KVM. > > What's your opinion? Take a dependency on CONFIG_KVM_INTEL, there's already precedence for this specific case of a feature that can't possibly have an in-kernel user. See arch/x86/kernel/cpu/feat_ctl.c, which in the (very) unlikely event IA32_FEATURE_CONTROL is left unlocked by BIOS, will deliberately disable VMX if CONFIG_KVM_INTEL=n.
On Wed, 2022-04-27 at 00:22 +0000, Sean Christopherson wrote: > On Wed, Apr 27, 2022, Kai Huang wrote: > > On Tue, 2022-04-26 at 16:28 -0700, Dave Hansen wrote: > > > On 4/26/22 16:12, Kai Huang wrote: > > > > Hi Dave, > > > > > > > > Thanks for review! > > > > > > > > On Tue, 2022-04-26 at 13:21 -0700, Dave Hansen wrote: > > > > > > +config INTEL_TDX_HOST > > > > > > + bool "Intel Trust Domain Extensions (TDX) host support" > > > > > > + default n > > > > > > + depends on CPU_SUP_INTEL > > > > > > + depends on X86_64 > > > > > > + help > > > > > > + Intel Trust Domain Extensions (TDX) protects guest VMs from > > > > > > malicious > > > > > > + host and certain physical attacks. This option enables necessary > > > > > > TDX > > > > > > + support in host kernel to run protected VMs. > > > > > > + > > > > > > + If unsure, say N. > > > > > > > > > > Nothing about KVM? > > > > > > > > I'll add KVM into the context. How about below? > > > > > > > > "Intel Trust Domain Extensions (TDX) protects guest VMs from malicious > > > > host and certain physical attacks. This option enables necessary TDX > > > > support in host kernel to allow KVM to run protected VMs called Trust > > > > Domains (TD)." > > > > > > What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? > > > > Conceptually, KVM is one user of the TDX module, so it doesn't seem correct to > > make CONFIG_INTEL_TDX_HOST depend on CONFIG_KVM. But so far KVM is the only > > user of TDX, so in practice the code is dead w/o KVM. > > > > What's your opinion? > > Take a dependency on CONFIG_KVM_INTEL, there's already precedence for this specific > case of a feature that can't possibly have an in-kernel user. See > arch/x86/kernel/cpu/feat_ctl.c, which in the (very) unlikely event IA32_FEATURE_CONTROL > is left unlocked by BIOS, will deliberately disable VMX if CONFIG_KVM_INTEL=n. Thanks. Fine to me.
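For illustration only, the Kconfig entry might then look something like the sketch below, combining the KVM_INTEL dependency agreed here with the reworded help text proposed earlier in the thread; the exact form is whatever lands in the next revision:

config INTEL_TDX_HOST
	bool "Intel Trust Domain Extensions (TDX) host support"
	default n
	depends on CPU_SUP_INTEL
	depends on X86_64
	depends on KVM_INTEL
	help
	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
	  host and certain physical attacks. This option enables necessary TDX
	  support in host kernel to allow KVM to run protected VMs called Trust
	  Domains (TD).

	  If unsure, say N.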
On 4/26/22 16:49, Kai Huang wrote: > On Tue, 2022-04-26 at 16:28 -0700, Dave Hansen wrote: >> What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? > > Conceptually, KVM is one user of the TDX module, so it doesn't seem correct to > make CONFIG_INTEL_TDX_HOST depend on CONFIG_KVM. But so far KVM is the only > user of TDX, so in practice the code is dead w/o KVM. > > What's your opinion? You're stuck in some really weird fantasy world. Sure, we can dream up more than one user of the TDX module. But, in the real world, there's only one. Plus, code can have multiple dependencies! depends on FOO || BAR This TDX cruft is dead code in today's real-world kernel without KVM. You should add a dependency. >>>>> +static bool __seamrr_enabled(void) >>>>> +{ >>>>> + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; >>>>> +} >>>> >>>> But there's no case where seamrr_mask is non-zero and where >>>> _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? >>> >>> seamrr_mask will only be non-zero when SEAMRR is enabled by BIOS, otherwise it >>> is 0. It will also be cleared when BIOS mis-configuration is detected on any >>> AP. SEAMRR_ENABLED_BITS is used to check whether SEAMRR is enabled. >> >> The point is that this could be: >> >> return !!seamrr_mask; > > The definition of this SEAMRR_MASK MSR defines "ENABLED" and "LOCKED" bits. > Explicitly checking the two bits, instead of !!seamrr_mask roles out other > incorrect configurations. For instance, we should not treat SEAMRR being > enabled if we only have "ENABLED" bit set or "LOCKED" bit set. You're confusing two different things: * The state of the variable * The actual correct hardware state The *VARIABLE* can't be non-zero and also denote that SEAMRR is enabled. Does this *CODE* ever set ENABLED or LOCKED without each other? >>>>> +static void detect_seam_ap(struct cpuinfo_x86 *c) >>>>> +{ >>>>> + u64 base, mask; >>>>> + >>>>> + /* >>>>> + * Don't bother to detect this AP if SEAMRR is not >>>>> + * enabled after earlier detections. >>>>> + */ >>>>> + if (!__seamrr_enabled()) >>>>> + return; >>>>> + >>>>> + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); >>>>> + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); >>>>> + >>>> >>>> This is the place for a comment about why the values have to be equal. >>> >>> I'll add below: >>> >>> /* BIOS must configure SEAMRR consistently across all cores */ >> >> What happens if the BIOS doesn't do this? What actually breaks? In >> other words, do we *NEED* error checking here? > > AFAICT the spec doesn't explicitly mention what will happen if BIOS doesn't > configure them consistently among cores. But for safety I think it's better to > detect. Safety? Safety of what? >>>>> +void tdx_detect_cpu(struct cpuinfo_x86 *c) >>>>> +{ >>>>> + detect_seam(c); >>>>> +} >>>> >>>> The extra function looks a bit silly here now. Maybe this gets filled >>>> out later, but it's goofy-looking here. >>> >>> Thomas suggested to put all TDX detection related in one function call, so I >>> added tdx_detect_cpu(). I'll move this to the next patch when detecting TDX >>> KeyIDs. >> >> That's fine, or just add a comment or a changelog sentence about this >> being filled out later. > > There's already one sentence in the changelog: > > "......Add a function to detect all TDX preliminaries (SEAMRR, TDX private > KeyIDs) for a given cpu when it is brought up. As the first step, detect the > validity of SEAMRR." > > Does this look good to you? No, that doesn't provide enough context. 
There are two single-line wrapper functions. One calls the other. That looks entirely silly in this patch. You need to explain the silliness, explicitly.
On Wed, 2022-04-27 at 07:22 -0700, Dave Hansen wrote: > On 4/26/22 16:49, Kai Huang wrote: > > On Tue, 2022-04-26 at 16:28 -0700, Dave Hansen wrote: > > > What about a dependency? Isn't this dead code without CONFIG_KVM=y/m? > > > > Conceptually, KVM is one user of the TDX module, so it doesn't seem correct to > > make CONFIG_INTEL_TDX_HOST depend on CONFIG_KVM. But so far KVM is the only > > user of TDX, so in practice the code is dead w/o KVM. > > > > What's your opinion? > > You're stuck in some really weird fantasy world. Sure, we can dream up > more than one user of the TDX module. But, in the real world, there's > only one. Plus, code can have multiple dependencies! > > depends on FOO || BAR > > This TDX cruft is dead code in today's real-world kernel without KVM. > You should add a dependency. Will add a dependency on CONFIG_KVM_INTEL. > > > > > > > +static bool __seamrr_enabled(void) > > > > > > +{ > > > > > > + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; > > > > > > +} > > > > > > > > > > But there's no case where seamrr_mask is non-zero and where > > > > > _seamrr_enabled(). Why bother checking the SEAMRR_ENABLED_BITS? > > > > > > > > seamrr_mask will only be non-zero when SEAMRR is enabled by BIOS, otherwise it > > > > is 0. It will also be cleared when BIOS mis-configuration is detected on any > > > > AP. SEAMRR_ENABLED_BITS is used to check whether SEAMRR is enabled. > > > > > > The point is that this could be: > > > > > > return !!seamrr_mask; > > > > The definition of this SEAMRR_MASK MSR defines "ENABLED" and "LOCKED" bits. > > Explicitly checking the two bits, instead of !!seamrr_mask roles out other > > incorrect configurations. For instance, we should not treat SEAMRR being > > enabled if we only have "ENABLED" bit set or "LOCKED" bit set. > > You're confusing two different things: > * The state of the variable > * The actual correct hardware state > > The *VARIABLE* can't be non-zero and also denote that SEAMRR is enabled. > Does this *CODE* ever set ENABLED or LOCKED without each other? OK. Will just use !!seamrr_mask. I thought explicitly checking SEAMRR_ENABLED_BITS would be clearer. > > > > > > > +static void detect_seam_ap(struct cpuinfo_x86 *c) > > > > > > +{ > > > > > > + u64 base, mask; > > > > > > + > > > > > > + /* > > > > > > + * Don't bother to detect this AP if SEAMRR is not > > > > > > + * enabled after earlier detections. > > > > > > + */ > > > > > > + if (!__seamrr_enabled()) > > > > > > + return; > > > > > > + > > > > > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); > > > > > > + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); > > > > > > + > > > > > > > > > > This is the place for a comment about why the values have to be equal. > > > > > > > > I'll add below: > > > > > > > > /* BIOS must configure SEAMRR consistently across all cores */ > > > > > > What happens if the BIOS doesn't do this? What actually breaks? In > > > other words, do we *NEED* error checking here? > > > > AFAICT the spec doesn't explicitly mention what will happen if BIOS doesn't > > configure them consistently among cores. But for safety I think it's better to > > detect. > > Safety? Safety of what? I'll ask TDX architect people and get back to you. I'll also ask what will happen if TDX KeyID isn't configured consistently across packages. Currently TDX KeyID is also detected on all cpus (existing detect_tme() also detect MKTME KeyID bits on all cpus).
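Putting the agreed change together, the helper Dave is asking about collapses to a plain truthiness check; a minimal sketch, assuming the rest of the detection logic stays as in this patch:

/*
 * seamrr_mask is only ever written with a value that has both the
 * ENABLED and LOCKED bits set (in detect_seam_bsp()), or cleared back
 * to zero when an AP reports an inconsistent configuration, so a
 * non-zero check is sufficient.
 */
static bool __seamrr_enabled(void)
{
	return !!seamrr_mask;
}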
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 7021ec725dd3..9113bf09f358 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1967,6 +1967,18 @@ config X86_SGX If unsure, say N. +config INTEL_TDX_HOST + bool "Intel Trust Domain Extensions (TDX) host support" + default n + depends on CPU_SUP_INTEL + depends on X86_64 + help + Intel Trust Domain Extensions (TDX) protects guest VMs from malicious + host and certain physical attacks. This option enables necessary TDX + support in host kernel to run protected VMs. + + If unsure, say N. + config EFI bool "EFI runtime service support" depends on ACPI diff --git a/arch/x86/Makefile b/arch/x86/Makefile index 63d50f65b828..2ca3a2a36dc5 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -234,6 +234,8 @@ head-y += arch/x86/kernel/platform-quirks.o libs-y += arch/x86/lib/ +core-y += arch/x86/virt/ + # drivers-y are linked after core-y drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/ drivers-$(CONFIG_PCI) += arch/x86/pci/ diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h index 020c81a7c729..1f29813b1646 100644 --- a/arch/x86/include/asm/tdx.h +++ b/arch/x86/include/asm/tdx.h @@ -20,6 +20,8 @@ #ifndef __ASSEMBLY__ +#include <asm/processor.h> + /* * Used to gather the output registers values of the TDCALL and SEAMCALL * instructions when requesting services from the TDX module. @@ -87,5 +89,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1, return -ENODEV; } #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */ + +#ifdef CONFIG_INTEL_TDX_HOST +void tdx_detect_cpu(struct cpuinfo_x86 *c); +#else +static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { } +#endif /* CONFIG_INTEL_TDX_HOST */ + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_TDX_H */ diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index 8321c43554a1..b142a640fb8e 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -26,6 +26,7 @@ #include <asm/resctrl.h> #include <asm/numa.h> #include <asm/thermal.h> +#include <asm/tdx.h> #ifdef CONFIG_X86_64 #include <linux/topology.h> @@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c) if (cpu_has(c, X86_FEATURE_TME)) detect_tme(c); + tdx_detect_cpu(c); + init_intel_misc_features(c); if (tsx_ctrl_state == TSX_CTRL_ENABLE) diff --git a/arch/x86/virt/Makefile b/arch/x86/virt/Makefile new file mode 100644 index 000000000000..1e36502cd738 --- /dev/null +++ b/arch/x86/virt/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-y += vmx/ diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile new file mode 100644 index 000000000000..1e1fcd7d3bd1 --- /dev/null +++ b/arch/x86/virt/vmx/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-y += tdx/ diff --git a/arch/x86/virt/vmx/tdx/Makefile b/arch/x86/virt/vmx/tdx/Makefile new file mode 100644 index 000000000000..1bd688684716 --- /dev/null +++ b/arch/x86/virt/vmx/tdx/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c new file mode 100644 index 000000000000..03f35c75f439 --- /dev/null +++ b/arch/x86/virt/vmx/tdx/tdx.c @@ -0,0 +1,102 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright(c) 2022 Intel Corporation. 
+ * + * Intel Trusted Domain Extensions (TDX) support + */ + +#define pr_fmt(fmt) "tdx: " fmt + +#include <linux/types.h> +#include <linux/cpumask.h> +#include <asm/msr-index.h> +#include <asm/msr.h> +#include <asm/cpufeature.h> +#include <asm/cpufeatures.h> +#include <asm/tdx.h> + +/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ +#define MTRR_CAP_SEAMRR BIT(15) + +/* Core-scope Intel SEAMRR base and mask registers. */ +#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400 +#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401 + +#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3) +#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11) +#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10) + +#define SEAMRR_ENABLED_BITS \ + (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED) + +/* BIOS must configure SEAMRR registers for all cores consistently */ +static u64 seamrr_base, seamrr_mask; + +static bool __seamrr_enabled(void) +{ + return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; +} + +static void detect_seam_bsp(struct cpuinfo_x86 *c) +{ + u64 mtrrcap, base, mask; + + /* SEAMRR is reported via MTRRcap */ + if (!boot_cpu_has(X86_FEATURE_MTRR)) + return; + + rdmsrl(MSR_MTRRcap, mtrrcap); + if (!(mtrrcap & MTRR_CAP_SEAMRR)) + return; + + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); + if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) { + pr_info("SEAMRR base is not configured by BIOS\n"); + return; + } + + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); + if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) { + pr_info("SEAMRR is not enabled by BIOS\n"); + return; + } + + seamrr_base = base; + seamrr_mask = mask; +} + +static void detect_seam_ap(struct cpuinfo_x86 *c) +{ + u64 base, mask; + + /* + * Don't bother to detect this AP if SEAMRR is not + * enabled after earlier detections. + */ + if (!__seamrr_enabled()) + return; + + rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base); + rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask); + + if (base == seamrr_base && mask == seamrr_mask) + return; + + pr_err("Inconsistent SEAMRR configuration by BIOS\n"); + /* Mark SEAMRR as disabled. */ + seamrr_base = 0; + seamrr_mask = 0; +} + +static void detect_seam(struct cpuinfo_x86 *c) +{ + if (c == &boot_cpu_data) + detect_seam_bsp(c); + else + detect_seam_ap(c); +} + +void tdx_detect_cpu(struct cpuinfo_x86 *c) +{ + detect_seam(c); +}
Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious host and certain physical attacks. To support TDX, a new CPU mode called Secure Arbitration Mode (SEAM) is added to Intel processors.

SEAM is an extension to the VMX architecture to define a new VMX root operation (SEAM VMX root) and a new VMX non-root operation (SEAM VMX non-root). SEAM VMX root operation is designed to host a CPU-attested software module called the 'TDX module' which implements functions to manage crypto-protected VMs called Trust Domains (TD). It is also designed to additionally host a CPU-attested software module called the 'Intel Persistent SEAMLDR (Intel P-SEAMLDR)' to load and update the TDX module.

Software modules in SEAM VMX root run in a memory region defined by the SEAM range register (SEAMRR). So the first step in detecting Intel TDX is to detect the validity of SEAMRR.

The presence of SEAMRR is reported via a new SEAMRR bit (15) of the IA32_MTRRCAP MSR. The SEAMRR range registers consist of a pair of MSRs: IA32_SEAMRR_PHYS_BASE and IA32_SEAMRR_PHYS_MASK. BIOS is expected to configure SEAMRR with the same value across all cores. In case of BIOS misconfiguration, detect and compare SEAMRR on all cpus.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to crypto-protect TD guests. Part of the MKTME KeyIDs are reserved as "TDX private KeyIDs", or "TDX KeyIDs" for short. Similar to detecting SEAMRR, detecting TDX private KeyIDs also needs to be done on all cpus to detect any BIOS misconfiguration.

To start to support TDX, create a new arch/x86/virt/vmx/tdx/tdx.c for TDX host kernel support. Add a function to detect all TDX preliminaries (SEAMRR, TDX private KeyIDs) for a given cpu when it is brought up. As the first step, detect the validity of SEAMRR.

Also add a new Kconfig option CONFIG_INTEL_TDX_HOST to opt in to TDX host kernel support (to distinguish it from TDX guest kernel support).

Signed-off-by: Kai Huang <kai.huang@intel.com>
---
 arch/x86/Kconfig               |  12 ++++
 arch/x86/Makefile              |   2 +
 arch/x86/include/asm/tdx.h     |   9 +++
 arch/x86/kernel/cpu/intel.c    |   3 +
 arch/x86/virt/Makefile         |   2 +
 arch/x86/virt/vmx/Makefile     |   2 +
 arch/x86/virt/vmx/tdx/Makefile |   2 +
 arch/x86/virt/vmx/tdx/tdx.c    | 102 +++++++++++++++++++++++++++++++++
 8 files changed, 134 insertions(+)
 create mode 100644 arch/x86/virt/Makefile
 create mode 100644 arch/x86/virt/vmx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx/tdx.c