From patchwork Sun Mar 13 10:49:41 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12779153
From: Kai Huang
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com,
    kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
    peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com,
    dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com
Subject: [PATCH v2 01/21] x86/virt/tdx: Detect SEAM
Date: Sun, 13 Mar 2022 23:49:41 +1300
X-Mailing-List: kvm@vger.kernel.org

Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks.  To support TDX, a new CPU mode called
Secure Arbitration Mode (SEAM) is added to Intel processors.  SEAM is an
extension to the VMX architecture that defines a new VMX root operation
(SEAM VMX root) and a new VMX non-root operation (SEAM VMX non-root).

SEAM VMX root operation is designed to host a CPU-attested software
module called the 'TDX module', which implements functions to manage
crypto-protected VMs called Trust Domains (TDs).  It is also designed to
host a second CPU-attested software module, the 'Intel Persistent
SEAMLDR (Intel P-SEAMLDR)', which loads and updates the TDX module.

Software modules in SEAM VMX root run in a memory region defined by the
SEAM range register (SEAMRR), so the first step in detecting Intel TDX
is to check the validity of SEAMRR.

The presence of SEAMRR is reported via a new SEAMRR bit (15) of the
IA32_MTRRCAP MSR.  The SEAMRR range itself consists of a pair of MSRs:
IA32_SEAMRR_PHYS_BASE and IA32_SEAMRR_PHYS_MASK.  BIOS is expected to
configure SEAMRR with the same value across all cores.
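The SEAMRR validity test described here boils down to a few bit checks.  As
a rough user-space sketch (the helper name is made up for illustration; the
rdmsrl() reads are replaced by plain parameters, and the bit positions are
the ones used by this series):

```c
#include <stdint.h>

/* Bit positions as used by this series: CONFIGURED is bit 3 of
 * IA32_SEAMRR_PHYS_BASE; ENABLED (bit 11) and LOCKED (bit 10) of
 * IA32_SEAMRR_PHYS_MASK must both be set for SEAMRR to be usable. */
#define SEAMRR_PHYS_BASE_CONFIGURED	(1ULL << 3)
#define SEAMRR_PHYS_MASK_ENABLED	(1ULL << 11)
#define SEAMRR_PHYS_MASK_LOCKED		(1ULL << 10)
#define SEAMRR_ENABLED_BITS \
	(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)

/* Return 1 if the (base, mask) MSR pair describes a valid SEAMRR. */
int seamrr_valid(uint64_t base, uint64_t mask)
{
	if (!(base & SEAMRR_PHYS_BASE_CONFIGURED))
		return 0;	/* BIOS did not configure the base */
	return (mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
}
```

Note that both the ENABLED and LOCKED bits must be set: an enabled but
unlocked SEAMRR still counts as invalid.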
In case of BIOS misconfiguration, detect and compare SEAMRR on all CPUs.

To start supporting TDX, create a new directory, arch/x86/virt/vmx/, for
non-KVM host kernel virtualization support on Intel platforms, and create
a new tdx.c under it for TDX host kernel support.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
crypto-protect TD guests.  Part of the MKTME KeyIDs are reserved as
"TDX private KeyIDs", or "TDX KeyIDs" for short.  Similar to detecting
SEAMRR, detecting TDX private KeyIDs also needs to be done on all CPUs
to catch any BIOS misconfiguration.

Add a function to detect all TDX preliminaries (SEAMRR, TDX private
KeyIDs) for a given CPU when it is brought up.  As the first step,
detect the validity of SEAMRR.

Also add a new Kconfig option, CONFIG_INTEL_TDX_HOST, to opt in to TDX
host kernel support (as distinguished from TDX guest kernel support).

Signed-off-by: Kai Huang
---
 arch/x86/Kconfig            |  12 +++++
 arch/x86/Makefile           |   2 +
 arch/x86/include/asm/tdx.h  |   9 ++++
 arch/x86/kernel/cpu/intel.c |   3 ++
 arch/x86/virt/Makefile      |   2 +
 arch/x86/virt/vmx/Makefile  |   2 +
 arch/x86/virt/vmx/tdx.c     | 102 ++++++++++++++++++++++++++++++++++++
 7 files changed, 132 insertions(+)
 create mode 100644 arch/x86/virt/Makefile
 create mode 100644 arch/x86/virt/vmx/Makefile
 create mode 100644 arch/x86/virt/vmx/tdx.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fb2706f7f04a..f4c5481cca46 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1956,6 +1956,18 @@ config X86_SGX

 	  If unsure, say N.

+config INTEL_TDX_HOST
+	bool "Intel Trust Domain Extensions (TDX) host support"
+	default n
+	depends on CPU_SUP_INTEL
+	depends on X86_64
+	help
+	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
+	  host and certain physical attacks.  This option enables necessary TDX
+	  support in host kernel to run protected VMs.
+
+	  If unsure, say N.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e84cdd409b64..83a6a5a2e244 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -238,6 +238,8 @@ head-y += arch/x86/kernel/platform-quirks.o

 libs-y += arch/x86/lib/

+core-y += arch/x86/virt/
+
 # drivers-y are linked after core-y
 drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
 drivers-$(CONFIG_PCI) += arch/x86/pci/

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6a97d42b0de9..605d87ab580e 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -11,6 +11,8 @@

 #ifndef __ASSEMBLY__

+#include
+
 /*
  * Used to gather the output registers values of the TDCALL and SEAMCALL
  * instructions when requesting services from the TDX module.
  */
@@ -78,5 +80,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 	return -ENODEV;
 }
 #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
+
+#ifdef CONFIG_INTEL_TDX_HOST
+void tdx_detect_cpu(struct cpuinfo_x86 *c);
+#else
+static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
+#endif /* CONFIG_INTEL_TDX_HOST */
+
 #endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_TDX_H */

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8321c43554a1..b142a640fb8e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_X86_64
 #include
@@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
 	if (cpu_has(c, X86_FEATURE_TME))
 		detect_tme(c);

+	tdx_detect_cpu(c);
+
 	init_intel_misc_features(c);

 	if (tsx_ctrl_state == TSX_CTRL_ENABLE)

diff --git a/arch/x86/virt/Makefile b/arch/x86/virt/Makefile
new file mode 100644
index 000000000000..1e36502cd738
--- /dev/null
+++ b/arch/x86/virt/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-y += vmx/

diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
new file mode
100644
index 000000000000..1bd688684716
--- /dev/null
+++ b/arch/x86/virt/vmx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
new file mode 100644
index 000000000000..03f35c75f439
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2022 Intel Corporation.
+ *
+ * Intel Trusted Domain Extensions (TDX) support
+ */
+
+#define pr_fmt(fmt)	"tdx: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */
+#define MTRR_CAP_SEAMRR			BIT(15)
+
+/* Core-scope Intel SEAMRR base and mask registers. */
+#define MSR_IA32_SEAMRR_PHYS_BASE	0x00001400
+#define MSR_IA32_SEAMRR_PHYS_MASK	0x00001401
+
+#define SEAMRR_PHYS_BASE_CONFIGURED	BIT_ULL(3)
+#define SEAMRR_PHYS_MASK_ENABLED	BIT_ULL(11)
+#define SEAMRR_PHYS_MASK_LOCKED		BIT_ULL(10)
+
+#define SEAMRR_ENABLED_BITS \
+	(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)
+
+/* BIOS must configure SEAMRR registers for all cores consistently */
+static u64 seamrr_base, seamrr_mask;
+
+static bool __seamrr_enabled(void)
+{
+	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
+}
+
+static void detect_seam_bsp(struct cpuinfo_x86 *c)
+{
+	u64 mtrrcap, base, mask;
+
+	/* SEAMRR is reported via MTRRcap */
+	if (!boot_cpu_has(X86_FEATURE_MTRR))
+		return;
+
+	rdmsrl(MSR_MTRRcap, mtrrcap);
+	if (!(mtrrcap & MTRR_CAP_SEAMRR))
+		return;
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+	if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) {
+		pr_info("SEAMRR base is not configured by BIOS\n");
+		return;
+	}
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+	if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) {
+		pr_info("SEAMRR is not enabled by BIOS\n");
+		return;
+	}
+
+	seamrr_base = base;
+	seamrr_mask = mask;
+}
+
+static void detect_seam_ap(struct cpuinfo_x86 *c)
+{
+	u64 base, mask;
+
+	/*
+	 * Don't bother to detect this AP if SEAMRR is not
+	 * enabled after earlier detections.
+	 */
+	if (!__seamrr_enabled())
+		return;
+
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+	rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+
+	if (base == seamrr_base && mask == seamrr_mask)
+		return;
+
+	pr_err("Inconsistent SEAMRR configuration by BIOS\n");
+	/* Mark SEAMRR as disabled. */
+	seamrr_base = 0;
+	seamrr_mask = 0;
+}
+
+static void detect_seam(struct cpuinfo_x86 *c)
+{
+	if (c == &boot_cpu_data)
+		detect_seam_bsp(c);
+	else
+		detect_seam_ap(c);
+}
+
+void tdx_detect_cpu(struct cpuinfo_x86 *c)
+{
+	detect_seam(c);
+}

From patchwork Sun Mar 13 10:49:42 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12779154
From: Kai Huang
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com,
    kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
    peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com,
    dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com
Subject: [PATCH v2 02/21] x86/virt/tdx: Detect TDX private KeyIDs
Date: Sun, 13 Mar 2022 23:49:42 +1300
Message-Id: <6d640fdda25b8845cd6abb041885d60095731644.1647167475.git.kai.huang@intel.com>

Pre-TDX Intel hardware has support for a memory encryption architecture
called MKTME.  The memory encryption hardware underpinning MKTME is also
used for Intel TDX.  TDX ends up "stealing" some of the physical address
space from the MKTME architecture for crypto protection of VMs.

A new MSR (IA32_MKTME_KEYID_PARTITIONING) helps to enumerate how the
MKTME-enumerated "KeyID" space is distributed between TDX and legacy
MKTME.
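Decoding this MSR amounts to simple shift/mask arithmetic.  A user-space
sketch (function names invented for the example; the layout — bits [31:0] =
number of MKTME KeyIDs, bits [63:32] = number of TDX private KeyIDs, with
TDX KeyIDs starting right after the last MKTME KeyID — is the one this
patch documents):

```c
#include <stdint.h>

/* Bits [31:0] of IA32_MKTME_KEYID_PARTITIONING: number of MKTME KeyIDs;
 * bits [63:32]: number of TDX private KeyIDs. */
uint32_t tdx_keyid_start_of(uint64_t keyid_part)
{
	/* TDX private KeyIDs start after the last MKTME KeyID. */
	return (uint32_t)((keyid_part & 0xffffffffull) + 1);
}

uint32_t tdx_keyid_num_of(uint64_t keyid_part)
{
	return (uint32_t)(keyid_part >> 32);
}
```

For instance, a hypothetical partitioning of 63 MKTME KeyIDs and 4 TDX
KeyIDs would place the TDX private KeyIDs at 64 through 67.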
KeyIDs reserved for TDX are called "TDX private KeyIDs", or "TDX KeyIDs"
for short.  The new MSR is per-package, and BIOS is responsible for
partitioning MKTME KeyIDs and TDX KeyIDs consistently among all packages.

Detect TDX private KeyIDs as a preparation to initialize TDX.  Similar to
detecting SEAMRR, detect on all CPUs to catch any potential BIOS
misconfiguration among packages.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 72 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 03f35c75f439..ba2210001ea8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -29,9 +29,28 @@
 #define SEAMRR_ENABLED_BITS \
 	(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)

+/*
+ * Intel Trusted Domain CPU Architecture Extension spec:
+ *
+ * IA32_MKTME_KEYID_PARTITIONING:
+ *
+ *   Bit [31:0]:  number of MKTME KeyIDs.
+ *   Bit [63:32]: number of TDX private KeyIDs.
+ *
+ * TDX private KeyIDs start after the last MKTME KeyID.
+ */
+#define MSR_IA32_MKTME_KEYID_PARTITIONING	0x00000087
+
+#define TDX_KEYID_START(_keyid_part)	\
+	((u32)(((_keyid_part) & 0xffffffffull) + 1))
+#define TDX_KEYID_NUM(_keyid_part)	((u32)((_keyid_part) >> 32))
+
 /* BIOS must configure SEAMRR registers for all cores consistently */
 static u64 seamrr_base, seamrr_mask;

+static u32 tdx_keyid_start;
+static u32 tdx_keyid_num;
+
 static bool __seamrr_enabled(void)
 {
 	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -96,7 +115,60 @@ static void detect_seam(struct cpuinfo_x86 *c)
 		detect_seam_ap(c);
 }

+static void detect_tdx_keyids_bsp(struct cpuinfo_x86 *c)
+{
+	u64 keyid_part;
+
+	/* TDX is built on MKTME, which is based on TME */
+	if (!boot_cpu_has(X86_FEATURE_TME))
+		return;
+
+	if (rdmsrl_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &keyid_part))
+		return;
+
+	/* If MSR value is 0, TDX is not enabled by BIOS.
+	 */
+	if (!keyid_part)
+		return;
+
+	tdx_keyid_num = TDX_KEYID_NUM(keyid_part);
+	if (!tdx_keyid_num)
+		return;
+
+	tdx_keyid_start = TDX_KEYID_START(keyid_part);
+}
+
+static void detect_tdx_keyids_ap(struct cpuinfo_x86 *c)
+{
+	u64 keyid_part;
+
+	/*
+	 * Don't bother to detect this AP if TDX KeyIDs are
+	 * not detected or cleared after earlier detections.
+	 */
+	if (!tdx_keyid_num)
+		return;
+
+	rdmsrl(MSR_IA32_MKTME_KEYID_PARTITIONING, keyid_part);
+
+	if ((tdx_keyid_start == TDX_KEYID_START(keyid_part)) &&
+	    (tdx_keyid_num == TDX_KEYID_NUM(keyid_part)))
+		return;
+
+	pr_err("Inconsistent TDX KeyID configuration among packages by BIOS\n");
+	tdx_keyid_start = 0;
+	tdx_keyid_num = 0;
+}
+
+static void detect_tdx_keyids(struct cpuinfo_x86 *c)
+{
+	if (c == &boot_cpu_data)
+		detect_tdx_keyids_bsp(c);
+	else
+		detect_tdx_keyids_ap(c);
+}
+
 void tdx_detect_cpu(struct cpuinfo_x86 *c)
 {
 	detect_seam(c);
+	detect_tdx_keyids(c);
 }

From patchwork Sun Mar 13 10:49:43 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12779158
From: Kai Huang
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com,
    kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
    peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com,
    dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com
Subject: [PATCH v2 03/21] x86/virt/tdx: Implement the SEAMCALL base function
Date: Sun, 13 Mar 2022 23:49:43 +1300
Message-Id: <269a053607357eedd9a1e8ddf0e7240ae0c3985c.1647167475.git.kai.huang@intel.com>

Secure Arbitration Mode (SEAM) is an extension of the VMX architecture.
It defines a new VMX root operation (SEAM VMX root) and a new VMX
non-root operation (SEAM VMX non-root), which are isolated from legacy
VMX root and VMX non-root modes.

A CPU-attested software module (called the 'TDX module') runs in SEAM
VMX root to manage the crypto-protected VMs running in SEAM VMX
non-root.  SEAM VMX root is also used to host another CPU-attested
software module (called the 'P-SEAMLDR') to load and update the TDX
module.

The host kernel transitions to either the P-SEAMLDR or the TDX module
via the new SEAMCALL instruction.  SEAMCALLs are host-side interface
functions defined by the P-SEAMLDR and the TDX module around the new
SEAMCALL instruction.  They are similar to hypercalls, except they are
made by the host kernel to the SEAM software.

SEAMCALLs use an ABI different from the x86-64 System V ABI.  Instead,
they share the same ABI as TDCALL: %rax is used to carry both the
SEAMCALL leaf function number (input) and the completion status code
(output), and additional GPRs (%rcx, %rdx, %r8-%r11) may be used as
both input and output operands in individual leaf SEAMCALLs.

Implement a C function __seamcall() to do SEAMCALL using the assembly
macro used by __tdx_module_call() (the implementation of TDCALL).  The
only exception not covered here is the TDENTER leaf function, which
takes all GPRs and XMM0-XMM15 as both input and output.  The caller of
TDENTER should implement its own logic to call TDENTER directly,
instead of using this function.

The SEAMCALL instruction is essentially a VMExit from VMX root to SEAM
VMX root, and it can fail with VMfailInvalid, for instance, when the
SEAM software module is not loaded.  The C function __seamcall()
returns TDX_SEAMCALL_VMFAILINVALID, which doesn't conflict with any
actual error code of SEAMCALLs, to uniquely represent this case.
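From the C side, a caller's error handling then looks roughly like the
sketch below.  The __seamcall() body here is a mock standing in for the
assembly stub (so the calling pattern can be shown without SEAM hardware),
and the TDX_SEAMCALL_VMFAILINVALID value and the 0xdead leaf number are
invented for the example:

```c
#include <stdint.h>
#include <stddef.h>

struct tdx_module_output {
	uint64_t rcx, rdx, r8, r9, r10, r11;
};

/* Placeholder value; the real constant lives in the kernel's TDX headers. */
#define TDX_SEAMCALL_VMFAILINVALID	0x8000000000000000ULL

/* Mock of the assembly stub: a leaf number plus four inputs, with
 * leaf-specific outputs stored through the optional @out pointer. */
static uint64_t __seamcall(uint64_t fn, uint64_t rcx, uint64_t rdx,
			   uint64_t r8, uint64_t r9,
			   struct tdx_module_output *out)
{
	(void)rcx; (void)rdx; (void)r8; (void)r9;
	if (fn == 0xdead)		/* pretend the module is not loaded */
		return TDX_SEAMCALL_VMFAILINVALID;
	if (out)
		out->rcx = 42;		/* fake leaf-specific output */
	return 0;			/* completion status: success */
}

/* Typical caller pattern: distinguish "SEAM software not loaded"
 * (VMfailInvalid) from a leaf-specific completion status. */
int seamcall_ok(uint64_t fn)
{
	struct tdx_module_output out = { 0 };
	uint64_t ret = __seamcall(fn, 0, 0, 0, 0, &out);

	if (ret == TDX_SEAMCALL_VMFAILINVALID)
		return -1;		/* SEAM software not loaded */
	return ret ? -2 : 0;
}
```

The point of the sketch is the three-way split a caller must make:
VMfailInvalid, a non-zero completion status, and success with outputs.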
Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/Makefile   |  2 +-
 arch/x86/virt/vmx/seamcall.S | 52 ++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx.h      | 11 ++++++++
 3 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/virt/vmx/seamcall.S
 create mode 100644 arch/x86/virt/vmx/tdx.h

diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
index 1bd688684716..fd577619620e 100644
--- a/arch/x86/virt/vmx/Makefile
+++ b/arch/x86/virt/vmx/Makefile
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o seamcall.o

diff --git a/arch/x86/virt/vmx/seamcall.S b/arch/x86/virt/vmx/seamcall.S
new file mode 100644
index 000000000000..f31a717c00e0
--- /dev/null
+++ b/arch/x86/virt/vmx/seamcall.S
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include
+#include
+
+#include "../tdxcall.S"
+
+/*
+ * __seamcall() - Host-side interface functions to SEAM software module
+ *		  (the P-SEAMLDR or the TDX module)
+ *
+ * Transform function call register arguments into the SEAMCALL register
+ * ABI.  Return TDX_SEAMCALL_VMFAILINVALID, or the completion status of
+ * the SEAMCALL.  Additional output operands are saved in @out (if it is
+ * provided by caller).
+ *
+ *-------------------------------------------------------------------------
+ * SEAMCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX		  - SEAMCALL Leaf number.
+ * RCX,RDX,R8-R9  - SEAMCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX		  - SEAMCALL completion status code.
+ * RCX,RDX,R8-R11 - SEAMCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __seamcall() function ABI:
+ *
+ * @fn  (RDI)	- SEAMCALL Leaf number, moved to RAX
+ * @rcx (RSI)	- Input parameter 1, moved to RCX
+ * @rdx (RDX)	- Input parameter 2, moved to RDX
+ * @r8  (RCX)	- Input parameter 3, moved to R8
+ * @r9  (R8)	- Input parameter 4, moved to R9
+ *
+ * @out (R9)	- struct tdx_module_output pointer
+ *		  stored temporarily in R12 (not
+ *		  used by the P-SEAMLDR or the TDX
+ *		  module).  It can be NULL.
+ *
+ * Return (via RAX) the completion status of the SEAMCALL, or
+ * TDX_SEAMCALL_VMFAILINVALID.
+ */
+SYM_FUNC_START(__seamcall)
+	FRAME_BEGIN
+	TDX_MODULE_CALL host=1
+	FRAME_END
+	ret
+SYM_FUNC_END(__seamcall)

diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
new file mode 100644
index 000000000000..9d5b6f554c20
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_VIRT_TDX_H
+#define _X86_VIRT_TDX_H
+
+#include
+
+struct tdx_module_output;
+u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+	       struct tdx_module_output *out);
+
+#endif

From patchwork Sun Mar 13 10:49:44 2022
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 12779155
From: Kai Huang
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com,
    kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com,
    peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com,
    dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com
Subject: [PATCH v2 04/21] x86/virt/tdx: Add skeleton for detecting and
 initializing TDX on demand
Date: Sun, 13 Mar 2022 23:49:44 +1300
Message-Id: <279af00f90a93491d5ec86672506146153909e5c.1647167475.git.kai.huang@intel.com>
The TDX module is essentially a CPU-attested software module running in
the new Secure Arbitration Mode (SEAM) to protect VMs from malicious
host and certain physical attacks.  The TDX module implements the
functions to build, tear down, and start execution of the protected VMs
called Trusted Domains (TDs).

Before the TDX module can be used to create and run TD guests, it must
be loaded into the SEAM Range Register (SEAMRR) and properly
initialized.  The TDX module is expected to be loaded by BIOS before
booting to the kernel, and the kernel is expected to detect and
initialize it, using the SEAMCALLs defined by the TDX architecture.

The TDX module can be initialized only once in its lifetime.  Instead
of always initializing it at boot time, this implementation chooses an
on-demand approach, deferring TDX initialization until there is a real
need (e.g. when requested by KVM).  This avoids consuming the memory
that must be allocated by the kernel and given to the TDX module as
metadata (~1/256th of the TDX-usable memory), and also saves the time
of initializing the TDX module (and the metadata) when TDX is not used
at all.  Initializing the TDX module at runtime on demand is also more
flexible for supporting TDX module runtime updates in the future (after
updating the TDX module, it needs to be initialized again).

Introduce two placeholders, tdx_detect() and tdx_init(), to detect and
initialize the TDX module on demand, with a state machine introduced to
orchestrate the entire process (in case of multiple callers).

To start with, tdx_detect() checks SEAMRR and TDX private KeyIDs.  The
TDX module is reported as not loaded if either SEAMRR is not enabled,
or there are not enough TDX private KeyIDs to create any TD guest.
(The TDX module itself requires one global TDX private KeyID to crypto
protect its metadata.)  And tdx_init() is currently empty.
The TDX module will be initialized in multiple steps, defined by the
TDX architecture:

  1) Global initialization;
  2) Logical-CPU scope initialization;
  3) Enumerate the TDX module capabilities and platform configuration;
  4) Configure the TDX module about usable memory ranges and global
     KeyID information;
  5) Package-scope configuration for the global KeyID;
  6) Initialize usable memory ranges based on 4).

The TDX module can also be shut down at any time during its lifetime.
In case of any error during the initialization process, shut down the
module.  It's pointless to leave the module in any intermediate state
during the initialization.

SEAMCALLs used in the above steps (including shutting down the TDX
module) require SEAMRR to be enabled and the CPU to already be in VMX
operation (VMXON has been done), otherwise they generate #UD.  So far
only KVM handles entering/leaving VMX operation (VMXON/VMXOFF).
Handling of VMXON/VMXOFF could be added to this implementation, for
instance, right around the SEAMCALL.  But in the long term, more kernel
components are likely to use VMXON/VMXOFF, so a reference-based
approach to enter/leave VMX operation may be needed.  KVM is so far the
only user of TDX, therefore this implementation chooses not to handle
VMXON/VMXOFF in tdx_detect() and tdx_init(), but requires the caller to
guarantee that.
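The on-demand flow can be pictured as a small state machine.  A toy sketch
(illustrative only: locking, SEAMCALLs and the real detection are elided;
the status names mirror the enum this patch introduces, while the
*_sketch() helpers and their flag parameters are invented for the example):

```c
#include <errno.h>

enum tdx_module_status {
	TDX_MODULE_UNKNOWN,	/* nothing attempted yet */
	TDX_MODULE_NONE,	/* module not loaded */
	TDX_MODULE_LOADED,	/* loaded, not yet initialized */
	TDX_MODULE_INITIALIZED,	/* fully initialized */
	TDX_MODULE_SHUTDOWN,	/* shut down after an init error */
};

static enum tdx_module_status tdx_module_status = TDX_MODULE_UNKNOWN;

/* The first caller performs the (mocked) detection; later callers
 * just observe the recorded state. */
int tdx_detect_sketch(int module_loaded)
{
	if (tdx_module_status == TDX_MODULE_UNKNOWN)
		tdx_module_status = module_loaded ? TDX_MODULE_LOADED
						  : TDX_MODULE_NONE;

	switch (tdx_module_status) {
	case TDX_MODULE_LOADED:
	case TDX_MODULE_INITIALIZED:
		return 0;
	default:
		return -ENODEV;
	}
}

/* Initialization happens once; on error the module is shut down
 * rather than left in an intermediate state. */
int tdx_init_sketch(int init_ok)
{
	if (tdx_module_status != TDX_MODULE_LOADED)
		return -EINVAL;
	tdx_module_status = init_ok ? TDX_MODULE_INITIALIZED
				    : TDX_MODULE_SHUTDOWN;
	return init_ok ? 0 : -EFAULT;
}
```

The key property the sketch shows is stickiness: once the state leaves
TDX_MODULE_UNKNOWN, repeated callers get a consistent answer without
re-running detection.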
Signed-off-by: Kai Huang
---
 arch/x86/include/asm/tdx.h |   4 +
 arch/x86/virt/vmx/tdx.c    | 222 +++++++++++++++++++++++++++++++++++++
 2 files changed, 226 insertions(+)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 605d87ab580e..b526d41c4bbf 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -83,8 +83,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,

 #ifdef CONFIG_INTEL_TDX_HOST
 void tdx_detect_cpu(struct cpuinfo_x86 *c);
+int tdx_detect(void);
+int tdx_init(void);
 #else
 static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
+static inline int tdx_detect(void) { return -ENODEV; }
+static inline int tdx_init(void) { return -ENODEV; }
 #endif /* CONFIG_INTEL_TDX_HOST */

 #endif /* !__ASSEMBLY__ */

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index ba2210001ea8..a3517e221578 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -9,6 +9,8 @@
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -45,12 +47,33 @@
 	((u32)(((_keyid_part) & 0xffffffffull) + 1))
 #define TDX_KEYID_NUM(_keyid_part)	((u32)((_keyid_part) >> 32))

+/*
+ * TDX module status during initialization
+ */
+enum tdx_module_status_t {
+	/* TDX module status is unknown */
+	TDX_MODULE_UNKNOWN,
+	/* TDX module is not loaded */
+	TDX_MODULE_NONE,
+	/* TDX module is loaded, but not initialized */
+	TDX_MODULE_LOADED,
+	/* TDX module is fully initialized */
+	TDX_MODULE_INITIALIZED,
+	/* TDX module is shutdown due to error during initialization */
+	TDX_MODULE_SHUTDOWN,
+};
+
 /* BIOS must configure SEAMRR registers for all cores consistently */
 static u64 seamrr_base, seamrr_mask;

 static u32 tdx_keyid_start;
 static u32 tdx_keyid_num;

+static enum tdx_module_status_t tdx_module_status;
+
+/* Prevent concurrent attempts on TDX detection and initialization */
+static DEFINE_MUTEX(tdx_module_lock);
+
 static bool __seamrr_enabled(void)
 {
 	return (seamrr_mask & SEAMRR_ENABLED_BITS) ==
SEAMRR_ENABLED_BITS; @@ -172,3 +195,202 @@ void tdx_detect_cpu(struct cpuinfo_x86 *c) detect_seam(c); detect_tdx_keyids(c); } + +static bool seamrr_enabled(void) +{ + /* + * To detect any BIOS misconfiguration among cores, all logical + * cpus must have been brought up at least once. This is true + * unless the 'maxcpus' kernel command line is used to limit the + * number of cpus to be brought up during boot time. However + * 'maxcpus' is basically an invalid operation mode due to the + * MCE broadcast problem, and it should not be used on a TDX + * capable machine. Just do a paranoid check here and WARN() + * if that is not the case. + */ + if (WARN_ON_ONCE(!cpumask_equal(&cpus_booted_once_mask, + cpu_present_mask))) + return false; + + return __seamrr_enabled(); +} + +static bool tdx_keyid_sufficient(void) +{ + if (WARN_ON_ONCE(!cpumask_equal(&cpus_booted_once_mask, + cpu_present_mask))) + return false; + + /* + * TDX requires at least two KeyIDs: one global KeyID to + * protect the metadata of the TDX module and one or more + * KeyIDs to run TD guests. + */ + return tdx_keyid_num >= 2; +} + +static int __tdx_detect(void) +{ + /* The TDX module is not loaded if SEAMRR is disabled */ + if (!seamrr_enabled()) { + pr_info("SEAMRR not enabled.\n"); + goto no_tdx_module; + } + + /* + * Also do not report the TDX module as loaded if there are + * not enough TDX private KeyIDs to run any TD guests. + */ + if (!tdx_keyid_sufficient()) { + pr_info("Number of TDX private KeyIDs too small: %u.\n", + tdx_keyid_num); + goto no_tdx_module; + } + + /* Return -ENODEV until the TDX module is detected */ +no_tdx_module: + tdx_module_status = TDX_MODULE_NONE; + return -ENODEV; +} + +static int init_tdx_module(void) +{ + /* + * Return -EFAULT until all steps of TDX module + * initialization are done.
+ */ + return -EFAULT; +} + +static void shutdown_tdx_module(void) +{ + /* TODO: Shut down the TDX module */ + tdx_module_status = TDX_MODULE_SHUTDOWN; +} + +static int __tdx_init(void) +{ + int ret; + + /* + * Logical-cpu scope initialization requires calling one SEAMCALL + * on all logical cpus enabled by BIOS. Shutting down the TDX + * module also has this requirement. Furthermore, configuring + * the key of the global KeyID requires calling one SEAMCALL for + * each package. For simplicity, disable CPU hotplug in the whole + * initialization process. + * + * It's perhaps better to check whether all BIOS-enabled cpus are + * online before starting initialization, and return early if not. + * But none of 'possible', 'present' and 'online' CPU masks + * represents BIOS-enabled cpus. For example, 'possible' mask is + * impacted by 'nr_cpus' or 'possible_cpus' kernel command line. + * Just let the SEAMCALL fail if not all BIOS-enabled cpus are + * online. + */ + cpus_read_lock(); + + ret = init_tdx_module(); + + /* + * Shut down the TDX module in case of any error during the + * initialization process. It's meaningless to leave the TDX + * module in any intermediate state of the initialization process. + */ + if (ret) + shutdown_tdx_module(); + + cpus_read_unlock(); + + return ret; +} + +/** + * tdx_detect - Detect whether the TDX module has been loaded + * + * Detect whether the TDX module has been loaded and is ready for + * initialization. Only call this function when all cpus are + * already in VMX operation. + * + * This function can be called in parallel by multiple callers. + * + * Return: + * + * * -0: The TDX module has been loaded and is ready for + * initialization. + * * -ENODEV: The TDX module is not loaded. + * * -EPERM: CPU is not in VMX operation. + * * -EFAULT: Other internal fatal errors.
+ */ +int tdx_detect(void) +{ + int ret; + + mutex_lock(&tdx_module_lock); + + switch (tdx_module_status) { + case TDX_MODULE_UNKNOWN: + ret = __tdx_detect(); + break; + case TDX_MODULE_NONE: + ret = -ENODEV; + break; + case TDX_MODULE_LOADED: + case TDX_MODULE_INITIALIZED: + ret = 0; + break; + case TDX_MODULE_SHUTDOWN: + ret = -EFAULT; + break; + default: + WARN_ON(1); + ret = -EFAULT; + } + + mutex_unlock(&tdx_module_lock); + return ret; +} +EXPORT_SYMBOL_GPL(tdx_detect); + +/** + * tdx_init - Initialize the TDX module + * + * Initialize the TDX module to make it ready to run TD guests. This + * function should be called after tdx_detect() returns successfully. + * Only call this function when all cpus are online and are in VMX + * operation. CPU hotplug is temporarily disabled internally. + * + * This function can be called in parallel by multiple callers. + * + * Return: + * + * * -0: The TDX module has been successfully initialized. + * * -ENODEV: The TDX module is not loaded. + * * -EPERM: The CPU which does SEAMCALL is not in VMX operation. + * * -EFAULT: Other internal fatal errors.
+ */ +int tdx_init(void) +{ + int ret; + + mutex_lock(&tdx_module_lock); + + switch (tdx_module_status) { + case TDX_MODULE_NONE: + ret = -ENODEV; + break; + case TDX_MODULE_LOADED: + ret = __tdx_init(); + break; + case TDX_MODULE_INITIALIZED: + ret = 0; + break; + default: + ret = -EFAULT; + break; + } + mutex_unlock(&tdx_module_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(tdx_init); From patchwork Sun Mar 13 10:49:45 2022 X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779156 From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 05/21] x86/virt/tdx: Detect P-SEAMLDR and TDX module Date: Sun, 13 Mar 2022 23:49:45 +1300 Message-Id: <853199dc588fce57fd40955f64ba76e16bd0a2ca.1647167475.git.kai.huang@intel.com> The P-SEAMLDR (persistent SEAM loader) is the first software module that runs in SEAM VMX root, responsible for loading and updating the TDX module. Both the P-SEAMLDR and the TDX module are expected to be loaded before the host kernel boots. There is no CPUID or MSR to detect whether the P-SEAMLDR or the TDX module has been loaded. The SEAMCALL instruction fails with VMfailInvalid when the target SEAM software module is not loaded, so SEAMCALL can be used to detect whether the P-SEAMLDR and the TDX module are loaded. Detect the P-SEAMLDR and the TDX module by calling the SEAMLDR.INFO SEAMCALL to get the P-SEAMLDR information.
If the SEAMCALL succeeds, the P-SEAMLDR information further tells whether the TDX module is loaded or not. Also add a wrapper of __seamcall() to make SEAMCALLs to the P-SEAMLDR and the TDX module with additional defensive checks on SEAMRR and CR4.VMXE, since both detecting and initializing the TDX module require the caller of TDX to handle VMXON. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 175 +++++++++++++++++++++++++++++++++++++++- arch/x86/virt/vmx/tdx.h | 31 +++++++ 2 files changed, 205 insertions(+), 1 deletion(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index a3517e221578..b04f792f1e65 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -15,7 +15,9 @@ #include #include #include +#include #include +#include "tdx.h" /* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */ #define MTRR_CAP_SEAMRR BIT(15) @@ -74,6 +76,8 @@ static enum tdx_module_status_t tdx_module_status; /* Prevent concurrent attempts on TDX detection and initialization */ static DEFINE_MUTEX(tdx_module_lock); +static struct p_seamldr_info p_seamldr_info; + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -229,6 +233,160 @@ static bool tdx_keyid_sufficient(void) return tdx_keyid_num >= 2; } +/* + * All error codes of both the P-SEAMLDR and the TDX module SEAMCALLs + * have bit 63 set if SEAMCALL fails. + */ +#define SEAMCALL_LEAF_ERROR(_ret) ((_ret) & BIT_ULL(63)) + +/** + * seamcall - make SEAMCALL to the P-SEAMLDR or the TDX module with + * additional check on SEAMRR and CR4.VMXE + * + * @fn: SEAMCALL leaf number. + * @rcx: Input operand RCX. + * @rdx: Input operand RDX. + * @r8: Input operand R8. + * @r9: Input operand R9. + * @seamcall_ret: SEAMCALL completion status (can be NULL). + * @out: Additional output operands (can be NULL). + * + * Wrapper of __seamcall() to make SEAMCALL to the P-SEAMLDR or the TDX + * module with additional defensive check on SEAMRR and CR4.VMXE.
Caller + * to make sure SEAMRR is enabled and CPU is already in VMX operation + * before calling this function. + * + * Unlike __seamcall(), it returns kernel error code instead of SEAMCALL + * completion status, which is returned via @seamcall_ret if desired. + * + * Return: + * + * * -ENODEV: SEAMCALL failed with VMfailInvalid, or SEAMRR is not enabled. + * * -EPERM: CR4.VMXE is not enabled + * * -EFAULT: SEAMCALL failed + * * -0: SEAMCALL succeeded + */ +static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + u64 *seamcall_ret, struct tdx_module_output *out) +{ + u64 ret; + + if (WARN_ON_ONCE(!seamrr_enabled())) + return -ENODEV; + + /* + * SEAMCALL instruction requires CPU being already in VMX + * operation (VMXON has been done), otherwise it causes #UD. + * Sanity check whether CR4.VMXE has been enabled. + * + * Note VMX being enabled in CR4 doesn't mean CPU is already + * in VMX operation, but unfortunately there's no way to do + * such check. However in practice enabling CR4.VMXE and + * doing VMXON are done together (for now) so in practice it + * checks whether VMXON has been done. + * + * Preemption is disabled during the CR4.VMXE check and the + * actual SEAMCALL so VMX doesn't get disabled by other threads + * due to scheduling. + */ + preempt_disable(); + if (WARN_ON_ONCE(!cpu_vmx_enabled())) { + preempt_enable_no_resched(); + return -EPERM; + } + + ret = __seamcall(fn, rcx, rdx, r8, r9, out); + + preempt_enable_no_resched(); + + /* + * Convert SEAMCALL error code to kernel error code: + * - -ENODEV: VMfailInvalid + * - -EFAULT: SEAMCALL failed + * - 0: SEAMCALL was successful + */ + if (ret == TDX_SEAMCALL_VMFAILINVALID) + return -ENODEV; + + /* Save the completion status if caller wants to use it */ + if (seamcall_ret) + *seamcall_ret = ret; + + /* + * TDX module SEAMCALLs may also return non-zero completion + * status codes but w/o bit 63 set. 
Those codes are treated + as additional information/warning while the SEAMCALL is + treated as completed successfully. Return 0 in this case. + Caller can use @seamcall_ret to get the additional code + when it is desired. + */ + if (SEAMCALL_LEAF_ERROR(ret)) { + pr_err("SEAMCALL leaf %llu failed: 0x%llx\n", fn, ret); + return -EFAULT; + } + + return 0; +} + +static inline bool p_seamldr_ready(void) +{ + return !!p_seamldr_info.p_seamldr_ready; +} + +static inline bool tdx_module_ready(void) +{ + /* + * SEAMLDR_INFO.SEAM_READY indicates whether TDX module + * is (loaded and) ready for SEAMCALL. + */ + return p_seamldr_ready() && !!p_seamldr_info.seam_ready; +} + +/* + * Detect whether the P-SEAMLDR has been loaded by calling SEAMLDR.INFO + * SEAMCALL to get the P-SEAMLDR information, which further tells whether + * the TDX module has been loaded and is ready for SEAMCALL. The caller + * must make sure the CPU is already in VMX operation when calling this + * function. + */ +static int detect_p_seamldr(void) +{ + int ret; + + /* + * SEAMCALL fails with VMfailInvalid when SEAM software is not + * loaded, in which case seamcall() returns -ENODEV. Use this + * to detect the P-SEAMLDR. + * + * Note the P-SEAMLDR SEAMCALL also fails with VMfailInvalid when + * the P-SEAMLDR is already busy with another SEAMCALL. But this + * won't happen here as this function is only called once. + */ + ret = seamcall(P_SEAMCALL_SEAMLDR_INFO, __pa(&p_seamldr_info), + 0, 0, 0, NULL, NULL); + if (ret) { + if (ret == -ENODEV) + pr_info("P-SEAMLDR is not loaded.\n"); + else + pr_info("Failed to detect P-SEAMLDR.\n"); + + return ret; + } + + /* + * If SEAMLDR.INFO was successful, the P-SEAMLDR must be ready for + * SEAMCALL. Otherwise it's either a kernel or a firmware bug.
+ */ + if (WARN_ON_ONCE(!p_seamldr_ready())) + return -ENODEV; + + pr_info("P-SEAMLDR: version 0x%x, vendor_id: 0x%x, build_date: %u, build_num %u, major %u, minor %u\n", + p_seamldr_info.version, p_seamldr_info.vendor_id, + p_seamldr_info.build_date, p_seamldr_info.build_num, + p_seamldr_info.major, p_seamldr_info.minor); + + return 0; +} + static int __tdx_detect(void) { /* The TDX module is not loaded if SEAMRR is disabled */ @@ -247,7 +405,22 @@ static int __tdx_detect(void) goto no_tdx_module; } - /* Return -ENODEV until the TDX module is detected */ + /* + * For simplicity any error during detect_p_seamldr() marks + * TDX module as not loaded. + */ + if (detect_p_seamldr()) + goto no_tdx_module; + + if (!tdx_module_ready()) { + pr_info("TDX module is not loaded.\n"); + goto no_tdx_module; + } + + pr_info("TDX module detected.\n"); + tdx_module_status = TDX_MODULE_LOADED; + return 0; + no_tdx_module: tdx_module_status = TDX_MODULE_NONE; return -ENODEV; diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index 9d5b6f554c20..6990c93198b3 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -3,6 +3,37 @@ #define _X86_VIRT_TDX_H #include +#include + +/* + * TDX architectural data structures + */ + +#define P_SEAMLDR_INFO_ALIGNMENT 256 + +struct p_seamldr_info { + u32 version; + u32 attributes; + u32 vendor_id; + u32 build_date; + u16 build_num; + u16 minor; + u16 major; + u8 reserved0[2]; + u32 acm_x2apicid; + u8 reserved1[4]; + u8 seaminfo[128]; + u8 seam_ready; + u8 seam_debug; + u8 p_seamldr_ready; + u8 reserved2[88]; +} __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT); + +/* + * P-SEAMLDR SEAMCALL leaf function + */ +#define P_SEAMLDR_SEAMCALL_BASE BIT_ULL(63) +#define P_SEAMCALL_SEAMLDR_INFO (P_SEAMLDR_SEAMCALL_BASE | 0x0) struct tdx_module_output; u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, From patchwork Sun Mar 13 10:49:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779157 From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 06/21] x86/virt/tdx: Shut down TDX module in case of error Date: Sun, 13 Mar 2022 23:49:46 +1300 TDX supports shutting down the TDX module at any time during its lifetime. After the TDX module is shut down, no further SEAMCALL can be made on any logical cpu. Shut down the TDX module in case of any error during the initialization process. It's pointless to leave the TDX module in some intermediate state. Shutting down the TDX module requires calling TDH.SYS.LP.SHUTDOWN on all BIOS-enabled cpus, and the SEAMCALL can run concurrently on different cpus. Implement a mechanism to run SEAMCALL concurrently on all online cpus. Logical-cpu scope initialization will use it too.
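The concurrent-SEAMCALL mechanism described above can be modeled in a short userspace sketch (illustrative only; fake_seamcall() and the sequential loop are assumptions standing in for seamcall() and on_each_cpu(), since only the error-aggregation logic is being shown):

```c
#include <assert.h>

/*
 * Userspace model of the patch's seamcall_ctx: run the (fake)
 * SEAMCALL once per cpu and record any failure in a shared error
 * field, the way the patch does with an atomic_t.  Concurrency is
 * modeled sequentially here for simplicity.
 */
struct seamcall_ctx {
	int err;	/* stands in for the patch's atomic_t err */
};

/* Hypothetical stand-in for seamcall(); "cpu" 2 is made to fail. */
static int fake_seamcall(int cpu)
{
	return cpu == 2 ? -14 /* simulated -EFAULT */ : 0;
}

/* Aggregate result is nonzero iff the call failed on any cpu. */
static int seamcall_on_each_cpu(struct seamcall_ctx *sc, int nr_cpus)
{
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		int ret = fake_seamcall(cpu);

		if (ret)
			sc->err = ret;	/* any failure fails the whole call */
	}
	return sc->err;
}
```

In the real patch the per-cpu calls run concurrently via on_each_cpu() and the error store is atomic, so the last failing cpu's error code wins; the aggregation rule is the same.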
Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 40 +++++++++++++++++++++++++++++++++++++++- arch/x86/virt/vmx/tdx.h | 5 +++++ 2 files changed, 44 insertions(+), 1 deletion(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index b04f792f1e65..d87af534db51 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -11,6 +11,8 @@ #include #include #include +#include +#include #include #include #include @@ -328,6 +330,39 @@ static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, return 0; } +/* Data structure to make SEAMCALL on multiple CPUs concurrently */ +struct seamcall_ctx { + u64 fn; + u64 rcx; + u64 rdx; + u64 r8; + u64 r9; + atomic_t err; + u64 seamcall_ret; + struct tdx_module_output out; +}; + +static void seamcall_smp_call_function(void *data) +{ + struct seamcall_ctx *sc = data; + int ret; + + ret = seamcall(sc->fn, sc->rcx, sc->rdx, sc->r8, sc->r9, + &sc->seamcall_ret, &sc->out); + if (ret) + atomic_set(&sc->err, ret); +} + +/* + * Call the SEAMCALL on all online cpus concurrently. + * Return error if SEAMCALL fails on any cpu. 
+ */ +static int seamcall_on_each_cpu(struct seamcall_ctx *sc) +{ + on_each_cpu(seamcall_smp_call_function, sc, true); + return atomic_read(&sc->err); +} + static inline bool p_seamldr_ready(void) { return !!p_seamldr_info.p_seamldr_ready; @@ -437,7 +472,10 @@ static int init_tdx_module(void) static void shutdown_tdx_module(void) { - /* TODO: Shut down the TDX module */ + struct seamcall_ctx sc = { .fn = TDH_SYS_LP_SHUTDOWN }; + + seamcall_on_each_cpu(&sc); + tdx_module_status = TDX_MODULE_SHUTDOWN; } diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index 6990c93198b3..dcc1f6dfe378 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -35,6 +35,11 @@ struct p_seamldr_info { #define P_SEAMLDR_SEAMCALL_BASE BIT_ULL(63) #define P_SEAMCALL_SEAMLDR_INFO (P_SEAMLDR_SEAMCALL_BASE | 0x0) +/* + * TDX module SEAMCALL leaf functions + */ +#define TDH_SYS_LP_SHUTDOWN 44 + struct tdx_module_output; u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, struct tdx_module_output *out); From patchwork Sun Mar 13 10:49:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779159 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03FCDC433EF for ; Sun, 13 Mar 2022 10:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234268AbiCMKwF (ORCPT ); Sun, 13 Mar 2022 06:52:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41846 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233248AbiCMKvn (ORCPT ); Sun, 13 Mar 2022 06:51:43 -0400 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58B2436324; Sun, 13 Mar 2022 03:50:33 
From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 07/21] x86/virt/tdx: Do TDX module global initialization Date: Sun, 13 Mar 2022 23:49:47 +1300 Message-Id: <35c2921238bab117fea6ce6feb8604e83750ac39.1647167475.git.kai.huang@intel.com> Do the TDX module global initialization which requires calling TDH.SYS.INIT once on any
logical cpu. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 11 ++++++++++- arch/x86/virt/vmx/tdx.h | 1 + 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index d87af534db51..45e7404b5d81 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -463,11 +463,20 @@ static int __tdx_detect(void) static int init_tdx_module(void) { + int ret; + + /* TDX module global initialization */ + ret = seamcall(TDH_SYS_INIT, 0, 0, 0, 0, NULL, NULL); + if (ret) + goto out; + /* * Return -EFAULT until all steps of TDX module * initialization are done. */ - return -EFAULT; + ret = -EFAULT; +out: + return ret; } static void shutdown_tdx_module(void) diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index dcc1f6dfe378..f0983b1936d8 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -38,6 +38,7 @@ struct p_seamldr_info { /* * TDX module SEAMCALL leaf functions */ +#define TDH_SYS_INIT 33 #define TDH_SYS_LP_SHUTDOWN 44 struct tdx_module_output; From patchwork Sun Mar 13 10:49:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779162 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E8C3C433F5 for ; Sun, 13 Mar 2022 10:51:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234246AbiCMKwR (ORCPT ); Sun, 13 Mar 2022 06:52:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41664 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234241AbiCMKv4 (ORCPT ); Sun, 13 Mar 2022 06:51:56 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 08/21] x86/virt/tdx: Do logical-cpu scope TDX module initialization Date: Sun, 13 Mar 2022 23:49:48 +1300 Logical-cpu scope initialization requires calling TDH.SYS.LP.INIT on all BIOS-enabled cpus, otherwise the TDH.SYS.CONFIG
SEAMCALL will fail. TDH.SYS.LP.INIT can be called concurrently on all cpus. Following global initialization, do the logical-cpu scope initialization by calling TDH.SYS.LP.INIT on all online cpus. Whether all BIOS-enabled cpus are online is not checked here for simplicity. The user of TDX should guarantee all BIOS-enabled cpus are online. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 12 ++++++++++++ arch/x86/virt/vmx/tdx.h | 1 + 2 files changed, 13 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 45e7404b5d81..4b0c285d844b 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -461,6 +461,13 @@ static int __tdx_detect(void) return -ENODEV; } +static int tdx_module_init_cpus(void) +{ + struct seamcall_ctx sc = { .fn = TDH_SYS_LP_INIT }; + + return seamcall_on_each_cpu(&sc); +} + static int init_tdx_module(void) { int ret; @@ -470,6 +477,11 @@ static int init_tdx_module(void) if (ret) goto out; + /* Logical-cpu scope initialization */ + ret = tdx_module_init_cpus(); + if (ret) + goto out; + /* * Return -EFAULT until all steps of TDX module * initialization are done. 
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index f0983b1936d8..b8cfdd6e12f3 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -39,6 +39,7 @@ struct p_seamldr_info { * TDX module SEAMCALL leaf functions */ #define TDH_SYS_INIT 33 +#define TDH_SYS_LP_INIT 35 #define TDH_SYS_LP_SHUTDOWN 44 struct tdx_module_output; From patchwork Sun Mar 13 10:49:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779160 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A1A9C433FE for ; Sun, 13 Mar 2022 10:51:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233332AbiCMKwL (ORCPT ); Sun, 13 Mar 2022 06:52:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234257AbiCMKwE (ORCPT ); Sun, 13 Mar 2022 06:52:04 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EAB883CFF4; Sun, 13 Mar 2022 03:50:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647168640; x=1678704640; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8xcZDRDnhd61U/E3nwRvsQhC/7pCuXs6OXRSCJIce8I=; b=IknxGArsjBgTtl1rCVdMuotMuxXjuDdQvTC74eiJu7c6UJYwMLn+vyQj pC+Qo3SZ7V+vpzbuU4hccgcBUrohsnmnL+1hjbk7HPHqyv1734fDtj5u/ mhrhZRr7wBD+9m4ynxXYypm3urcLmsbUVcbBbADZ1FGBE6gHN31SoY4w8 qecKtMDNEP49Eg4lrFJiEUrmUKE7SoG6c/wtNifTfRG4iecctvZRvOm9X gSNQBSv5YU3GWfGUn3QKq4TiyKSrmtOfbuzSTP/e/0R9ceGiE8pNtgXtY 64BzKnrwikO1F4SPj2WvcTgxjw2q3XiRaUjsk6D43b9WJDjRDvc7/n3fH Q==; X-IronPort-AV: 
From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 09/21] x86/virt/tdx: Get information about TDX module and convertible memory Date: Sun, 13 Mar 2022 23:49:49 +1300 Message-Id: <98c1010509aa412e7f05b12187cacf40451d5246.1647167475.git.kai.huang@intel.com> TDX provides increased levels of memory confidentiality and integrity. This requires special hardware support for features like memory encryption and storage of memory integrity checksums. Not all memory satisfies these requirements. As a result, TDX introduced the concept of a "Convertible Memory Region" (CMR). During boot, the firmware builds a list of all of the memory ranges which can provide the TDX security guarantees. The list of these ranges, along with TDX module information, is available to the kernel by querying the TDX module via the TDH.SYS.INFO SEAMCALL. The host kernel can choose whether or not to use all convertible memory regions as TDX memory.
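The CMR array constraints described here (entries sorted by base address, non-overlapping, with empty entries only at the tail) can be checked with a small userspace model (illustrative only; count_valid_cmrs() is a hypothetical helper mirroring the validation the patch's sanitize_cmrs() performs in-kernel):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the patch's struct cmr_info. */
struct cmr_info {
	uint64_t base;
	uint64_t size;
};

/*
 * Validate a CMR array: valid (size != 0) entries must come first,
 * sorted by base and non-overlapping, and no valid entry may follow
 * an empty one.  Returns the number of valid leading CMRs, or -1 if
 * the array is malformed (the patch returns -EFAULT in those cases).
 */
static int count_valid_cmrs(const struct cmr_info *cmr, int num)
{
	int i, j;

	for (i = 0; i < num && cmr[i].size; i++) {
		/* must be in ascending order and non-overlapping */
		if (i > 0 && cmr[i - 1].base + cmr[i - 1].size > cmr[i].base)
			return -1;
	}
	/* no valid entry may follow an empty one */
	for (j = i; j < num; j++)
		if (cmr[j].size)
			return -1;
	/* at least one valid CMR is required */
	return i ? i : -1;
}
```

This mirrors the three failure cases the patch reports as firmware bugs: out-of-order or overlapping CMRs, an invalid entry between valid ones, and an array with no valid CMR at all.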
Before the TDX module is ready to create any TD guests, all TDX memory regions that the host kernel intends to use must be configured to the TDX module, using specific data structures defined by the TDX architecture. Constructing those structures requires information about both the TDX module and the Convertible Memory Regions. Call TDH.SYS.INFO to get this information in preparation for constructing those structures. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 127 ++++++++++++++++++++++++++++++++++++++++ arch/x86/virt/vmx/tdx.h | 61 +++++++++++++++++++ 2 files changed, 188 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 4b0c285d844b..eb585bc5edda 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -80,6 +80,11 @@ static DEFINE_MUTEX(tdx_module_lock); static struct p_seamldr_info p_seamldr_info; +/* Base address of CMR array needs to be 512-byte aligned. */ +static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMENT); +static int tdx_cmr_num; +static struct tdsysinfo_struct tdx_sysinfo; + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -468,6 +473,123 @@ static int tdx_module_init_cpus(void) return seamcall_on_each_cpu(&sc); } +static inline bool cmr_valid(struct cmr_info *cmr) +{ + return !!cmr->size; +} + +static void print_cmrs(struct cmr_info *cmr_array, int cmr_num, + const char *name) +{ + int i; + + for (i = 0; i < cmr_num; i++) { + struct cmr_info *cmr = &cmr_array[i]; + + pr_info("%s : [0x%llx, 0x%llx)\n", name, + cmr->base, cmr->base + cmr->size); + } +} + +static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num) +{ + int i, j; + + /* + * Intel TDX module spec, 20.7.3 CMR_INFO: + * + * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry + * array of CMR_INFO entries. The CMRs are sorted from the + * lowest base address to the highest base address, and they + * are non-overlapping.
+ * + * This implies that BIOS may generate invalid empty entries + * if total CMRs are less than 32. Skip them manually. + */ + for (i = 0; i < cmr_num; i++) { + struct cmr_info *cmr = &cmr_array[i]; + struct cmr_info *prev_cmr = NULL; + + /* Skip further invalid CMRs */ + if (!cmr_valid(cmr)) + break; + + if (i > 0) + prev_cmr = &cmr_array[i - 1]; + + /* + * It is a TDX firmware bug if CMRs are not + * in address ascending order. + */ + if (prev_cmr && ((prev_cmr->base + prev_cmr->size) > + cmr->base)) { + pr_err("Firmware bug: CMRs not in address ascending order.\n"); + return -EFAULT; + } + } + + /* + * Also, a sane BIOS should never generate invalid CMR(s) between + * two valid CMRs. Sanity check this and simply return error in + * this case. + */ + for (j = i; j < cmr_num; j++) + if (cmr_valid(&cmr_array[j])) { + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n"); + return -EFAULT; + } + + /* + * Trim all tail invalid empty CMRs. BIOS should generate at + * least one valid CMR, otherwise it's a TDX firmware bug. + */ + tdx_cmr_num = i; + if (!tdx_cmr_num) { + pr_err("Firmware bug: No valid CMR.\n"); + return -EFAULT; + } + + /* Print kernel sanitized CMRs */ + print_cmrs(tdx_cmr_array, tdx_cmr_num, "Kernel-sanitized-CMR"); + + return 0; +} + +static int tdx_get_sysinfo(void) +{ + struct tdx_module_output out; + u64 tdsysinfo_sz, cmr_num; + int ret; + + BUILD_BUG_ON(sizeof(struct tdsysinfo_struct) != TDSYSINFO_STRUCT_SIZE); + + ret = seamcall(TDH_SYS_INFO, __pa(&tdx_sysinfo), TDSYSINFO_STRUCT_SIZE, + __pa(tdx_cmr_array), MAX_CMRS, NULL, &out); + if (ret) + return ret; + + /* + * If TDH.SYS.INFO succeeds, RDX contains the actual bytes + * written to @tdx_sysinfo and R9 contains the actual entries + * written to @tdx_cmr_array. Sanity check them.
+ */ + tdsysinfo_sz = out.rdx; + cmr_num = out.r9; + if (WARN_ON_ONCE((tdsysinfo_sz > sizeof(tdx_sysinfo)) || !tdsysinfo_sz || + (cmr_num > MAX_CMRS) || !cmr_num)) + return -EFAULT; + + pr_info("TDX module: vendor_id 0x%x, major_version %u, minor_version %u, build_date %u, build_num %u", + tdx_sysinfo.vendor_id, tdx_sysinfo.major_version, + tdx_sysinfo.minor_version, tdx_sysinfo.build_date, + tdx_sysinfo.build_num); + + /* Print BIOS provided CMRs */ + print_cmrs(tdx_cmr_array, cmr_num, "BIOS-CMR"); + + return sanitize_cmrs(tdx_cmr_array, cmr_num); +} + static int init_tdx_module(void) { int ret; @@ -482,6 +604,11 @@ static int init_tdx_module(void) if (ret) goto out; + /* Get TDX module information and CMRs */ + ret = tdx_get_sysinfo(); + if (ret) + goto out; + /* * Return -EFAULT until all steps of TDX module * initialization are done. diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index b8cfdd6e12f3..2f21c45df6ac 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -29,6 +29,66 @@ struct p_seamldr_info { u8 reserved2[88]; } __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT); +struct cmr_info { + u64 base; + u64 size; +} __packed; + +#define MAX_CMRS 32 +#define CMR_INFO_ARRAY_ALIGNMENT 512 + +struct cpuid_config { + u32 leaf; + u32 sub_leaf; + u32 eax; + u32 ebx; + u32 ecx; + u32 edx; +} __packed; + +#define TDSYSINFO_STRUCT_SIZE 1024 +#define TDSYSINFO_STRUCT_ALIGNMENT 1024 + +struct tdsysinfo_struct { + /* TDX-SEAM Module Info */ + u32 attributes; + u32 vendor_id; + u32 build_date; + u16 build_num; + u16 minor_version; + u16 major_version; + u8 reserved0[14]; + /* Memory Info */ + u16 max_tdmrs; + u16 max_reserved_per_tdmr; + u16 pamt_entry_size; + u8 reserved1[10]; + /* Control Struct Info */ + u16 tdcs_base_size; + u8 reserved2[2]; + u16 tdvps_base_size; + u8 tdvps_xfam_dependent_size; + u8 reserved3[9]; + /* TD Capabilities */ + u64 attributes_fixed0; + u64 attributes_fixed1; + u64 xfam_fixed0; + u64 xfam_fixed1; + u8 
reserved4[32]; + u32 num_cpuid_config; + /* + * The actual number of CPUID_CONFIG depends on above + * 'num_cpuid_config'. The size of 'struct tdsysinfo_struct' + * is 1024B defined by TDX architecture. Use a union with + * specific padding to make 'sizeof(struct tdsysinfo_struct)' + * equal to 1024. + */ + union { + struct cpuid_config cpuid_configs[0]; + u8 reserved5[892]; + }; +} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT); + /* * P-SEAMLDR SEAMCALL leaf function */ @@ -38,6 +98,7 @@ struct p_seamldr_info { /* * TDX module SEAMCALL leaf functions */ +#define TDH_SYS_INFO 32 #define TDH_SYS_INIT 33 #define TDH_SYS_LP_INIT 35 #define TDH_SYS_LP_SHUTDOWN 44
From patchwork Sun Mar 13 10:49:50 2022 X-Patchwork-Id: 12779161 From: Kai Huang Subject: [PATCH v2 10/21] x86/virt/tdx: Add placeholder to cover all system RAM as TDX memory Date: Sun, 13 Mar 2022 23:49:50 +1300
TDX provides increased levels of memory confidentiality and integrity. This requires special hardware support for features like memory encryption and storage of memory integrity checksums. Not all memory satisfies these requirements. As a result, TDX introduced the concept of a "Convertible Memory Region" (CMR). During boot, the firmware builds a list of all of the memory ranges which can provide the TDX security guarantees.
The list of these ranges, along with TDX module information, is available to the kernel by querying the TDX module. In order to provide crypto protection to TD guests, the TDX architecture also needs additional metadata to record things like which TD guest "owns" a given page of memory. This metadata essentially serves as the 'struct page' for the TDX module. The space for this metadata is not reserved by the hardware upfront and must be allocated by the kernel and given to the TDX module. Since this metadata consumes space, the VMM can choose whether or not to allocate it for a given area of convertible memory. If it chooses not to, the memory cannot receive TDX protections and cannot be used by TDX guests as private memory. For every memory region that the VMM wants to use as TDX memory, it sets up a "TD Memory Region" (TDMR). Each TDMR represents a physically contiguous convertible range and must also have its own physically contiguous metadata table, referred to as a Physical Address Metadata Table (PAMT), to track status for each page in the TDMR range. Unlike a CMR, each TDMR requires 1G granularity and alignment. To support physical RAM areas that don't meet those strict requirements, each TDMR permits a number of internal "reserved areas" which can be placed over memory holes. If PAMT metadata is placed within a TDMR it must be covered by one of these reserved areas. Let's summarize the concepts: CMR - Firmware-enumerated physical ranges that support TDX. CMRs are 4K aligned. TDMR - Physical address range which is chosen by the kernel to support TDX. 1G granularity and alignment required. Each TDMR has reserved areas where TDX memory holes and overlapping PAMTs can be placed. PAMT - Physically contiguous TDX metadata. One table for each page size per TDMR. Roughly 1/256th of TDMR in size. 256G TDMR = ~1G PAMT. As one step of initializing the TDX module, the memory regions that the TDX module can use must be configured to the TDX module via an array of TDMRs.
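The "roughly 1/256th" PAMT estimate above can be checked with simple arithmetic: one PAMT entry per 4K page, with (as assumed here for illustration; the entry size is reported by the module in `pamt_entry_size`) a 16-byte entry, gives 16/4096 = 1/256 of the TDMR size. A standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Back-of-the-envelope PAMT sizing for the 4K page size: one entry per
 * 4K page in the TDMR. With a 16-byte entry this is 16/4096 = 1/256 of
 * the TDMR size, matching the "256G TDMR = ~1G PAMT" estimate above.
 */
uint64_t pamt_4k_size(uint64_t tdmr_size, uint64_t pamt_entry_size)
{
	return (tdmr_size / 4096) * pamt_entry_size;
}
```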
Constructing TDMRs to build the TDX memory consists of the following steps: 1) Create TDMRs to cover all memory regions that the TDX module can use; 2) Allocate and set up PAMT for each TDMR; 3) Set up reserved areas for each TDMR. Add a placeholder right after getting TDX module and CMRs information to construct TDMRs to do the above steps, as preparation for configuring the TDX module. Always free TDMRs at the end of the initialization (whether it succeeded or not), as TDMRs are only used during the initialization. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 47 +++++++++++++++++++++++++++++++++++++++++ arch/x86/virt/vmx/tdx.h | 23 ++++++++++++++++++++ 2 files changed, 70 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index eb585bc5edda..1571bf192dde 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -590,8 +591,29 @@ static int tdx_get_sysinfo(void) return sanitize_cmrs(tdx_cmr_array, cmr_num); } +static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) +{ + int i; + + for (i = 0; i < tdmr_num; i++) { + struct tdmr_info *tdmr = tdmr_array[i]; + + /* kfree() works with NULL */ + kfree(tdmr); + tdmr_array[i] = NULL; + } +} + +static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) +{ + /* Return -EFAULT until constructing TDMRs is done */ + return -EFAULT; +} + static int init_tdx_module(void) { + struct tdmr_info **tdmr_array; + int tdmr_num = 0; int ret; /* TDX module global initialization */ @@ -609,11 +631,36 @@ static int init_tdx_module(void) if (ret) goto out; + /* + * Prepare enough space to hold pointers to TDMRs (TDMR_INFO). + * TDX requires TDMR_INFO to be 512-byte aligned. Each TDMR is + * allocated individually within construct_tdmrs() to meet + * this requirement.
+ */ + tdmr_array = kcalloc(tdx_sysinfo.max_tdmrs, sizeof(struct tdmr_info *), + GFP_KERNEL); + if (!tdmr_array) { + ret = -ENOMEM; + goto out; + } + + /* Construct TDMRs to build TDX memory */ + ret = construct_tdmrs(tdmr_array, &tdmr_num); + if (ret) + goto out_free_tdmrs; + /* * Return -EFAULT until all steps of TDX module * initialization are done. */ ret = -EFAULT; +out_free_tdmrs: + /* + * TDMRs are only used during initializing TDX module. Always + * free them no matter the initialization was successful or not. + */ + free_tdmrs(tdmr_array, tdmr_num); + kfree(tdmr_array); out: return ret; } diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index 2f21c45df6ac..05bf9fe6bd00 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -89,6 +89,29 @@ struct tdsysinfo_struct { }; } __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT); +struct tdmr_reserved_area { + u64 offset; + u64 size; +} __packed; + +#define TDMR_INFO_ALIGNMENT 512 + +struct tdmr_info { + u64 base; + u64 size; + u64 pamt_1g_base; + u64 pamt_1g_size; + u64 pamt_2m_base; + u64 pamt_2m_size; + u64 pamt_4k_base; + u64 pamt_4k_size; + /* + * Actual number of reserved areas depends on + * 'struct tdsysinfo_struct'::max_reserved_per_tdmr. 
+ struct tdmr_reserved_area reserved_areas[0]; +} __packed __aligned(TDMR_INFO_ALIGNMENT); + /* * P-SEAMLDR SEAMCALL leaf function */
From patchwork Sun Mar 13 10:49:51 2022 X-Patchwork-Id: 12779163 From: Kai Huang Subject: [PATCH v2 11/21] x86/virt/tdx: Choose to use all system RAM as TDX memory Date: Sun, 13 Mar 2022 23:49:51 +1300
As one step of initializing the TDX module, the memory regions that the TDX module can use must be configured to it via an array of 'TD Memory Regions' (TDMR). The kernel is responsible for choosing which memory regions to be used as TDX memory and building the array of TDMRs to cover those memory regions. The first generation of TDX-capable platforms basically guarantees all system RAM regions during machine boot are Convertible Memory Regions (excluding the memory below 1MB) and can be used by TDX. The memory pages allocated to TD guests can be any pages managed by the page allocator. To avoid having to modify the page allocator to distinguish TDX and non-TDX memory allocation, adopt a simple policy to use all system RAM regions as TDX memory. The low 1MB pages are excluded from TDX memory since they are not in CMRs in some platforms (those pages are reserved at boot time and won't be managed by page allocator anyway).
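The low-1MB exclusion described above amounts to clamping the start of each RAM range and skipping ranges that vanish entirely. A condensed standalone sketch (mirroring, in spirit, the helper this series adds; not the kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_1M 0x100000ULL

/*
 * Clamp a RAM range [start, end) to exclude the low 1MB. Returns true
 * when nothing is left (the range sat entirely below 1MB), in which
 * case the caller should skip the entry altogether.
 */
bool skip_lowmem(uint64_t *start, uint64_t *end)
{
	if (*start < SZ_1M)
		*start = SZ_1M;
	return *start >= *end;
}
```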
This policy could be revised later if future TDX generations break the guarantee or when the size of the metadata (~1/256th of the size of the TDX usable memory) becomes a concern. At that time a CMR-aware page allocator may be necessary. Also, on the first generation of TDX-capable machines, the system RAM ranges discovered during boot time are all memory regions that the kernel can use during its runtime. This is because the first generation of TDX architecturally doesn't support ACPI memory hotplug (CMRs are generated during machine boot and are static during the machine's runtime). Also, the first generation of TDX-capable platforms don't support TDX and ACPI memory hotplug at the same time on a single machine. Another memory hotplug case is that a user may use NVDIMM as system RAM via the kmem driver. But the first generation of TDX-capable machines don't support TDX and NVDIMM simultaneously, therefore in practice this cannot happen. One special case is that a user may use the 'memmap' kernel command line to reserve part of system RAM as x86 legacy PMEMs, and can theoretically add them as system RAM via the kmem driver. This can be resolved by always treating legacy PMEMs as TDX memory. Implement a helper to loop over all RAM entries in the e820 table to find all system RAM ranges, as a preparation to convert all of them to TDX memory. Use 'e820_table', rather than 'e820_table_firmware', to honor 'mem' and 'memmap' command lines. Following e820__memblock_setup(), both E820_TYPE_RAM and E820_TYPE_RESERVED_KERN types are treated as TDX memory, and contiguous ranges in the same NUMA node are merged together. One difference is, as mentioned above, x86 legacy PMEMs (E820_TYPE_PRAM) are also always treated as TDX memory. They are RAM underneath, and they could be used as TD guest memory. Always including them as TDX memory also avoids having to modify memory hotplug code to handle adding them as system RAM via the kmem driver.
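The merging behavior described above (contiguous e820 ranges in the same NUMA node merged together, then non-page-aligned edges trimmed, following e820__memblock_setup()) can be modeled with a small standalone sketch. This is a simplified userspace model, not the kernel helper itself; it assumes the input ranges are already sorted by start address, as e820 RAM entries are:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

struct range {
	uint64_t start, end;	/* [start, end) */
	int nid;		/* NUMA node */
};

/*
 * Merge adjacent ranges that touch and live on the same NUMA node, then
 * trim each merged range to page boundaries -- the same two rules the
 * e820 walk described above applies. Returns the number of merged
 * ranges written to @out. A range may become empty (start >= end) after
 * trimming if it was smaller than a page; callers must check for that.
 */
int merge_ranges(const struct range *in, int n, struct range *out)
{
	int i, m = 0;

	for (i = 0; i < n; i++) {
		if (m && out[m - 1].end == in[i].start &&
		    out[m - 1].nid == in[i].nid) {
			out[m - 1].end = in[i].end;	/* extend previous */
			continue;
		}
		out[m++] = in[i];
	}
	/* Trim non-page-aligned edges of each merged range. */
	for (i = 0; i < m; i++) {
		out[i].start = (out[i].start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
		out[i].end &= ~(PAGE_SIZE - 1);
	}
	return m;
}
```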
To begin with, sanity check all memory regions found in e820 are fully covered by any CMR and can be used as TDX memory. Signed-off-by: Kai Huang --- arch/x86/Kconfig | 1 + arch/x86/virt/vmx/tdx.c | 228 +++++++++++++++++++++++++++++++++++++++- 2 files changed, 228 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index f4c5481cca46..2d3f983e3582 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1961,6 +1961,7 @@ config INTEL_TDX_HOST default n depends on CPU_SUP_INTEL depends on X86_64 + select NUMA_KEEP_MEMINFO if NUMA help Intel Trust Domain Extensions (TDX) protects guest VMs from malicious host and certain physical attacks. This option enables necessary TDX diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 1571bf192dde..e5206599f558 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -14,11 +14,13 @@ #include #include #include +#include #include #include #include #include #include +#include #include #include "tdx.h" @@ -591,6 +593,222 @@ static int tdx_get_sysinfo(void) return sanitize_cmrs(tdx_cmr_array, cmr_num); } +/* Check whether one e820 entry is RAM and could be used as TDX memory */ +static bool e820_entry_is_ram(struct e820_entry *entry) +{ + /* + * Besides E820_TYPE_RAM, E820_TYPE_RESERVED_KERN type entries + * are also treated as TDX memory as they are also added to + * memblock.memory in e820__memblock_setup(). + * + * E820_TYPE_SOFT_RESERVED type entries are excluded as they are + * marked as reserved and are not later freed to page allocator + * (only part of kernel image, initrd, etc are freed to page + * allocator). + * + * Also unconditionally treat x86 legacy PMEMs (E820_TYPE_PRAM) + * as TDX memory since they are RAM underneath, and could be used + * as TD guest memory. 
+ */ + return (entry->type == E820_TYPE_RAM) || + (entry->type == E820_TYPE_RESERVED_KERN) || + (entry->type == E820_TYPE_PRAM); +} + +/* + * The low memory below 1MB is not covered by CMRs on some TDX platforms. + * In practice, this range cannot be used for guest memory because it is + * not managed by the page allocator due to boot-time reservation. Just + * skip the low 1MB so this range won't be treated as TDX memory. + * + * Return true if the e820 entry is completely skipped, in which case + * caller should ignore this entry. Otherwise the actual memory range + * after skipping the low 1MB is returned via @start and @end. + */ +static bool e820_entry_skip_lowmem(struct e820_entry *entry, u64 *start, + u64 *end) +{ + u64 _start = entry->addr; + u64 _end = entry->addr + entry->size; + + if (_start < SZ_1M) + _start = SZ_1M; + + *start = _start; + *end = _end; + + return _start >= _end; +} + +/* + * Trim away non-page-aligned memory at the beginning and the end for a + * given region. Return true when there are still pages remaining after + * trimming, and the trimmed region is returned via @start and @end. + */ +static bool e820_entry_trim(u64 *start, u64 *end) +{ + u64 s, e; + + s = round_up(*start, PAGE_SIZE); + e = round_down(*end, PAGE_SIZE); + + if (s >= e) + return false; + + *start = s; + *end = e; + + return true; +} + +/* + * Get the next memory region (excluding low 1MB) in e820. @idx points + * to the entry to start to walk with. Multiple memory regions in the + * same NUMA node that are contiguous are merged together (following + * e820__memblock_setup()). The merged range is returned via @start and + * @end. After return, @idx points to the next entry of the last RAM + * entry that has been walked, or table->nr_entries (indicating all + * entries in the e820 table have been walked). 
+ */ +static void e820_next_mem(struct e820_table *table, int *idx, u64 *start, + u64 *end) +{ + u64 rs, re; + int rnid, i; + +again: + rs = re = 0; + for (i = *idx; i < table->nr_entries; i++) { + struct e820_entry *entry = &table->entries[i]; + u64 s, e; + int nid; + + if (!e820_entry_is_ram(entry)) + continue; + + if (e820_entry_skip_lowmem(entry, &s, &e)) + continue; + + /* + * Found the first RAM entry. Record it and keep + * looping to find other RAM entries that can be + * merged. + */ + if (!rs) { + rs = s; + re = e; + rnid = phys_to_target_node(rs); + if (WARN_ON_ONCE(rnid == NUMA_NO_NODE)) + rnid = 0; + continue; + } + + /* + * Try to merge with previous RAM entry. E820 entries + * are not necessarily page aligned. For instance, the + * setup_data elements in boot_params are marked as + * E820_TYPE_RESERVED_KERN, and they may not be page + * aligned. In e820__memblock_setup() all adjacent + * memory regions within the same NUMA node are merged + * into a single one, and the non-page-aligned parts (at + * the beginning and the end) are trimmed. Follow the + * same rule here. + */ + nid = phys_to_target_node(s); + if (WARN_ON_ONCE(nid == NUMA_NO_NODE)) + nid = 0; + if ((nid == rnid) && (s == re)) { + /* Merge with previous range and update the end */ + re = e; + continue; + } + + /* + * Stop if current entry cannot be merged with previous + * one (or more) entries. + */ + break; + } + + /* + * @i is either the RAM entry that cannot be merged with previous + * one (or more) entries, or table->nr_entries. + */ + *idx = i; + /* + * Trim non-page-aligned parts of [@rs, @re), which is either a + * valid memory region, or empty. If there's nothing left after + * trimming and there are still entries that have not been + * walked, continue to walk. + */ + if (!e820_entry_trim(&rs, &re) && i < table->nr_entries) + goto again; + + *start = rs; + *end = re; +} + +/* + * Helper to loop all e820 RAM entries with low 1MB excluded + * in a given e820 table.
+ */ +#define _e820_for_each_mem(_table, _i, _start, _end) \ + for ((_i) = 0, e820_next_mem((_table), &(_i), &(_start), &(_end)); \ + (_start) < (_end); \ + e820_next_mem((_table), &(_i), &(_start), &(_end))) + +/* + * Helper to loop all e820 RAM entries with low 1MB excluded + * in kernel modified 'e820_table' to honor 'mem' and 'memmap' kernel + * command lines. + */ +#define e820_for_each_mem(_i, _start, _end) \ + _e820_for_each_mem(e820_table, _i, _start, _end) + +/* Check whether first range is the subrange of the second */ +static bool is_subrange(u64 r1_start, u64 r1_end, u64 r2_start, u64 r2_end) +{ + return (r1_start >= r2_start && r1_end <= r2_end) ? true : false; +} + +/* Check whether address range is covered by any CMR or not. */ +static bool range_covered_by_cmr(struct cmr_info *cmr_array, int cmr_num, + u64 start, u64 end) +{ + int i; + + for (i = 0; i < cmr_num; i++) { + struct cmr_info *cmr = &cmr_array[i]; + + if (is_subrange(start, end, cmr->base, cmr->base + cmr->size)) + return true; + } + + return false; +} + +/* Sanity check whether all e820 RAM entries are fully covered by CMRs. */ +static int e820_check_against_cmrs(void) +{ + u64 start, end; + int i; + + /* + * Loop over e820_table to find all RAM entries and check + * whether they are all fully covered by any CMR. 
+ */ + e820_for_each_mem(i, start, end) { + if (!range_covered_by_cmr(tdx_cmr_array, tdx_cmr_num, + start, end)) { + pr_err("[0x%llx, 0x%llx) is not fully convertible memory\n", + start, end); + return -EFAULT; + } + } + + return 0; +} + static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) { int i; @@ -606,8 +824,16 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { + int ret; + + ret = e820_check_against_cmrs(); + if (ret) + goto err; + /* Return -EFAULT until constructing TDMRs is done */ - return -EFAULT; + ret = -EFAULT; +err: + return ret; } static int init_tdx_module(void)
From patchwork Sun Mar 13 10:49:52 2022 X-Patchwork-Id: 12779164 From: Kai Huang Subject: [PATCH v2 12/21] x86/virt/tdx: Create TDMRs to cover all system RAM Date: Sun, 13 Mar 2022 23:49:52 +1300
The kernel configures TDX usable memory regions to the TDX module via an array of "TD Memory Region" (TDMR). Each TDMR entry (TDMR_INFO) contains the information of the base/size of a memory region, the base/size of the associated Physical Address Metadata Table (PAMT) and a list of reserved areas in the region. Create a number of TDMRs according to the verified e820 RAM entries. As the first step only set up the base/size information for each TDMR.
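The 1G alignment rule described in the following patch means a TDMR's bounds are derived by aligning a RAM entry's bounds outward to 1G boundaries. A standalone sketch of that computation, using the same macro semantics as the patch (the `tdmr_window` helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define TDMR_ALIGNMENT		(1ULL << 30)	/* TDMRs are 1G aligned/sized */
#define TDMR_ALIGN_DOWN(a)	((a) & ~(TDMR_ALIGNMENT - 1))
#define TDMR_ALIGN_UP(a)	(((a) + TDMR_ALIGNMENT - 1) & ~(TDMR_ALIGNMENT - 1))

/*
 * Given a RAM entry [start, end), compute the smallest 1G-aligned
 * window that covers it -- the candidate TDMR bounds, before checking
 * how much is already covered by the previous TDMR.
 */
void tdmr_window(uint64_t start, uint64_t end,
		 uint64_t *t_start, uint64_t *t_end)
{
	*t_start = TDMR_ALIGN_DOWN(start);
	*t_end = TDMR_ALIGN_UP(end);
}
```

Note how a RAM entry that starts just past a 1G boundary is aligned down to that boundary, so one TDMR can end up covering several e820 entries.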
TDMR must be 1G aligned and the size must be in 1G granularity. This implies that one TDMR could cover multiple e820 RAM entries. If a RAM entry spans the 1GB boundary and the former part is already covered by the previous TDMR, just create a new TDMR for the latter part. TDX only supports a limited number of TDMRs (currently 64). Abort the TDMR construction process when the number of TDMRs exceeds this limitation. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 138 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 138 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index e5206599f558..1939b64d23e8 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -54,6 +54,18 @@ ((u32)(((_keyid_part) & 0xffffffffull) + 1)) #define TDX_KEYID_NUM(_keyid_part) ((u32)((_keyid_part) >> 32)) +/* TDMR must be 1gb aligned */ +#define TDMR_ALIGNMENT BIT_ULL(30) +#define TDMR_PFN_ALIGNMENT (TDMR_ALIGNMENT >> PAGE_SHIFT) + +/* Align up and down the address to TDMR boundary */ +#define TDMR_ALIGN_DOWN(_addr) ALIGN_DOWN((_addr), TDMR_ALIGNMENT) +#define TDMR_ALIGN_UP(_addr) ALIGN((_addr), TDMR_ALIGNMENT) + +/* TDMR's start and end address */ +#define TDMR_START(_tdmr) ((_tdmr)->base) +#define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size) + /* * TDX module status during initialization */ @@ -809,6 +821,44 @@ static int e820_check_against_cmrs(void) return 0; } +/* The starting offset of reserved areas within TDMR_INFO */ +#define TDMR_RSVD_START 64 + +static struct tdmr_info *__alloc_tdmr(void) +{ + int tdmr_sz; + + /* + * TDMR_INFO's actual size depends on maximum number of reserved + * areas that one TDMR supports. + */ + tdmr_sz = TDMR_RSVD_START + tdx_sysinfo.max_reserved_per_tdmr * + sizeof(struct tdmr_reserved_area); + + /* + * TDX requires TDMR_INFO to be 512 aligned. Always align up + * TDMR_INFO size to 512 so the memory allocated via kzalloc() + * can meet the alignment requirement. 
+ */ + tdmr_sz = ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT); + + return kzalloc(tdmr_sz, GFP_KERNEL); +} + +/* Create a new TDMR at given index in the TDMR array */ +static struct tdmr_info *alloc_tdmr(struct tdmr_info **tdmr_array, int idx) +{ + struct tdmr_info *tdmr; + + if (WARN_ON_ONCE(tdmr_array[idx])) + return NULL; + + tdmr = __alloc_tdmr(); + tdmr_array[idx] = tdmr; + + return tdmr; +} + static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) { int i; @@ -822,6 +872,89 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num) } } +/* + * Create TDMRs to cover all RAM entries in e820_table. The created + * TDMRs are saved to @tdmr_array and @tdmr_num is set to the actual + * number of TDMRs. All entries in @tdmr_array must be initially NULL. + */ +static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) +{ + struct tdmr_info *tdmr; + u64 start, end; + int i, tdmr_idx; + int ret = 0; + + tdmr_idx = 0; + tdmr = alloc_tdmr(tdmr_array, 0); + if (!tdmr) + return -ENOMEM; + /* + * Loop over all RAM entries in e820 and create TDMRs to cover + * them. To keep it simple, always try to use one TDMR to cover + * one RAM entry. + */ + e820_for_each_mem(e820_table, i, start, end) { + start = TDMR_ALIGN_DOWN(start); + end = TDMR_ALIGN_UP(end); + + /* + * If the current TDMR's size hasn't been initialized, it + * is a new allocated TDMR to cover the new RAM entry. + * Otherwise the current TDMR already covers the previous + * RAM entry. In the latter case, check whether the + * current RAM entry has been fully or partially covered + * by the current TDMR, since TDMR is 1G aligned. + */ + if (tdmr->size) { + /* + * Loop to next RAM entry if the current entry + * is already fully covered by the current TDMR. + */ + if (end <= TDMR_END(tdmr)) + continue; + + /* + * If part of current RAM entry has already been + * covered by current TDMR, skip the already + * covered part. 
+ */ + if (start < TDMR_END(tdmr)) + start = TDMR_END(tdmr); + + /* + * Create a new TDMR to cover the current RAM + * entry, or the remaining part of it. + */ + tdmr_idx++; + if (tdmr_idx >= tdx_sysinfo.max_tdmrs) { + ret = -E2BIG; + goto err; + } + tdmr = alloc_tdmr(tdmr_array, tdmr_idx); + if (!tdmr) { + ret = -ENOMEM; + goto err; + } + } + + tdmr->base = start; + tdmr->size = end - start; + } + + /* @tdmr_idx is always the index of last valid TDMR. */ + *tdmr_num = tdmr_idx + 1; + + return 0; +err: + /* + * Clean up already allocated TDMRs in case of error. @tdmr_idx + * indicates the last TDMR that wasn't created successfully, + * therefore only needs to free @tdmr_idx TDMRs. + */ + free_tdmrs(tdmr_array, tdmr_idx); + return ret; +} + static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { int ret; @@ -830,8 +963,13 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) if (ret) goto err; + ret = create_tdmrs(tdmr_array, tdmr_num); + if (ret) + goto err; + /* Return -EFAULT until constructing TDMRs is done */ ret = -EFAULT; + free_tdmrs(tdmr_array, *tdmr_num); err: return ret; } From patchwork Sun Mar 13 10:49:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 760C4C433F5 for ; Sun, 13 Mar 2022 10:51:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232768AbiCMKwf (ORCPT ); Sun, 13 Mar 2022 06:52:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44574 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233381AbiCMKwP (ORCPT ); Sun, 13 Mar 2022 06:52:15 -0400 Received: from mga04.intel.com 
From: Kai Huang Subject: [PATCH v2 13/21] x86/virt/tdx: Allocate and set up PAMTs for TDMRs Date: Sun, 13 Mar 2022 23:49:53 +1300 In order to provide crypto protection to
guests, the TDX module uses additional metadata to record things like which guest "owns" a given page of memory. This metadata, referred to as the Physical Address Metadata Table (PAMT), essentially serves as the 'struct page' for the TDX module. PAMTs are not reserved by hardware upfront. They must be allocated by the kernel and then given to the TDX module. TDX supports 3 page sizes: 4K, 2M, and 1G. Each "TD Memory Region" (TDMR) has 3 PAMTs to track the 3 supported page sizes respectively. Each PAMT must be a physically contiguous area from the Convertible Memory Regions (CMR). However, the PAMTs which track pages in one TDMR do not need to reside within that TDMR, but can be anywhere in CMRs. If one PAMT overlaps with any TDMR, the overlapping part must be reported as a reserved area in that particular TDMR. Use alloc_contig_pages() since a PAMT must be a physically contiguous area and may be large (~1/256th of the size of the given TDMR). The current version of TDX supports at most 16 reserved areas per TDMR to cover both PAMTs and potential memory holes within the TDMR. If many PAMTs are allocated within a single TDMR, 16 reserved areas may not be sufficient to cover all of them. Adopt the following policies when allocating PAMTs for a given TDMR: - Allocate the three PAMTs of the TDMR in one contiguous chunk to minimize the total number of reserved areas consumed for PAMTs. - Try to allocate the PAMTs from the local node of the TDMR first, for better NUMA locality.
Signed-off-by: Kai Huang --- arch/x86/Kconfig | 1 + arch/x86/virt/vmx/tdx.c | 167 +++++++++++++++++++++++++++++++++++++++- 2 files changed, 167 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 2d3f983e3582..193ba089d353 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1962,6 +1962,7 @@ config INTEL_TDX_HOST depends on CPU_SUP_INTEL depends on X86_64 select NUMA_KEEP_MEMINFO if NUMA + depends on CONTIG_ALLOC help Intel Trust Domain Extensions (TDX) protects guest VMs from malicious host and certain physical attacks. This option enables necessary TDX diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index 1939b64d23e8..c58c99b94c72 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include "tdx.h" @@ -66,6 +67,16 @@ #define TDMR_START(_tdmr) ((_tdmr)->base) #define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size) +/* Page sizes supported by TDX */ +enum tdx_page_sz { + TDX_PG_4K = 0, + TDX_PG_2M, + TDX_PG_1G, + TDX_PG_MAX, +}; + +#define TDX_HPAGE_SHIFT 9 + /* * TDX module status during initialization */ @@ -893,7 +904,7 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) * them. To keep it simple, always try to use one TDMR to cover * one RAM entry. 
*/ - e820_for_each_mem(e820_table, i, start, end) { + e820_for_each_mem(i, start, end) { start = TDMR_ALIGN_DOWN(start); end = TDMR_ALIGN_UP(end); @@ -955,6 +966,148 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) return ret; } +/* Calculate PAMT size given a TDMR and a page size */ +static unsigned long __tdmr_get_pamt_sz(struct tdmr_info *tdmr, + enum tdx_page_sz pgsz) +{ + unsigned long pamt_sz; + + pamt_sz = (tdmr->size >> ((TDX_HPAGE_SHIFT * pgsz) + PAGE_SHIFT)) * + tdx_sysinfo.pamt_entry_size; + /* PAMT size must be 4K aligned */ + pamt_sz = ALIGN(pamt_sz, PAGE_SIZE); + + return pamt_sz; +} + +/* Calculate the size of all PAMTs for a TDMR */ +static unsigned long tdmr_get_pamt_sz(struct tdmr_info *tdmr) +{ + enum tdx_page_sz pgsz; + unsigned long pamt_sz; + + pamt_sz = 0; + for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++) + pamt_sz += __tdmr_get_pamt_sz(tdmr, pgsz); + + return pamt_sz; +} + +/* + * Locate the NUMA node containing the start of the given TDMR's first + * RAM entry. The given TDMR may also cover memory in other NUMA nodes. + */ +static int tdmr_get_nid(struct tdmr_info *tdmr) +{ + u64 start, end; + int i; + + /* Find the first RAM entry covered by the TDMR */ + e820_for_each_mem(i, start, end) + if (end > TDMR_START(tdmr)) + break; + + /* + * One TDMR must cover at least one (or partial) RAM entry, + * otherwise it is kernel bug. WARN_ON() in this case. + */ + if (WARN_ON_ONCE((start >= end) || start >= TDMR_END(tdmr))) + return 0; + + /* + * The first RAM entry may be partially covered by the previous + * TDMR. In this case, use TDMR's start to find the NUMA node. 
+ */ + if (start < TDMR_START(tdmr)) + start = TDMR_START(tdmr); + + return phys_to_target_node(start); +} + +static int tdmr_setup_pamt(struct tdmr_info *tdmr) +{ + unsigned long tdmr_pamt_base, pamt_base[TDX_PG_MAX]; + unsigned long pamt_sz[TDX_PG_MAX]; + unsigned long pamt_npages; + struct page *pamt; + enum tdx_page_sz pgsz; + int nid; + + /* + * Allocate one chunk of physically contiguous memory for all + * PAMTs. This helps minimize the PAMT's use of reserved areas + * in overlapped TDMRs. + */ + nid = tdmr_get_nid(tdmr); + pamt_npages = tdmr_get_pamt_sz(tdmr) >> PAGE_SHIFT; + pamt = alloc_contig_pages(pamt_npages, GFP_KERNEL, nid, + &node_online_map); + if (!pamt) + return -ENOMEM; + + /* Calculate PAMT base and size for all supported page sizes. */ + tdmr_pamt_base = page_to_pfn(pamt) << PAGE_SHIFT; + for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++) { + unsigned long sz = __tdmr_get_pamt_sz(tdmr, pgsz); + + pamt_base[pgsz] = tdmr_pamt_base; + pamt_sz[pgsz] = sz; + + tdmr_pamt_base += sz; + } + + tdmr->pamt_4k_base = pamt_base[TDX_PG_4K]; + tdmr->pamt_4k_size = pamt_sz[TDX_PG_4K]; + tdmr->pamt_2m_base = pamt_base[TDX_PG_2M]; + tdmr->pamt_2m_size = pamt_sz[TDX_PG_2M]; + tdmr->pamt_1g_base = pamt_base[TDX_PG_1G]; + tdmr->pamt_1g_size = pamt_sz[TDX_PG_1G]; + + return 0; +} + +static void tdmr_free_pamt(struct tdmr_info *tdmr) +{ + unsigned long pamt_pfn, pamt_sz; + + pamt_pfn = tdmr->pamt_4k_base >> PAGE_SHIFT; + pamt_sz = tdmr->pamt_4k_size + tdmr->pamt_2m_size + tdmr->pamt_1g_size; + + /* Do nothing if PAMT hasn't been allocated for this TDMR */ + if (!pamt_sz) + return; + + if (WARN_ON(!pamt_pfn)) + return; + + free_contig_range(pamt_pfn, pamt_sz >> PAGE_SHIFT); +} + +static void tdmrs_free_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) +{ + int i; + + for (i = 0; i < tdmr_num; i++) + tdmr_free_pamt(tdmr_array[i]); +} + +/* Allocate and set up PAMTs for all TDMRs */ +static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) +{ + 
int i, ret; + + for (i = 0; i < tdmr_num; i++) { + ret = tdmr_setup_pamt(tdmr_array[i]); + if (ret) + goto err; + } + + return 0; +err: + tdmrs_free_pamt_all(tdmr_array, tdmr_num); + return -ENOMEM; +} + static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { int ret; @@ -967,8 +1120,14 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) if (ret) goto err; + ret = tdmrs_setup_pamt_all(tdmr_array, *tdmr_num); + if (ret) + goto err_free_tdmrs; + /* Return -EFAULT until constructing TDMRs is done */ ret = -EFAULT; + tdmrs_free_pamt_all(tdmr_array, *tdmr_num); +err_free_tdmrs: free_tdmrs(tdmr_array, *tdmr_num); err: return ret; @@ -1018,6 +1177,12 @@ static int init_tdx_module(void) * initialization are done. */ ret = -EFAULT; + /* + * Free PAMTs allocated in construct_tdmrs() when TDX module + * initialization fails. + */ + if (ret) + tdmrs_free_pamt_all(tdmr_array, tdmr_num); out_free_tdmrs: /* * TDMRs are only used during initializing TDX module. 
Always
From patchwork Sun Mar 13 10:49:54 2022 X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779166
From: Kai Huang Subject: [PATCH v2 14/21] x86/virt/tdx: Set up reserved areas for all TDMRs Date: Sun, 13 Mar 2022 23:49:54 +1300 Message-Id: <0927dacd2476c96fd8c24f9a777162cb1a252401.1647167475.git.kai.huang@intel.com> As the last step of constructing TDMRs, create reserved area information for the memory region holes in each TDMR. If any PAMT (or part of it) resides within a particular TDMR, also mark it as reserved. All reserved areas in each TDMR must be in address ascending order, as required by the TDX architecture.
Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 148 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 146 insertions(+), 2 deletions(-) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index c58c99b94c72..f6345af29414 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -1108,6 +1109,145 @@ static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num) return -ENOMEM; } +static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, + u64 addr, u64 size) +{ + struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas; + int idx = *p_idx; + + /* Reserved area must be 4K aligned in offset and size */ + if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK)) + return -EINVAL; + + /* Cannot exceed maximum reserved areas supported by TDX */ + if (idx >= tdx_sysinfo.max_reserved_per_tdmr) + return -E2BIG; + + rsvd_areas[idx].offset = addr - tdmr->base; + rsvd_areas[idx].size = size; + + *p_idx = idx + 1; + + return 0; +} + +/* Compare function called by sort() for TDMR reserved areas */ +static int rsvd_area_cmp_func(const void *a, const void *b) +{ + struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a; + struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b; + + if (r1->offset + r1->size <= r2->offset) + return -1; + if (r1->offset >= r2->offset + r2->size) + return 1; + + /* Reserved areas cannot overlap. Caller should guarantee. 
*/ + WARN_ON(1); + return -1; +} + +/* Set up reserved areas for a TDMR, including memory holes and PAMTs */ +static int tdmr_setup_rsvd_areas(struct tdmr_info *tdmr, + struct tdmr_info **tdmr_array, + int tdmr_num) +{ + u64 start, end, prev_end; + int rsvd_idx, i, ret = 0; + + /* Mark holes between e820 RAM entries as reserved */ + rsvd_idx = 0; + prev_end = TDMR_START(tdmr); + e820_for_each_mem(i, start, end) { + /* Break if this entry is after the TDMR */ + if (start >= TDMR_END(tdmr)) + break; + + /* Exclude entries before this TDMR */ + if (end < TDMR_START(tdmr)) + continue; + + /* + * Skip if no hole exists before this entry. "<=" is + * used because one e820 entry might span two TDMRs. + * In that case the start address of this entry is + * smaller then the start address of the second TDMR. + */ + if (start <= prev_end) { + prev_end = end; + continue; + } + + /* Add the hole before this e820 entry */ + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end, + start - prev_end); + if (ret) + return ret; + + prev_end = end; + } + + /* Add the hole after the last RAM entry if it exists. */ + if (prev_end < TDMR_END(tdmr)) { + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end, + TDMR_END(tdmr) - prev_end); + if (ret) + return ret; + } + + /* + * Walk over all TDMRs to find out whether any PAMT falls into + * the given TDMR. If yes, mark it as reserved too. 
+ */ + for (i = 0; i < tdmr_num; i++) { + struct tdmr_info *tmp = tdmr_array[i]; + u64 pamt_start, pamt_end; + + pamt_start = tmp->pamt_4k_base; + pamt_end = pamt_start + tmp->pamt_4k_size + + tmp->pamt_2m_size + tmp->pamt_1g_size; + + /* Skip PAMTs outside of the given TDMR */ + if ((pamt_end <= TDMR_START(tdmr)) || + (pamt_start >= TDMR_END(tdmr))) + continue; + + /* Only mark the part within the TDMR as reserved */ + if (pamt_start < TDMR_START(tdmr)) + pamt_start = TDMR_START(tdmr); + if (pamt_end > TDMR_END(tdmr)) + pamt_end = TDMR_END(tdmr); + + ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, pamt_start, + pamt_end - pamt_start); + if (ret) + return ret; + } + + /* TDX requires reserved areas listed in address ascending order */ + sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area), + rsvd_area_cmp_func, NULL); + + return 0; +} + +static int tdmrs_setup_rsvd_areas_all(struct tdmr_info **tdmr_array, + int tdmr_num) +{ + int i; + + for (i = 0; i < tdmr_num; i++) { + int ret; + + ret = tdmr_setup_rsvd_areas(tdmr_array[i], tdmr_array, + tdmr_num); + if (ret) + return ret; + } + + return 0; +} + static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) { int ret; @@ -1124,8 +1264,12 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) if (ret) goto err_free_tdmrs; - /* Return -EFAULT until constructing TDMRs is done */ - ret = -EFAULT; + ret = tdmrs_setup_rsvd_areas_all(tdmr_array, *tdmr_num); + if (ret) + goto err_free_pamts; + + return 0; +err_free_pamts: tdmrs_free_pamt_all(tdmr_array, *tdmr_num); err_free_tdmrs: free_tdmrs(tdmr_array, *tdmr_num); From patchwork Sun Mar 13 10:49:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
[23.128.96.18]) From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com,
kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 15/21] x86/virt/tdx: Reserve TDX module global KeyID Date: Sun, 13 Mar 2022 23:49:55 +1300 Message-Id: <9166c08562dd141d4fce552725664a245cdbd34c.1647167475.git.kai.huang@intel.com> TDX module initialization requires one TDX private KeyID to be used as the global KeyID to crypto-protect TDX metadata. The global KeyID is configured to the TDX module along with the TDMRs. Just reserve the first TDX private KeyID as the global KeyID. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index f6345af29414..f70eee7df0c1 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -112,6 +112,9 @@ static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMEN static int tdx_cmr_num; static struct tdsysinfo_struct tdx_sysinfo; +/* TDX global KeyID to protect TDX metadata */ +static u32 tdx_global_keyid; + static bool __seamrr_enabled(void) { return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS; @@ -1316,6 +1319,12 @@ static int init_tdx_module(void) if (ret) goto out_free_tdmrs; + /* + * Reserve the first TDX KeyID as global KeyID to protect + * TDX module metadata. + */ + tdx_global_keyid = tdx_keyid_start; + /* * Return -EFAULT until all steps of TDX module * initialization are done.
From patchwork Sun Mar 13 10:49:56 2022 X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779173 Received:
from mvideche-mobl1.amr.corp.intel.com (HELO khuang2-desk.gar.corp.intel.com) ([10.251.130.249]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Mar 2022 03:50:59 -0700 From: Kai Huang To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: dave.hansen@intel.com, seanjc@google.com, pbonzini@redhat.com, kirill.shutemov@linux.intel.com, sathyanarayanan.kuppuswamy@linux.intel.com, peterz@infradead.org, tony.luck@intel.com, ak@linux.intel.com, dan.j.williams@intel.com, isaku.yamahata@intel.com, kai.huang@intel.com Subject: [PATCH v2 16/21] x86/virt/tdx: Configure TDX module with TDMRs and global KeyID Date: Sun, 13 Mar 2022 23:49:56 +1300 Message-Id: <52f8123ee1028549e1207b1582f00737c42ef2c2.1647167475.git.kai.huang@intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org After the TDX usable memory regions are constructed in an array of TDMRs and the global KeyID is reserved, configure them to the TDX module. The configuration is done via TDH.SYS.CONFIG, which is one call and can be done on any logical cpu. Signed-off-by: Kai Huang --- arch/x86/virt/vmx/tdx.c | 42 +++++++++++++++++++++++++++++++++++++++++ arch/x86/virt/vmx/tdx.h | 2 ++ 2 files changed, 44 insertions(+) diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c index f70eee7df0c1..e03dc3e420db 100644 --- a/arch/x86/virt/vmx/tdx.c +++ b/arch/x86/virt/vmx/tdx.c @@ -1280,6 +1280,42 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num) return ret; } +static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num, + u64 global_keyid) +{ + u64 *tdmr_pa_array; + int i, array_sz; + int ret; + + /* + * TDMR_INFO entries are configured to the TDX module via an + * array of the physical address of each TDMR_INFO. TDX requires + * the array itself must be 512 aligned. 
Round up the array size + * to 512 aligned so the buffer allocated by kzalloc() meets the + * alignment requirement. + */ + array_sz = ALIGN(tdmr_num * sizeof(u64), TDMR_INFO_PA_ARRAY_ALIGNMENT); + tdmr_pa_array = kzalloc(array_sz, GFP_KERNEL); + if (!tdmr_pa_array) + return -ENOMEM; + + for (i = 0; i < tdmr_num; i++) + tdmr_pa_array[i] = __pa(tdmr_array[i]); + + /* + * TDH.SYS.CONFIG fails when TDH.SYS.LP.INIT is not done on all + * BIOS-enabled cpus. tdx_init() only disables CPU hotplug but + * doesn't do early check whether all BIOS-enabled cpus are + * online, so TDH.SYS.CONFIG can fail here. + */ + ret = seamcall(TDH_SYS_CONFIG, __pa(tdmr_pa_array), tdmr_num, + global_keyid, 0, NULL, NULL); + /* Free the array as it is not required any more. */ + kfree(tdmr_pa_array); + + return ret; +} + static int init_tdx_module(void) { struct tdmr_info **tdmr_array; @@ -1325,11 +1361,17 @@ static int init_tdx_module(void) */ tdx_global_keyid = tdx_keyid_start; + /* Config the TDX module with TDMRs and global KeyID */ + ret = config_tdx_module(tdmr_array, tdmr_num, tdx_global_keyid); + if (ret) + goto out_free_pamts; + /* * Return -EFAULT until all steps of TDX module * initialization are done. */ ret = -EFAULT; +out_free_pamts: /* * Free PAMTs allocated in construct_tdmrs() when TDX module * initialization fails. 
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h index 05bf9fe6bd00..d8e2800397af 100644 --- a/arch/x86/virt/vmx/tdx.h +++ b/arch/x86/virt/vmx/tdx.h @@ -95,6 +95,7 @@ struct tdmr_reserved_area { } __packed; #define TDMR_INFO_ALIGNMENT 512 +#define TDMR_INFO_PA_ARRAY_ALIGNMENT 512 struct tdmr_info { u64 base; @@ -125,6 +126,7 @@ struct tdmr_info { #define TDH_SYS_INIT 33 #define TDH_SYS_LP_INIT 35 #define TDH_SYS_LP_SHUTDOWN 44 +#define TDH_SYS_CONFIG 45 struct tdx_module_output; u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, From patchwork Sun Mar 13 10:49:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Huang, Kai" X-Patchwork-Id: 12779168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07540C433F5 for ; Sun, 13 Mar 2022 10:51:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234344AbiCMKxA (ORCPT ); Sun, 13 Mar 2022 06:53:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44868 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234278AbiCMKws (ORCPT ); Sun, 13 Mar 2022 06:52:48 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A09B712CC1D; Sun, 13 Mar 2022 03:51:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647168672; x=1678704672; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vgFg1Mm5KzCdUKJl8VJiZqAJHtuAoK2zOLziK5r/MyM=; b=eN2LfVIcPl+o7PinVj1+qTP5fGo4DxJ1xtDNj2KjE8fEu1YwMK2ClINw tn4DPwdjjquWtBz+gpUXPslMdvToV+Q6T8wtia20059fOrYChTlFsrTrr 
From patchwork Sun Mar 13 10:49:57 2022
From: Kai Huang
Subject: [PATCH v2 17/21] x86/virt/tdx: Configure global KeyID on all packages
Date: Sun, 13 Mar 2022 23:49:57 +1300

Before the TDX module can use the global KeyID to access TDX metadata, the key of the global KeyID must be configured on all physical packages via TDH.SYS.KEY.CONFIG. This SEAMCALL cannot run concurrently on different cpus since it exclusively acquires the TDX module.

Implement a helper to run a SEAMCALL on one (any) cpu for each package in a serialized way, and use it to run TDH.SYS.KEY.CONFIG on all packages.

The TDX module uses the global KeyID to initialize its metadata (PAMTs). Before the TDX module can do that, all cachelines of the PAMTs must be flushed.
Otherwise, they may silently corrupt the PAMTs that the TDX module later initializes. Use wbinvd to flush the cache, as the PAMTs can potentially be large (~1/256th of system RAM).

Flush the cache before configuring the global KeyID on all packages, as suggested by the TDX specification. In practice, the current generation of TDX doesn't use the global KeyID in TDH.SYS.KEY.CONFIG, so the flush could also be done after the global KeyID has been configured on all packages. But a future generation of TDX may change this behaviour, so just follow the TDX specification's suggestion and flush the cache before configuring the global KeyID.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++++++-
 arch/x86/virt/vmx/tdx.h |  1 +
 2 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index e03dc3e420db..39b1b7d0417d 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "tdx.h"
@@ -398,6 +399,47 @@ static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
 	return atomic_read(&sc->err);
 }

+/*
+ * Call the SEAMCALL on one (any) cpu for each physical package in
+ * a serialized way. For serialized calls 'seamcall_ctx::err' doesn't
+ * have to be atomic, but for simplicity just reuse it instead of
+ * adding a new field.
+ *
+ * Return -ENXIO if the IPI SEAMCALL wasn't run on any cpu, -EFAULT
+ * when the SEAMCALL fails, or -EPERM when the cpu the SEAMCALL runs
+ * on is not in VMX operation. In case of -EFAULT, the error code
+ * of the SEAMCALL is in 'struct seamcall_ctx::seamcall_ret'.
+ */
+static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc)
+{
+	cpumask_var_t packages;
+	int cpu, ret = 0;
+
+	if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
+		return -ENOMEM;
+
+	for_each_online_cpu(cpu) {
+		if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
+					packages))
+			continue;
+
+		ret = smp_call_function_single(cpu, seamcall_smp_call_function,
+				sc, true);
+		if (ret)
+			break;
+
+		/*
+		 * Doesn't have to use atomic_read(), but it doesn't
+		 * hurt either.
+		 */
+		ret = atomic_read(&sc->err);
+		if (ret)
+			break;
+	}
+
+	/* Don't leak the cpumask on either the error or the success path. */
+	free_cpumask_var(packages);
+	return ret;
+}
+
 static inline bool p_seamldr_ready(void)
 {
 	return !!p_seamldr_info.p_seamldr_ready;
@@ -1316,6 +1358,18 @@ static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num,
 	return ret;
 }

+static int config_global_keyid(u64 global_keyid)
+{
+	struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG };
+
+	/*
+	 * TDH.SYS.KEY.CONFIG may fail with an entropy error (which is
+	 * a recoverable error). Assume this is exceedingly rare and
+	 * just return the error if encountered instead of retrying.
+	 */
+	return seamcall_on_each_package_serialized(&sc);
+}
+
 static int init_tdx_module(void)
 {
 	struct tdmr_info **tdmr_array;
@@ -1366,6 +1420,37 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_free_pamts;

+	/*
+	 * The same physical address associated with different KeyIDs
+	 * has separate cachelines. Before using the new KeyID to access
+	 * some memory, the cachelines associated with the old KeyID must
+	 * be flushed, otherwise they may later silently corrupt the data
+	 * written with the new KeyID. After cachelines associated with
+	 * the old KeyID are flushed, CPU speculative fetch using the old
+	 * KeyID is OK since the prefetched cachelines won't be consumed
+	 * by the CPU core.
+	 *
+	 * The TDX module initializes PAMTs using the global KeyID to
+	 * crypto protect them from the malicious host.
 Before that, the PAMTs
+	 * are used by the kernel (with KeyID 0), and the cachelines
+	 * associated with the PAMTs must be flushed. Given the PAMTs
+	 * are potentially large (~1/256th of system RAM), just use
+	 * WBINVD on all cpus to flush the cache.
+	 *
+	 * In practice, the current generation of TDX doesn't use the
+	 * global KeyID in TDH.SYS.KEY.CONFIG, so the cachelines could
+	 * also be flushed after the global KeyID has been configured
+	 * on all packages. But a future generation of TDX may change
+	 * this, so just follow the TDX spec's suggestion and flush the
+	 * cache before TDH.SYS.KEY.CONFIG.
+	 */
+	wbinvd_on_all_cpus();
+
+	/* Configure the key of the global KeyID on all packages */
+	ret = config_global_keyid(tdx_global_keyid);
+	if (ret)
+		goto out_free_pamts;
+
 	/*
 	 * Return -EFAULT until all steps of TDX module
 	 * initialization are done.
@@ -1376,8 +1461,15 @@ static int init_tdx_module(void)
 	 * Free PAMTs allocated in construct_tdmrs() when TDX module
 	 * initialization fails.
 	 */
-	if (ret)
+	if (ret) {
+		/*
+		 * Part of the PAMTs may already have been initialized
+		 * by the TDX module. Flush the cache before returning
+		 * them to the kernel.
+		 */
+		wbinvd_on_all_cpus();
 		tdmrs_free_pamt_all(tdmr_array, tdmr_num);
+	}
 out_free_tdmrs:
 	/*
	 * TDMRs are only used during initializing the TDX module.
 Always

diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index d8e2800397af..bba8cabea4bb 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -122,6 +122,7 @@ struct tdmr_info {
 /*
  * TDX module SEAMCALL leaf functions
  */
+#define TDH_SYS_KEY_CONFIG	31
 #define TDH_SYS_INFO		32
 #define TDH_SYS_INIT		33
 #define TDH_SYS_LP_INIT		35
From patchwork Sun Mar 13 10:49:58 2022
From: Kai Huang
Subject: [PATCH v2 18/21] x86/virt/tdx: Initialize all TDMRs
Date: Sun, 13 Mar 2022 23:49:58 +1300
Message-Id: <3af7ece5cf86dfe83f755b7a7c541d8f691e4133.1647167475.git.kai.huang@intel.com>

Initialize TDMRs via TDH.SYS.TDMR.INIT as the last step to complete the TDX initialization.

All TDMRs need to be initialized using the TDH.SYS.TDMR.INIT SEAMCALL before the TDX memory can be used to run any TD guest. The SEAMCALL internally uses the global KeyID to initialize the PAMTs in order to crypto protect them from the malicious host kernel. TDH.SYS.TDMR.INIT can be done on any cpu.

The time needed to initialize a TDMR is proportional to its size. To avoid a long latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes an (implementation-specific) subset of PAMT entries of one TDMR in one invocation. The caller is responsible for calling TDH.SYS.TDMR.INIT iteratively until all PAMT entries of the requested TDMR are initialized.
The current implementation initializes TDMRs one by one. It takes ~100ms on a 2-socket machine with 2.2GHz CPUs and 64GB memory when the system is idle. Each TDH.SYS.TDMR.INIT takes ~7us on average.

TDX does allow different TDMRs to be initialized concurrently on multiple CPUs. This parallel scheme could be introduced later when the total initialization time becomes a real concern, e.g. on a platform with a much bigger memory size.

Signed-off-by: Kai Huang
---
 arch/x86/virt/vmx/tdx.c | 75 ++++++++++++++++++++++++++++++++++++++---
 arch/x86/virt/vmx/tdx.h |  1 +
 2 files changed, 71 insertions(+), 5 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 39b1b7d0417d..f2b9c98191ed 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1370,6 +1370,65 @@ static int config_global_keyid(u64 global_keyid)
 	return seamcall_on_each_package_serialized(&sc);
 }

+/* Initialize one TDMR */
+static int init_tdmr(struct tdmr_info *tdmr)
+{
+	u64 next;
+
+	/*
+	 * Initializing PAMT entries might be time-consuming (in
+	 * proportion to the size of the requested TDMR). To avoid long
+	 * latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes
+	 * an (implementation-defined) subset of PAMT entries in one
+	 * invocation.
+	 *
+	 * Call TDH.SYS.TDMR.INIT iteratively until all PAMT entries
+	 * of the requested TDMR are initialized (i.e. until the
+	 * next-to-initialize address reaches the end of the TDMR).
+	 */
+	do {
+		struct tdx_module_output out;
+		int ret;
+
+		ret = seamcall(TDH_SYS_TDMR_INIT, tdmr->base, 0, 0, 0,
+				NULL, &out);
+		if (ret)
+			return ret;
+		/*
+		 * RDX contains the 'next-to-initialize' address if
+		 * TDH.SYS.TDMR.INIT succeeded.
+		 */
+		next = out.rdx;
+		if (need_resched())
+			cond_resched();
+	} while (next < tdmr->base + tdmr->size);
+
+	return 0;
+}
+
+/* Initialize all TDMRs */
+static int init_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+	int i;
+
+	/*
+	 * Initialize TDMRs one-by-one for simplicity, though the TDX
+	 * architecture does allow different TDMRs to be initialized in
+	 * parallel on multiple CPUs. Parallel initialization could
+	 * be added later when the time spent in the serialized scheme
+	 * becomes a real concern.
+	 */
+	for (i = 0; i < tdmr_num; i++) {
+		int ret;
+
+		ret = init_tdmr(tdmr_array[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int init_tdx_module(void)
 {
 	struct tdmr_info **tdmr_array;
@@ -1451,11 +1510,12 @@ static int init_tdx_module(void)
 	if (ret)
 		goto out_free_pamts;

-	/*
-	 * Return -EFAULT until all steps of TDX module
-	 * initialization are done.
-	 */
-	ret = -EFAULT;
+	/* Initialize TDMRs to complete the TDX module initialization */
+	ret = init_tdmrs(tdmr_array, tdmr_num);
+	if (ret)
+		goto out_free_pamts;
+
+	tdx_module_status = TDX_MODULE_INITIALIZED;
 out_free_pamts:
 	/*
 	 * Free PAMTs allocated in construct_tdmrs() when TDX module
@@ -1478,6 +1538,11 @@ static int init_tdx_module(void)
 	free_tdmrs(tdmr_array, tdmr_num);
 	kfree(tdmr_array);
 out:
+	if (ret)
+		pr_info("Failed to initialize TDX module.\n");
+	else
+		pr_info("TDX module initialized.\n");
+
 	return ret;
 }

diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index bba8cabea4bb..212f83374c0a 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -126,6 +126,7 @@ struct tdmr_info {
 #define TDH_SYS_INFO		32
 #define TDH_SYS_INIT		33
 #define TDH_SYS_LP_INIT		35
+#define TDH_SYS_TDMR_INIT	36
 #define TDH_SYS_LP_SHUTDOWN	44
 #define TDH_SYS_CONFIG		45
From patchwork Sun Mar 13 10:49:59 2022
From: Kai Huang
Subject: [PATCH v2 19/21] x86: Flush cache of TDX private memory during kexec()
Date: Sun, 13 Mar 2022 23:49:59 +1300

If TDX is ever enabled and/or used to run any TD guests, the cachelines of TDX private memory, including the PAMTs used by the TDX module, need to be flushed before transiting to the new kernel; otherwise they may silently corrupt the new kernel.

The TDX module can only be initialized once during its lifetime, and TDX has no interface to reset the TDX module to an uninitialized state so that it could be initialized again. This means that if the old kernel has enabled TDX, the new kernel won't be able to use TDX again. Ideally, therefore, the old kernel should shut down the TDX module if it was ever initialized, so that no further SEAMCALLs can be made to it.

However, shutting down the TDX module requires a SEAMCALL, which requires the cpu to be in VMX operation (VMXON has been done). Currently, only KVM handles entering/leaving VMX operation, so there's no guarantee that all cpus are in VMX operation during kexec(). Therefore, this implementation doesn't shut down the TDX module, but only flushes the cache and leaves the TDX module open.

And it's fine to leave the module open. If the new kernel wants to use TDX, it needs to go through the initialization process, and it will fail at the first SEAMCALL because the TDX module is not in an uninitialized state. If the new kernel doesn't want to use TDX, then the TDX module won't run at all.
Following the implementation of SME support, use wbinvd() to flush the cache in stop_this_cpu(). Introduce a new function, platform_has_tdx(), to check whether the platform is TDX-capable, and do wbinvd() when it returns true. platform_has_tdx() returns true when SEAMRR is enabled and there are enough TDX private KeyIDs to run at least one TD guest (both of which are detected at boot time).

TDX is enabled on demand at runtime, and enabling TDX involves a state machine protected by a mutex to prevent multiple callers from initializing TDX in parallel. Reading the TDX module state requires holding that mutex, but stop_this_cpu() runs in interrupt context, so just check whether the platform supports TDX and flush the cache.

Signed-off-by: Kai Huang
---
 arch/x86/include/asm/tdx.h |  2 ++
 arch/x86/kernel/process.c  | 15 ++++++++++++++-
 arch/x86/virt/vmx/tdx.c    | 14 ++++++++++++++
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index b526d41c4bbf..24f2b7e8b280 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -85,10 +85,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 void tdx_detect_cpu(struct cpuinfo_x86 *c);
 int tdx_detect(void);
 int tdx_init(void);
+bool platform_has_tdx(void);
 #else
 static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
 static inline int tdx_detect(void) { return -ENODEV; }
 static inline int tdx_init(void) { return -ENODEV; }
+static inline bool platform_has_tdx(void) { return false; }
 #endif /* CONFIG_INTEL_TDX_HOST */

 #endif /* !__ASSEMBLY__ */

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 71aa12082370..bf3d1c9cb00c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -766,8 +766,21 @@ void stop_this_cpu(void *dummy)
 	 * without the encryption bit, they don't race each other when flushed
 	 * and potentially end up with the wrong entry being committed to
 	 * memory.
+	 *
+	 * In case of kexec, similar to SME, if TDX is ever enabled, the
+	 * cachelines of TDX private memory (including PAMTs) used by the
+	 * TDX module need to be flushed before transiting to the new
+	 * kernel, otherwise they may silently corrupt the new kernel.
+	 *
+	 * Note TDX is enabled on demand at runtime, and enabling TDX has
+	 * a state machine protected with a mutex to prevent concurrent
+	 * calls from multiple callers. Holding the mutex is required to
+	 * get the TDX enabling status, but this function runs in
+	 * interrupt context. So to keep it simple, always flush the
+	 * cache when the platform supports TDX (detected at boot time),
+	 * regardless of whether TDX is truly enabled by the kernel.
 	 */
-	if (boot_cpu_has(X86_FEATURE_SME))
+	if (boot_cpu_has(X86_FEATURE_SME) || platform_has_tdx())
 		native_wbinvd();

 	for (;;) {
 		/*

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index f2b9c98191ed..d9ad8dc7111e 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1681,3 +1681,17 @@ int tdx_init(void)
 	return ret;
 }
 EXPORT_SYMBOL_GPL(tdx_init);
+
+/**
+ * platform_has_tdx - Whether the platform supports TDX
+ *
+ * Check whether the platform supports TDX (i.e. TDX is enabled in
+ * BIOS), regardless of whether TDX is truly enabled by the kernel.
+ *
+ * Return true if SEAMRR is enabled, and there are sufficient TDX
+ * private KeyIDs to run TD guests.
+ */
+bool platform_has_tdx(void)
+{
+	return seamrr_enabled() && tdx_keyid_sufficient();
+}
From patchwork Sun Mar 13 10:50:00 2022
From: Kai Huang
Subject: [PATCH v2 20/21] x86/virt/tdx: Add kernel command line to opt-in TDX host support
Date: Sun, 13 Mar 2022 23:50:00 +1300
Message-Id: <97c3539843a2c9cfdfd4a4ba2786be9b76afdd78.1647167475.git.kai.huang@intel.com>

Enabling TDX consumes additional memory (used by TDX as metadata) and additional initialization time. Introduce a kernel command line option so that users can opt in to TDX host kernel support only when they truly want to use TDX.

Signed-off-by: Kai Huang
---
 Documentation/admin-guide/kernel-parameters.txt |  6 ++++++
 arch/x86/virt/vmx/tdx.c                         | 14 ++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f5a27f067db9..9f85cafd0c2d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5707,6 +5707,12 @@
 	tdfx=		[HW,DRM]

+	tdx_host=	[X86-64, TDX]
+			Format: {on|off}
+			on: Enable TDX host kernel support
+			off: Disable TDX host kernel support
+			Default is off.
+
 	test_suspend=	[SUSPEND][,N]
 			Specify "mem" (for Suspend-to-RAM) or "standby" (for
 			standby suspend) or "freeze" (for suspend type freeze)

diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index d9ad8dc7111e..2022f9c019b8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -116,6 +116,16 @@ static struct tdsysinfo_struct tdx_sysinfo;
 /* TDX global KeyID to protect TDX metadata */
 static u32 tdx_global_keyid;

+static bool enable_tdx_host;
+
+static int __init tdx_host_setup(char *s)
+{
+	if (!strcmp(s, "on"))
+		enable_tdx_host = true;
+	return 1;
+}
+__setup("tdx_host=", tdx_host_setup);
+
 static bool __seamrr_enabled(void)
 {
 	return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -501,6 +511,10 @@ static int detect_p_seamldr(void)

 static int __tdx_detect(void)
 {
+	/* Disabled by kernel command line */
+	if (!enable_tdx_host)
+		goto no_tdx_module;
+
 	/* The TDX module is not loaded if SEAMRR is disabled */
 	if (!seamrr_enabled()) {
 		pr_info("SEAMRR not enabled.\n");
From patchwork Sun Mar 13 10:50:01 2022
From: Kai Huang
Subject: [PATCH v2 21/21] Documentation/x86: Add documentation for TDX host support
Date: Sun, 13 Mar 2022 23:50:01 +1300

Add documentation for TDX host support.
Signed-off-by: Kai Huang
---
 Documentation/x86/index.rst    |   1 +
 Documentation/x86/tdx_host.rst | 300 +++++++++++++++++++++++++++++++++
 2 files changed, 301 insertions(+)
 create mode 100644 Documentation/x86/tdx_host.rst

diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index 382e53ca850a..145fc251fbfc 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -25,6 +25,7 @@ x86-specific Documentation
    intel_txt
    amd-memory-encryption
    tdx
+   tdx_host
    pti
    mds
    microcode

diff --git a/Documentation/x86/tdx_host.rst b/Documentation/x86/tdx_host.rst
new file mode 100644
index 000000000000..a843ede9d45c
--- /dev/null
+++ b/Documentation/x86/tdx_host.rst
@@ -0,0 +1,300 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================================
+Intel Trusted Domain Extensions (TDX) host kernel support
+=========================================================
+
+Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
+host and certain physical attacks. To support TDX, a new CPU mode called
+Secure Arbitration Mode (SEAM) is added to Intel processors.
+
+SEAM is an extension to the VMX architecture to define a new VMX root
+operation called 'SEAM VMX root' and a new VMX non-root operation called
+'SEAM VMX non-root'. Collectively, the SEAM VMX root and SEAM VMX
+non-root execution modes are called operation in SEAM.
+
+SEAM VMX root operation is designed to host a CPU-attested software
+module called the 'Intel TDX module' to manage virtual machine (VM)
+guests called Trust Domains (TD). The TDX module implements the
+functions to build, tear down, and start execution of TD VMs. SEAM VMX
+root is also designed to additionally host a CPU-attested software
+module called the 'Intel Persistent SEAMLDR (Intel P-SEAMLDR)' module
+to load and update the Intel TDX module.
+
+The software in SEAM VMX root runs in the memory region defined by the
+SEAM range register (SEAMRR).
 Access to this range is restricted to SEAM
+VMX root operation. Code fetches outside of SEAMRR when in SEAM VMX root
+operation are meant to be disallowed and lead to an unbreakable shutdown.
+
+TDX leverages Intel Multi-Key Total Memory Encryption (MKTME) to crypto
+protect TD guests. TDX reserves part of the MKTME KeyID space as TDX
+private KeyIDs, which can only be used by software running in SEAM. The
+physical address bits reserved for encoding TDX private KeyIDs are
+treated as reserved bits when not in SEAM operation. The partitioning of
+MKTME KeyIDs and TDX private KeyIDs is configured by BIOS.
+
+The host kernel transfers control to either the P-SEAMLDR or the TDX
+module via the new SEAMCALL instruction. SEAMCALLs are host-side
+interface functions defined by the P-SEAMLDR and the TDX module around
+the new SEAMCALL instruction. They are similar to a hypercall, except
+they are made by the host kernel to the SEAM software modules.
+
+Before being able to manage TD guests, the TDX module must be loaded
+into SEAMRR and properly initialized using SEAMCALLs defined by the TDX
+architecture. The current implementation assumes both the P-SEAMLDR and
+the TDX module are loaded by BIOS before the kernel boots.
+
+Detection and Initialization
+----------------------------
+
+The presence of SEAMRR is reported via a new SEAMRR bit (15) of the
+IA32_MTRRCAP MSR. The SEAMRR range registers consist of a pair of MSRs:
+IA32_SEAMRR_PHYS_BASE (0x1400) and IA32_SEAMRR_PHYS_MASK (0x1401).
+SEAMRR is enabled when bit 3 of IA32_SEAMRR_PHYS_BASE is set and
+bits 10 and 11 of IA32_SEAMRR_PHYS_MASK are set.
+
+However, there is no CPUID or MSR for querying the presence of the TDX
+module or the P-SEAMLDR. SEAMCALL fails with VMfailInvalid when SEAM
+software is not loaded, so SEAMCALL can be used to detect the P-SEAMLDR
+and the TDX module. The SEAMLDR.INFO SEAMCALL is used to detect both the
+P-SEAMLDR and the TDX module.
Success of the SEAMCALL means the P-SEAMLDR is
+loaded, and the P-SEAMLDR information returned by the SEAMCALL further
+tells whether the TDX module is loaded or not.
+
+The user can check whether the TDX module is initialized via dmesg:
+
+  [..] tdx: P-SEAMLDR: version 0x0, vendor_id: 0x8086, build_date: 20211209, build_num 160, major 1, minor 0
+  [..] tdx: TDX module detected.
+  [..] tdx: TDX module: vendor_id 0x8086, major_version 1, minor_version 0, build_date 20211209, build_num 160
+  [..] tdx: TDX module initialized.
+
+Initializing TDX takes time (in seconds) and additional memory space (for
+metadata). Both are affected by the size of the total usable memory which
+the TDX module is configured with. In particular, the TDX metadata
+consumes ~1/256 of TDX usable memory. This leads to a non-negligible
+burden as the current implementation simply treats all E820 RAM ranges as
+TDX usable memory (all system RAM meets the security requirements on the
+first generation of TDX-capable platforms).
+
+Therefore, the kernel uses lazy TDX initialization to avoid such burden
+for all users on a TDX-capable platform. The software component (e.g.
+KVM) which wants to use TDX is expected to call the two helpers below to
+detect and initialize the TDX module when TDX is truly needed:
+
+  if (tdx_detect())
+    goto no_tdx;
+  if (tdx_init())
+    goto no_tdx;
+
+TDX detection and initialization are done via SEAMCALLs, which require
+the CPU to be in VMX operation. The caller of the above two helpers
+should ensure that condition.
+
+Currently, KVM is the only user of TDX, and it already handles
+entering/leaving VMX operation. Letting KVM initialize TDX on demand
+avoids handling entering/leaving VMX operation, which isn't trivial, in
+the core kernel.
+
+In addition, a new kernel parameter 'tdx_host={on/off}' allows the admin
+to force-disable the TDX capability.
+
+TDX Memory Management
+---------------------
+
+The TDX architecture manages TDX memory via the data structures below:
+
+1) Convertible Memory Regions (CMRs)
+
+TDX provides increased levels of memory confidentiality and integrity.
+This requires special hardware support for features like memory
+encryption and storage of memory integrity checksums. A CMR represents a
+memory range that meets those requirements and can be used as TDX memory.
+The list of CMRs can be queried from the TDX module.
+
+2) TD Memory Regions (TDMRs)
+
+The TDX module manages TDX usable memory via TD Memory Regions (TDMR).
+Each TDMR describes its own base and size, the base and size of its
+metadata (PAMT), and an array of reserved areas to hold the memory
+region address holes and PAMTs. A TDMR must be 1G-aligned and have a
+size in 1G granularity.
+
+The host kernel is responsible for choosing which convertible memory
+regions (residing in CMRs) to use as TDX memory, constructing a list of
+TDMRs to cover all those memory regions, and configuring the TDMRs to
+the TDX module.
+
+3) Physical Address Metadata Tables (PAMTs)
+
+This metadata essentially serves as the 'struct page' for the TDX module,
+recording things like which TD guest 'owns' a given page of memory. Each
+TDMR has a dedicated PAMT.
+
+A PAMT is not reserved by the hardware upfront and must be allocated by
+the kernel and given to the TDX module. The PAMT for a given TDMR
+doesn't have to be within that TDMR, but a PAMT must be within one CMR.
+Additionally, if a PAMT overlaps with a TDMR, the overlapping part must
+be marked as reserved in that particular TDMR.
+
+Kernel Policy of TDX Memory
+---------------------------
+
+The first generation of TDX essentially guarantees that all system RAM
+memory regions (excluding the memory below 1MB) are covered by CMRs.
+Currently, to avoid having to modify the page allocator to support both
+TDX and non-TDX allocation, the kernel chooses to use all system RAM as
+TDX memory.
A list of TDMRs is constructed based on all RAM entries
+in the e820 table and configured to the TDX module.
+
+Limitations
+-----------
+
+1. Constructing TDMRs
+
+Currently, the kernel tries to create one TDMR for each RAM entry in
+e820. 'e820_table' is used to find all RAM entries to honor the 'mem'
+and 'memmap' kernel command line parameters. However, the 'memmap'
+command line may also result in many discrete RAM entries. TDX
+architecturally only supports a limited number of TDMRs (currently 64).
+In this case, constructing TDMRs may fail due to exceeding the maximum
+number of TDMRs. The user is responsible for avoiding this; otherwise
+TDX may not be available. This can be further enhanced by supporting
+merging of adjacent TDMRs.
+
+2. PAMT allocation
+
+Currently, the kernel allocates the PAMT for each TDMR separately using
+alloc_contig_pages(). alloc_contig_pages() only guarantees the PAMT is
+allocated from a given NUMA node, but doesn't have control over
+allocating the PAMT from a given TDMR range. This may result in all
+PAMTs on one NUMA node being within one single TDMR. PAMTs overlapping
+with a given TDMR must be put into that TDMR's reserved areas too.
+However, TDX only supports a limited number of reserved areas per TDMR
+(currently 16), thus too many PAMTs in one NUMA node may cause TDMR
+construction to fail due to exceeding the TDMR's maximum number of
+reserved areas.
+
+The user is responsible for not creating too many discrete RAM entries
+on one NUMA node, which may result in having too many TDMRs on one node,
+which eventually results in TDMR construction failure due to exceeding
+the maximum number of reserved areas. This can be further enhanced by
+supporting per-NUMA-node PAMT allocation, which could reduce the number
+of PAMTs to one for each node.
+
+3. TDMR initialization
+
+Currently, the kernel initializes TDMRs one by one. This may take a
+couple of seconds to finish on large memory systems (TBs).
This can be
+further enhanced by initializing different TDMRs in parallel on
+multiple CPUs.
+
+4. CPU hotplug
+
+The first generation of TDX architecturally doesn't support ACPI CPU
+hotplug. All logical CPUs are enabled by the BIOS in the MADT table.
+Also, the first generation of TDX-capable platforms don't support ACPI
+CPU hotplug either. Since this physically cannot happen, the kernel
+currently doesn't have any check in the ACPI CPU hotplug code path to
+disable it.
+
+Also, only TDX module initialization requires all BIOS-enabled CPUs to
+be online. After the initialization, any logical CPU can be brought
+down and brought back online later. Therefore this series doesn't
+change logical CPU hotplug either.
+
+This can be enhanced when any future generation of TDX starts to support
+ACPI CPU hotplug.
+
+5. Memory hotplug
+
+The first generation of TDX architecturally doesn't support memory
+hotplug. The CMRs are generated by the BIOS during boot and are fixed
+during the machine's runtime.
+
+Also, the first generation of TDX-capable platforms don't support ACPI
+memory hotplug. Since it physically cannot happen, the kernel currently
+doesn't have any check in the ACPI memory hotplug code path to disable
+it.
+
+A special case of memory hotplug is adding NVDIMM as system RAM using
+the kmem driver. However, the first generation of TDX-capable platforms
+cannot turn on TDX and NVDIMM simultaneously, so in practice this cannot
+happen either.
+
+Another case is that the admin can use the 'memmap' kernel command line
+to create legacy PMEMs and use them as TD guest memory, or,
+theoretically, can use the kmem driver to add them as system RAM. The
+current implementation always includes legacy PMEMs when constructing
+TDMRs so they are also TDX memory. So legacy PMEMs can either be used
+as TD guest memory directly or be converted to system RAM via the kmem
+driver.
+
+This can be enhanced when a future generation of TDX starts to support
+ACPI memory hotplug, or when NVDIMM and TDX can be enabled
+simultaneously on the same platform.
+
+6. Online CPUs
+
+TDX initialization includes a step where a certain SEAMCALL must be
+called on every BIOS-enabled CPU (i.e. with an ACPI MADT entry marked
+as enabled). Otherwise, the initialization process aborts at a later
+step.
+
+The user should avoid using boot parameters (such as maxcpus, nr_cpus,
+possible_cpus) or offlining CPUs before initializing TDX. Doing so will
+lead to a mismatch between online CPUs and BIOS-enabled CPUs, resulting
+in TDX module initialization failure.
+
+It is OK to offline CPUs after TDX initialization is completed.
+
+7. Kexec
+
+The TDX module can be initialized only once during its lifetime. The
+first generation of TDX doesn't have an interface to reset the TDX
+module to an uninitialized state so that it can be initialized again.
+
+This implies:
+
+  - If the old kernel fails to initialize TDX, the new kernel cannot
+    use TDX either, unless the new kernel fixes the bug which leads to
+    the initialization failure in the old kernel and can resume from
+    where the old kernel stopped. This requires certain coordination
+    between the two kernels.
+
+  - If the old kernel has initialized TDX successfully, the new kernel
+    may be able to use TDX if the two kernels have exactly the same
+    configurations of the TDX module. It further requires the new
+    kernel to reserve the TDX metadata pages (allocated by the old
+    kernel) in its page allocator. It also requires coordination
+    between the two kernels. Furthermore, if kexec() is done when there
+    are active TD guests running, the new kernel cannot use TDX because
+    it's extremely hard for the old kernel to pass all TDX private
+    pages to the new kernel.
+
+Given that, the current implementation doesn't support TDX after kexec()
+(unless the old kernel hasn't initialized TDX at all).
+
+The current implementation doesn't shut down the TDX module but leaves
+it open during kexec(). This is because shutting down the TDX module
+requires the CPU to be in VMX operation, but there's no guarantee of
+this during kexec(). Leaving the TDX module open is not the best case,
+but it is OK since the new kernel won't be able to use TDX anyway
+(therefore the TDX module won't run at all).
+
+This can be further enhanced when the core kernel (non-KVM) can handle
+VMXON.
+
+If TDX is ever enabled and/or used to run any TD guests, the cachelines
+of TDX private memory, including PAMTs, used by the TDX module need to
+be flushed before transitioning to the new kernel, otherwise they may
+silently corrupt the new kernel. Similar to SME, the current
+implementation flushes the cache in stop_this_cpu().
+
+8. Initialization error
+
+Currently, any error that happens during TDX initialization moves the
+TDX module to the SHUTDOWN state. No SEAMCALL is allowed in this state,
+and the TDX module cannot be re-initialized without a hard reset.
+
+This can be further enhanced to treat some errors as recoverable errors
+and let the caller retry later. A more detailed state machine can be
+added to record the internal state of the TDX module, and the
+initialization can resume from that state in the next try.
+
+Specifically, there are three cases that can be treated as recoverable
+errors: 1) -ENOMEM (i.e. due to PAMT allocation failure); 2)
+TDH.SYS.CONFIG error due to TDH.SYS.LP.INIT not having been called on
+all CPUs (i.e. due to offline CPUs); 3) -EPERM when the caller doesn't
+guarantee all CPUs are in VMX operation.