From patchwork Mon Mar 6 14:13:54 2023
X-Patchwork-Submitter: "Huang, Kai"
X-Patchwork-Id: 13161224
From: Kai Huang <kai.huang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: linux-mm@kvack.org, dave.hansen@intel.com, peterz@infradead.org,
    tglx@linutronix.de, seanjc@google.com, pbonzini@redhat.com,
    dan.j.williams@intel.com, rafael.j.wysocki@intel.com,
    kirill.shutemov@linux.intel.com, ying.huang@intel.com,
    reinette.chatre@intel.com, len.brown@intel.com, tony.luck@intel.com,
    ak@linux.intel.com, isaku.yamahata@intel.com, chao.gao@intel.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, david@redhat.com,
    bagasdotme@gmail.com, sagis@google.com, imammedo@redhat.com,
    kai.huang@intel.com
Subject: [PATCH v10 09/16] x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions
Date: Tue, 7 Mar 2023 03:13:54 +1300
Message-Id: <0e200d3110d4f7fce9c569156c5ec4c94fd13c1c.1678111292.git.kai.huang@intel.com>
X-Mailer: git-send-email 2.39.2
Start to transition through the multiple steps of constructing a list of
"TD Memory Regions" (TDMRs) to cover all TDX-usable memory regions.

The kernel configures TDX-usable memory regions by passing a list of
TDMRs to the TDX module.  Each TDMR contains the information of the
base/size of a memory region, the base/size of the associated Physical
Address Metadata Table (PAMT), and a list of reserved areas in the
region.

Do the first step: fill out a number of TDMRs to cover all TDX memory
regions.  To keep it simple, always try to use one TDMR for each memory
region.  As the first step, only set up the base/size for each TDMR.
Each TDMR must be 1G aligned and the size must be in 1G granularity.
This implies that one TDMR could cover multiple memory regions.  If a
memory region spans a 1GB boundary and the former part is already
covered by the previous TDMR, just use a new TDMR for the remaining
part.

TDX only supports a limited number of TDMRs.  Disable TDX if all TDMRs
are consumed but there are more memory regions to cover.

There are fancier things that could be done, like trying to merge
adjacent TDMRs.  This would allow more pathological memory layouts to be
supported.  But current systems are not even close to exhausting the
existing TDMR resources in practice.  For now, keep it simple.

Signed-off-by: Kai Huang <kai.huang@intel.com>
---

v9 -> v10:
 - No change.

v8 -> v9:
 - Added the last paragraph in the changelog (Dave).
 - Removed unnecessary type cast in tdmr_entry() (Dave).

---
 arch/x86/virt/vmx/tdx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 93 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 2b87cedc7fce..e2487d872bbd 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -508,6 +508,93 @@ static void free_tdmr_list(struct tdmr_info_list *tdmr_list)
 			tdmr_list->max_tdmrs * tdmr_list->tdmr_sz);
 }
 
+/* Get the TDMR from the list at the given index. */
+static struct tdmr_info *tdmr_entry(struct tdmr_info_list *tdmr_list,
+				    int idx)
+{
+	int tdmr_info_offset = tdmr_list->tdmr_sz * idx;
+
+	return (void *)tdmr_list->tdmrs + tdmr_info_offset;
+}
+
+#define TDMR_ALIGNMENT		BIT_ULL(30)
+#define TDMR_PFN_ALIGNMENT	(TDMR_ALIGNMENT >> PAGE_SHIFT)
+#define TDMR_ALIGN_DOWN(_addr)	ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr)	ALIGN((_addr), TDMR_ALIGNMENT)
+
+static inline u64 tdmr_end(struct tdmr_info *tdmr)
+{
+	return tdmr->base + tdmr->size;
+}
+
+/*
+ * Take the memory referenced in @tmb_list and populate the
+ * preallocated @tdmr_list, following all the special alignment
+ * and size rules for TDMR.
+ */
+static int fill_out_tdmrs(struct list_head *tmb_list,
+			  struct tdmr_info_list *tdmr_list)
+{
+	struct tdx_memblock *tmb;
+	int tdmr_idx = 0;
+
+	/*
+	 * Loop over TDX memory regions and fill out TDMRs to cover them.
+	 * To keep it simple, always try to use one TDMR to cover one
+	 * memory region.
+	 *
+	 * In practice TDX1.0 supports 64 TDMRs, which is big enough to
+	 * cover all memory regions in reality if the admin doesn't use
+	 * 'memmap' to create a bunch of discrete memory regions.  When
+	 * there's a real problem, enhancement can be done to merge TDMRs
+	 * to reduce the final number of TDMRs.
+	 */
+	list_for_each_entry(tmb, tmb_list, list) {
+		struct tdmr_info *tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+		u64 start, end;
+
+		start = TDMR_ALIGN_DOWN(PFN_PHYS(tmb->start_pfn));
+		end   = TDMR_ALIGN_UP(PFN_PHYS(tmb->end_pfn));
+
+		/*
+		 * A valid size indicates the current TDMR has already
+		 * been filled out to cover the previous memory region(s).
+		 */
+		if (tdmr->size) {
+			/*
+			 * Loop to the next if the current memory region
+			 * has already been fully covered.
+			 */
+			if (end <= tdmr_end(tdmr))
+				continue;
+
+			/* Otherwise, skip the already covered part. */
+			if (start < tdmr_end(tdmr))
+				start = tdmr_end(tdmr);
+
+			/*
+			 * Create a new TDMR to cover the current memory
+			 * region, or the remaining part of it.
+			 */
+			tdmr_idx++;
+			if (tdmr_idx >= tdmr_list->max_tdmrs) {
+				pr_warn("initialization failed: TDMRs exhausted.\n");
+				return -ENOSPC;
+			}
+
+			tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+		}
+
+		tdmr->base = start;
+		tdmr->size = end - start;
+	}
+
+	/* @tdmr_idx is always the index of last valid TDMR. */
+	tdmr_list->nr_consumed_tdmrs = tdmr_idx + 1;
+
+	return 0;
+}
+
 /*
  * Construct a list of TDMRs on the preallocated space in @tdmr_list
  * to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -517,10 +604,15 @@ static int construct_tdmrs(struct list_head *tmb_list,
 			   struct tdmr_info_list *tdmr_list,
 			   struct tdsysinfo_struct *sysinfo)
 {
+	int ret;
+
+	ret = fill_out_tdmrs(tmb_list, tdmr_list);
+	if (ret)
+		return ret;
+
 	/*
 	 * TODO:
 	 *
-	 *  - Fill out TDMRs to cover all TDX memory regions.
 	 *  - Allocate and set up PAMTs for each TDMR.
 	 *  - Designate reserved areas for each TDMR.
 	 *