From patchwork Fri Dec 9 06:52:32 2022
X-Patchwork-Submitter: Kai Huang
X-Patchwork-Id: 13069305
From: Kai Huang
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: linux-mm@kvack.org, dave.hansen@intel.com, peterz@infradead.org,
    tglx@linutronix.de, seanjc@google.com, pbonzini@redhat.com,
    dan.j.williams@intel.com, rafael.j.wysocki@intel.com,
    kirill.shutemov@linux.intel.com, ying.huang@intel.com,
    reinette.chatre@intel.com, len.brown@intel.com, tony.luck@intel.com,
    ak@linux.intel.com, isaku.yamahata@intel.com, chao.gao@intel.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, bagasdotme@gmail.com,
    sagis@google.com, imammedo@redhat.com, kai.huang@intel.com
Subject: [PATCH v8 11/16] x86/virt/tdx: Designate reserved areas for all TDMRs
Date: Fri, 9 Dec 2022 19:52:32 +1300
Message-Id: <27dcd2781a450b3f77a2aec833de6a3669bc0fb8.1670566861.git.kai.huang@intel.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: 
References: 
MIME-Version: 1.0
As the last step of constructing TDMRs, populate reserved areas for all
TDMRs.  For each TDMR, put all memory holes within this TDMR into the
TDMR's reserved areas.  Also, for all PAMTs which overlap with this
TDMR, put the overlapping parts into reserved areas too.

Reviewed-by: Isaku Yamahata
Signed-off-by: Kai Huang
---

v7 -> v8: (Dave)
 - "set_up" -> "populate" in function name change (Dave).
 - Improved comment suggested by Dave.
 - Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
 - No change.

v5 -> v6:
 - Rebase due to using 'tdx_memblock' instead of memblock.
 - Split tdmr_set_up_rsvd_areas() into two functions to handle memory
   hole and PAMT respectively.
 - Added Isaku's Reviewed-by.
---
 arch/x86/virt/vmx/tdx/tdx.c | 213 ++++++++++++++++++++++++++++++++++--
 1 file changed, 205 insertions(+), 8 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index cf970a783f1f..620b35e2a61b 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -687,6 +688,202 @@ static unsigned long tdmrs_count_pamt_pages(struct tdmr_info_list *tdmr_list)
 	return pamt_npages;
 }
 
+static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, u64 addr,
+			      u64 size, u16 max_reserved_per_tdmr)
+{
+	struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
+	int idx = *p_idx;
+
+	/* Reserved area must be 4K aligned in offset and size */
+	if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
+		return -EINVAL;
+
+	if (idx >= max_reserved_per_tdmr)
+		return -E2BIG;
+
+	rsvd_areas[idx].offset = addr - tdmr->base;
+	rsvd_areas[idx].size = size;
+
+	*p_idx = idx + 1;
+
+	return 0;
+}
+
+/*
+ * Go through @tmb_list to find holes between memory areas.  If any of
+ * those holes fall within @tdmr, set up a TDMR reserved area to cover
+ * the hole.
+ */
+static int tdmr_populate_rsvd_holes(struct list_head *tmb_list,
+				    struct tdmr_info *tdmr,
+				    int *rsvd_idx,
+				    u16 max_reserved_per_tdmr)
+{
+	struct tdx_memblock *tmb;
+	u64 prev_end;
+	int ret;
+
+	/*
+	 * Start looking for reserved blocks at the
+	 * beginning of the TDMR.
+	 */
+	prev_end = tdmr->base;
+	list_for_each_entry(tmb, tmb_list, list) {
+		u64 start, end;
+
+		start = PFN_PHYS(tmb->start_pfn);
+		end = PFN_PHYS(tmb->end_pfn);
+
+		/* Break if this region is after the TDMR */
+		if (start >= tdmr_end(tdmr))
+			break;
+
+		/* Exclude regions before this TDMR */
+		if (end < tdmr->base)
+			continue;
+
+		/*
+		 * Skip over memory areas that
+		 * have already been dealt with.
+		 */
+		if (start <= prev_end) {
+			prev_end = end;
+			continue;
+		}
+
+		/* Add the hole before this region */
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+					 start - prev_end,
+					 max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+
+		prev_end = end;
+	}
+
+	/* Add the hole after the last region if it exists. */
+	if (prev_end < tdmr_end(tdmr)) {
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+					 tdmr_end(tdmr) - prev_end,
+					 max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Go through @tdmr_list to find all PAMTs.  If any of those PAMTs
+ * overlaps with @tdmr, set up a TDMR reserved area to cover the
+ * overlapping part.
+ */
+static int tdmr_populate_rsvd_pamts(struct tdmr_info_list *tdmr_list,
+				    struct tdmr_info *tdmr,
+				    int *rsvd_idx,
+				    u16 max_reserved_per_tdmr)
+{
+	int i, ret;
+
+	for (i = 0; i < tdmr_list->nr_tdmrs; i++) {
+		struct tdmr_info *tmp = tdmr_entry(tdmr_list, i);
+		unsigned long pamt_start_pfn, pamt_npages;
+		u64 pamt_start, pamt_end;
+
+		tdmr_get_pamt(tmp, &pamt_start_pfn, &pamt_npages);
+		/* Each TDMR must already have PAMT allocated */
+		WARN_ON_ONCE(!pamt_npages || !pamt_start_pfn);
+
+		pamt_start = PFN_PHYS(pamt_start_pfn);
+		pamt_end = PFN_PHYS(pamt_start_pfn + pamt_npages);
+
+		/* Skip PAMTs outside of the given TDMR */
+		if ((pamt_end <= tdmr->base) ||
+		    (pamt_start >= tdmr_end(tdmr)))
+			continue;
+
+		/* Only mark the part within the TDMR as reserved */
+		if (pamt_start < tdmr->base)
+			pamt_start = tdmr->base;
+		if (pamt_end > tdmr_end(tdmr))
+			pamt_end = tdmr_end(tdmr);
+
+		ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, pamt_start,
+					 pamt_end - pamt_start,
+					 max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/* Compare function called by sort() for TDMR reserved areas */
+static int rsvd_area_cmp_func(const void *a, const void *b)
+{
+	struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
+	struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
+
+	if (r1->offset + r1->size <= r2->offset)
+		return -1;
+	if (r1->offset >= r2->offset + r2->size)
+		return 1;
+
+	/* Reserved areas cannot overlap.  The caller must guarantee. */
+	WARN_ON_ONCE(1);
+	return -1;
+}
+
+/*
+ * Populate reserved areas for the given @tdmr, including memory holes
+ * (via @tmb_list) and PAMTs (via @tdmr_list).
+ */
+static int tdmr_populate_rsvd_areas(struct tdmr_info *tdmr,
+				    struct list_head *tmb_list,
+				    struct tdmr_info_list *tdmr_list,
+				    u16 max_reserved_per_tdmr)
+{
+	int ret, rsvd_idx = 0;
+
+	ret = tdmr_populate_rsvd_holes(tmb_list, tdmr, &rsvd_idx,
+				       max_reserved_per_tdmr);
+	if (ret)
+		return ret;
+
+	ret = tdmr_populate_rsvd_pamts(tdmr_list, tdmr, &rsvd_idx,
+				       max_reserved_per_tdmr);
+	if (ret)
+		return ret;
+
+	/* TDX requires reserved areas listed in address ascending order */
+	sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area),
+	     rsvd_area_cmp_func, NULL);
+
+	return 0;
+}
+
+/*
+ * Populate reserved areas for all TDMRs in @tdmr_list, including memory
+ * holes (via @tmb_list) and PAMTs.
+ */
+static int tdmrs_populate_rsvd_areas_all(struct tdmr_info_list *tdmr_list,
+					 struct list_head *tmb_list,
+					 u16 max_reserved_per_tdmr)
+{
+	int i;
+
+	for (i = 0; i < tdmr_list->nr_tdmrs; i++) {
+		int ret;
+
+		ret = tdmr_populate_rsvd_areas(tdmr_entry(tdmr_list, i),
+					       tmb_list, tdmr_list,
+					       max_reserved_per_tdmr);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 /*
  * Construct a list of TDMRs on the preallocated space in @tdmr_list
  * to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -706,14 +903,14 @@ static int construct_tdmrs(struct list_head *tmb_list,
 			sysinfo->pamt_entry_size);
 	if (ret)
 		goto err;
-	/*
-	 * TODO:
-	 *
-	 *  - Designate reserved areas for each TDMR.
-	 *
-	 * Return -EINVAL until constructing TDMRs is done
-	 */
-	ret = -EINVAL;
+
+	ret = tdmrs_populate_rsvd_areas_all(tdmr_list, tmb_list,
+			sysinfo->max_reserved_per_tdmr);
+	if (ret)
+		goto err_free_pamts;
+
+	return 0;
+err_free_pamts:
 	tdmrs_free_pamt_all(tdmr_list);
 err:
 	return ret;