From patchwork Mon Apr 25 03:39:23 2022
X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com"
X-Patchwork-Id: 12825225
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Varad Gautam, Dario Faggioli, Dave Hansen, Brijesh Singh, Mike Rapoport, David Hildenbrand, x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv5 01/12] x86/boot/: Centralize __pa()/__va() definitions Date: Mon, 25 Apr 2022 06:39:23 +0300 Message-Id: <20220425033934.68551-2-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 7D66940038 X-Stat-Signature: pwdzuaphgs9y54mtqe7qsp4pscnc3pgn Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=M5JAnBQm; spf=none (imf01.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.43) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857980-835851 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Replace multiple __pa()/__va() definitions with a single one in misc.h. Signed-off-by: Kirill A. Shutemov Reviewed-by: David Hildenbrand Reviewed-by: Mike Rapoport --- arch/x86/boot/compressed/ident_map_64.c | 8 -------- arch/x86/boot/compressed/misc.h | 9 +++++++++ arch/x86/boot/compressed/sev.c | 2 -- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c index f7213d0943b8..fe523ee1a19f 100644 --- a/arch/x86/boot/compressed/ident_map_64.c +++ b/arch/x86/boot/compressed/ident_map_64.c @@ -8,14 +8,6 @@ * Copyright (C) 2016 Kees Cook */ -/* - * Since we're dealing with identity mappings, physical and virtual - * addresses are the same, so override these defines which are ultimately - * used by the headers in misc.h. - */ -#define __pa(x) ((unsigned long)(x)) -#define __va(x) ((void *)((unsigned long)(x))) - /* No PAGE_TABLE_ISOLATION support needed either: */ #undef CONFIG_PAGE_TABLE_ISOLATION diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h index ea71cf3d64e1..9f7154a30d37 100644 --- a/arch/x86/boot/compressed/misc.h +++ b/arch/x86/boot/compressed/misc.h @@ -19,6 +19,15 @@ /* cpu_feature_enabled() cannot be used this early */ #define USE_EARLY_PGTABLE_L5 +/* + * Boot stub deals with identity mappings, physical and virtual addresses are + * the same, so override these defines. + * + * will not define them if they are already defined. 
+ */ +#define __pa(x) ((unsigned long)(x)) +#define __va(x) ((void *)((unsigned long)(x))) + #include #include #include diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c index 28bcf04c022e..4dcea0bc4fe4 100644 --- a/arch/x86/boot/compressed/sev.c +++ b/arch/x86/boot/compressed/sev.c @@ -106,9 +106,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt, } #undef __init -#undef __pa #define __init -#define __pa(x) ((unsigned long)(x)) #define __BOOT_COMPRESSED From patchwork Mon Apr 25 03:39:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825226 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7B28C43217 for ; Mon, 25 Apr 2022 03:39:47 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7AFCA6B00B0; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 743ED6B00AF; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5143F6B00AE; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 314BD6B00AD for ; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 1A325273CF for ; Mon, 25 Apr 2022 03:39:45 +0000 (UTC) X-FDA: 79393997130.05.68EF8D8 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by imf13.hostedemail.com (Postfix) with ESMTP id 2B56A20037 for ; Mon, 25 Apr 2022 03:39:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857984; x=1682393984; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AU3N/AS272NjI3/TmIzxmfjdnreKQE9x+jX/bkWRHy4=; b=bpWy36rp5TD057JeNdfyVBKGC97GYS6VqCIxg0NGFyqQA9tATm7xNbp1 DNp3bzPvLIdzBO0aZwnqmxsW31x8ntGYgFVF8mTdoPxZ+h6emAeEGNxW9 jzD+YtD3DAOKUhqLhiuoqH584J0n2/qSsfgtkPLmzN0SKdCNeDMYFb0D0 A0uV+tl86MZTxnvbrmDuUzuRQCdUypRaTtmShw5TEAe/OS7XWEasv/pJs 2y1+a4cah3bY1SFtTxF555u8THZPKMMAUeWtzw+v1/betOBEdgViA8yJ7 K9h7JeQFA15HcxdannACADjRxzZ14TBN7CN3Gn3bU2A+STIz9cwct1ttj g==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="265294788" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="265294788" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="512438874" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga003.jf.intel.com with ESMTP; 24 Apr 2022 20:39:35 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id 933EE12C; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. 
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" , Mike Rapoport Subject: [PATCHv5 02/12] mm: Add support for unaccepted memory Date: Mon, 25 Apr 2022 06:39:24 +0300 Message-Id: <20220425033934.68551-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 2B56A20037 X-Stat-Signature: c7e3qphefpxxd995wkt1o7chxf8ns176 Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=bpWy36rp; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf13.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.115) smtp.mailfrom=kirill.shutemov@linux.intel.com X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1650857978-382011 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: UEFI Specification version 2.9 introduces the concept of memory acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, require memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific to the Virtual Machine platform. There are several ways kernel can deal with unaccepted memory: 1. Accept all the memory during the boot. It is easy to implement and it doesn't have runtime cost once the system is booted. The downside is very long boot time. Accept can be parallelized to multiple CPUs to keep it manageable (i.e. via DEFERRED_STRUCT_PAGE_INIT), but it tends to saturate memory bandwidth and does not scale beyond the point. 2. Accept a block of memory on the first use. It requires more infrastructure and changes in page allocator to make it work, but it provides good boot time. On-demand memory accept means latency spikes every time kernel steps onto a new memory block. The spikes will go away once workload data set size gets stabilized or all memory gets accepted. 3. Accept all memory in background. Introduce a thread (or multiple) that gets memory accepted proactively. It will minimize time the system experience latency spikes on memory allocation while keeping low boot time. This approach cannot function on its own. It is an extension of #2: background memory acceptance requires functional scheduler, but the page allocator may need to tap into unaccepted memory before that. The downside of the approach is that these threads also steal CPU cycles and memory bandwidth from the user's workload and may hurt user experience. Implement #2 for now. It is a reasonable default. Some workloads may want to use #1 or #3 and they can be implemented later based on user's demands. Support of unaccepted memory requires a few changes in core-mm code: - memblock has to accept memory on allocation; - page allocator has to accept memory on the first allocation of the page; Memblock change is trivial. 
The page allocator is modified to accept pages on the first allocation. The new page type (encoded in the _mapcount) -- PageUnaccepted() -- is used to indicate that the page requires acceptance. Architecture has to provide two helpers if it wants to support unaccepted memory: - accept_memory() makes a range of physical addresses accepted. - memory_is_unaccepted() checks anything within the range of physical addresses requires acceptance. Signed-off-by: Kirill A. Shutemov Acked-by: Mike Rapoport # memblock --- include/linux/page-flags.h | 31 +++++++++++++++ mm/internal.h | 11 ++++++ mm/memblock.c | 9 +++++ mm/page_alloc.c | 77 +++++++++++++++++++++++++++++++++++++- 4 files changed, 126 insertions(+), 2 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 9d8eeaa67d05..7f21267366a9 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -928,6 +928,14 @@ static inline bool is_page_hwpoison(struct page *page) #define PG_offline 0x00000100 #define PG_table 0x00000200 #define PG_guard 0x00000400 +#define PG_unaccepted 0x00000800 + +/* + * Page types allowed at page_expected_state() + * + * PageUnaccepted() will get cleared in post_alloc_hook(). + */ +#define PAGE_TYPES_EXPECTED PG_unaccepted #define PageType(page, flag) \ ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE) @@ -953,6 +961,18 @@ static __always_inline void __ClearPage##uname(struct page *page) \ page->page_type |= PG_##lname; \ } +#define PAGE_TYPE_OPS_FALSE(uname) \ +static __always_inline int Page##uname(struct page *page) \ +{ \ + return false; \ +} \ +static __always_inline void __SetPage##uname(struct page *page) \ +{ \ +} \ +static __always_inline void __ClearPage##uname(struct page *page) \ +{ \ +} + /* * PageBuddy() indicates that the page is free and in the buddy system * (see mm/page_alloc.c). @@ -983,6 +1003,17 @@ PAGE_TYPE_OPS(Buddy, buddy) */ PAGE_TYPE_OPS(Offline, offline) +/* + * PageUnaccepted() indicates that the page has to be "accepted" before it can + * be read or written. The page allocator must call accept_page() before + * touching the page or returning it to the caller. + */ +#ifdef CONFIG_UNACCEPTED_MEMORY +PAGE_TYPE_OPS(Unaccepted, unaccepted) +#else +PAGE_TYPE_OPS_FALSE(Unaccepted) +#endif + extern void page_offline_freeze(void); extern void page_offline_thaw(void); extern void page_offline_begin(void); diff --git a/mm/internal.h b/mm/internal.h index cf16280ce132..10302fe857c4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -758,4 +758,15 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags); DECLARE_PER_CPU(struct per_cpu_nodestat, boot_nodestats); +#ifndef CONFIG_UNACCEPTED_MEMORY +static inline bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end) +{ + return false; +} + +static inline void accept_memory(phys_addr_t start, phys_addr_t end) +{ +} +#endif + #endif /* __MM_INTERNAL_H */ diff --git a/mm/memblock.c b/mm/memblock.c index e4f03a6e8e56..a1f7f8b304d5 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1405,6 +1405,15 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size, */ kmemleak_alloc_phys(found, size, 0, 0); + /* + * Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, + * require memory to be accepted before it can be used by the + * guest. + * + * Accept the memory of the allocated buffer. 
+ */ + accept_memory(found, found + size); + return found; } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 6e5b4488a0c5..d38cfb146f11 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -121,6 +121,12 @@ typedef int __bitwise fpi_t; */ #define FPI_SKIP_KASAN_POISON ((__force fpi_t)BIT(2)) +/* + * Check if the page needs to be marked as PageUnaccepted(). + * Used for the new pages added to the buddy allocator for the first time. + */ +#define FPI_UNACCEPTED_SLOWPATH ((__force fpi_t)BIT(3)) + /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */ static DEFINE_MUTEX(pcp_batch_high_lock); #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8) @@ -1023,6 +1029,29 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn, return page_is_buddy(higher_page, higher_buddy, order + 1); } +/* + * Page acceptance can be very slow. Do not call under critical locks. + */ +static void accept_page(struct page *page, unsigned int order) +{ + phys_addr_t start = page_to_phys(page); + int i; + + accept_memory(start, start + (PAGE_SIZE << order)); + + for (i = 0; i < (1 << order); i++) { + if (PageUnaccepted(page + i)) + __ClearPageUnaccepted(page + i); + } +} + +static bool page_is_unaccepted(struct page *page, unsigned int order) +{ + phys_addr_t start = page_to_phys(page); + + return memory_is_unaccepted(start, start + (PAGE_SIZE << order)); +} + /* * Freeing function for a buddy system allocator. * @@ -1058,6 +1087,7 @@ static inline void __free_one_page(struct page *page, unsigned long combined_pfn; struct page *buddy; bool to_tail; + bool page_needs_acceptance = PageUnaccepted(page); VM_BUG_ON(!zone_is_initialized(zone)); VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page); @@ -1089,6 +1119,11 @@ static inline void __free_one_page(struct page *page, clear_page_guard(zone, buddy, order, migratetype); else del_page_from_free_list(buddy, zone, order); + + /* Mark page unaccepted if any of merged pages were unaccepted */ + if (PageUnaccepted(buddy)) + page_needs_acceptance = true; + combined_pfn = buddy_pfn & pfn; page = page + (combined_pfn - pfn); pfn = combined_pfn; @@ -1124,6 +1159,23 @@ static inline void __free_one_page(struct page *page, done_merging: set_buddy_order(page, order); + /* + * The page gets marked as PageUnaccepted() if any of merged-in pages + * is PageUnaccepted(). + * + * New pages, just being added to buddy allocator, do not have + * PageUnaccepted() set. FPI_UNACCEPTED_SLOWPATH indicates that the + * page is new and page_is_unaccepted() check is required to + * determinate if accaptance is required. + * + * Avoid calling page_is_unaccepted() if it is known that the page + * needs acceptance. It can be costly. + */ + if (!page_needs_acceptance && (fpi_flags & FPI_UNACCEPTED_SLOWPATH)) + page_needs_acceptance = page_is_unaccepted(page, order); + if (page_needs_acceptance) + __SetPageUnaccepted(page); + if (fpi_flags & FPI_TO_TAIL) to_tail = true; else if (is_shuffle_order(order)) @@ -1149,7 +1201,13 @@ static inline void __free_one_page(struct page *page, static inline bool page_expected_state(struct page *page, unsigned long check_flags) { - if (unlikely(atomic_read(&page->_mapcount) != -1)) + /* + * The page must not be mapped to userspace and must not have + * a PageType other than listed in PAGE_TYPES_EXPECTED. + * + * Note, bit cleared means the page type is set. 
+ */ + if (unlikely((atomic_read(&page->_mapcount) | PAGE_TYPES_EXPECTED) != -1)) return false; if (unlikely((unsigned long)page->mapping | @@ -1654,7 +1712,9 @@ void __free_pages_core(struct page *page, unsigned int order) * Bypass PCP and place fresh pages right to the tail, primarily * relevant for memory onlining. */ - __free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON); + __free_pages_ok(page, order, + FPI_TO_TAIL | FPI_SKIP_KASAN_POISON | + FPI_UNACCEPTED_SLOWPATH); } #ifdef CONFIG_NUMA @@ -1807,6 +1867,9 @@ static void __init deferred_free_range(unsigned long pfn, return; } + /* Accept chunks smaller than page-block upfront */ + accept_memory(pfn << PAGE_SHIFT, (pfn + nr_pages) << PAGE_SHIFT); + for (i = 0; i < nr_pages; i++, page++, pfn++) { if ((pfn & (pageblock_nr_pages - 1)) == 0) set_pageblock_migratetype(page, MIGRATE_MOVABLE); @@ -2266,6 +2329,13 @@ static inline void expand(struct zone *zone, struct page *page, if (set_page_guard(zone, &page[size], high, migratetype)) continue; + /* + * Transfer PageUnaccepted() to the newly split pages so + * they can be accepted after dropping the zone lock. + */ + if (PageUnaccepted(page)) + __SetPageUnaccepted(&page[size]); + add_to_free_list(&page[size], zone, high, migratetype); set_buddy_order(&page[size], high); } @@ -2396,6 +2466,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order, */ kernel_unpoison_pages(page, 1 << order); + if (PageUnaccepted(page)) + accept_page(page, order); + /* * As memory initialization might be integrated into KASAN, * KASAN unpoisoning and memory initializion code must be From patchwork Mon Apr 25 03:39:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825228 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42E35C4332F for ; Mon, 25 Apr 2022 03:39:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1EBF66B00AD; Sun, 24 Apr 2022 23:39:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0F5B06B00AE; Sun, 24 Apr 2022 23:39:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E144A6B00AF; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id A2F756B00AE for ; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 82F9421FE9 for ; Mon, 25 Apr 2022 03:39:45 +0000 (UTC) X-FDA: 79393997130.22.E8ED487 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by imf01.hostedemail.com (Postfix) with ESMTP id 5389140038 for ; Mon, 25 Apr 2022 03:39:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857984; x=1682393984; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uq7WbkY0GvTQhFT7oa2kwzHtvnsgmaIdyx/9kdIYCD0=; b=Ulsfxi1ekNtLj4dkY7y+V8QrMwHzN0Wt07eYSoWF0/guIQTs4R0ftY5M sDPQqqOM7lZjYXMebDCVSh4l3XTKIeKBSRLHPGOjMo3LV2oFQk9Tg/n04 N8EBf2Rk54DVfuzRWfNrsAQji6yVAcJVa4J38zUDXXs4NKCR24WG9n4ly 
WpGa79OTLjPHECSqNtDCcK6kX0oEUmzWvDgIyzFlw6sT6+V9HAqWzNVSm jLoGxW2bffd7YeKhLd1vJX6b6wXgz01xpho40UC2yJSOlM8eZB+8KCAUF i/iZXNbXjDprDhOuHkf/9I+gxCZtQGhULpXNqRB9g3EujXiQihSzWSbCU w==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="351576506" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="351576506" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="729520378" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga005.jf.intel.com with ESMTP; 24 Apr 2022 20:39:35 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id A608B3A8; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 03/12] efi/x86: Get full memory map in allocate_e820() Date: Mon, 25 Apr 2022 06:39:25 +0300 Message-Id: <20220425033934.68551-4-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 5389140038 X-Stat-Signature: 6w39wg9gp1shycyayh7k6e7xtqp49e1m Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=Ulsfxi1e; spf=none (imf01.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.43) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857981-374964 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently allocate_e820() only interested in the size of map and size of memory descriptor to determine how many e820 entries the kernel needs. UEFI Specification version 2.9 introduces a new memory type -- unaccepted memory. To track unaccepted memory kernel needs to allocate a bitmap. The size of the bitmap is dependent on the maximum physical address present in the system. A full memory map is required to find the maximum address. Modify allocate_e820() to get a full memory map. This is preparation for the next patch that implements handling of unaccepted memory in EFI stub. Signed-off-by: Kirill A. 
Shutemov --- drivers/firmware/efi/libstub/x86-stub.c | 30 ++++++++++++------------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index 01ddd4502e28..5401985901f5 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -569,31 +569,29 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, } static efi_status_t allocate_e820(struct boot_params *params, + struct efi_boot_memmap *map, struct setup_data **e820ext, u32 *e820ext_size) { - unsigned long map_size, desc_size, map_key; efi_status_t status; - __u32 nr_desc, desc_version; + __u32 nr_desc; - /* Only need the size of the mem map and size of each mem descriptor */ - map_size = 0; - status = efi_bs_call(get_memory_map, &map_size, NULL, &map_key, - &desc_size, &desc_version); - if (status != EFI_BUFFER_TOO_SMALL) - return (status != EFI_SUCCESS) ? status : EFI_UNSUPPORTED; - - nr_desc = map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS; + status = efi_get_memory_map(map); + if (status != EFI_SUCCESS) + return status; - if (nr_desc > ARRAY_SIZE(params->e820_table)) { - u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table); + nr_desc = *map->map_size / *map->desc_size; + if (nr_desc > ARRAY_SIZE(params->e820_table) - EFI_MMAP_NR_SLACK_SLOTS) { + u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table) + + EFI_MMAP_NR_SLACK_SLOTS; status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); if (status != EFI_SUCCESS) - return status; + goto out; } - - return EFI_SUCCESS; +out: + efi_bs_call(free_pool, *map->map); + return status; } struct exit_boot_struct { @@ -642,7 +640,7 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) priv.boot_params = boot_params; priv.efi = &boot_params->efi_info; - status = allocate_e820(boot_params, &e820ext, &e820ext_size); + status = allocate_e820(boot_params, &map, &e820ext, &e820ext_size); if (status != EFI_SUCCESS) return status; From patchwork Mon Apr 25 03:39:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825227 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E0EBC433F5 for ; Mon, 25 Apr 2022 03:39:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B85D56B00AC; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ABB3C6B00B1; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7AB3C6B00AD; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 5A3CD6B00AC for ; Sun, 24 Apr 2022 23:39:45 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 37F42623A8 for ; Mon, 25 Apr 2022 03:39:45 +0000 (UTC) X-FDA: 79393997130.02.5476A45 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by imf23.hostedemail.com (Postfix) with ESMTP id 2069A140036 for ; Mon, 25 Apr 2022 03:39:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857984; x=1682393984; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=q/YmYQjfOe8AznqHyjTAnCkWN8UWX+2u2MP+z2dU+nQ=; b=GdWW9wB+wIvAVfCk793CgFYTXH12BalEyzcIwSJB/A3AVMSaxFwqr7qX DjutmfZftEiGfBfJXn2tfTd54Pv9lDMcwLQneoJiM92JgWMk9ofh7nDUo Q8cxwuMHxNWPH2ILwyMIiTVdaX4TL2YnbARKm3V+g8vgjKiffNlV84l0z Ps4KTZpOFv6wG0O9U4ryLk35Cyn4FxQ9J/XcI4N0Xs0qK8aKOFp1qWXOQ W+xrW7gPk3mxURDYTumRub9E/V3sE9gpVcYJG/6r0gjaG+QQxZhzl+VrH XG4Gc3BDYMyvg4i9EtHF059hlC6IKXiGnjQj+FNlwkYcQ6me+2QkYkaNo w==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="262727667" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="262727667" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="563911372" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga007.fm.intel.com with ESMTP; 24 Apr 2022 20:39:35 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id B3F7D4E1; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 04/12] x86/boot: Add infrastructure required for unaccepted memory support Date: Mon, 25 Apr 2022 06:39:26 +0300 Message-Id: <20220425033934.68551-5-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 2069A140036 X-Stat-Signature: wrx8oaeaesdrs8cny45ujs7xdtew3joe X-Rspam-User: Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=GdWW9wB+; spf=none (imf23.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.93) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857979-908064 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pull functionality from the main kernel headers and lib/ that is required for unaccepted memory support. This is preparatory patch. The users for the functionality will come in following patches. Signed-off-by: Kirill A. 
Shutemov --- arch/x86/boot/bitops.h | 40 +++++++++++++++ arch/x86/boot/compressed/align.h | 14 +++++ arch/x86/boot/compressed/bitmap.c | 43 ++++++++++++++++ arch/x86/boot/compressed/bitmap.h | 49 ++++++++++++++++++ arch/x86/boot/compressed/bits.h | 36 +++++++++++++ arch/x86/boot/compressed/compiler.h | 9 ++++ arch/x86/boot/compressed/find.c | 54 +++++++++++++++++++ arch/x86/boot/compressed/find.h | 80 +++++++++++++++++++++++++++++ arch/x86/boot/compressed/math.h | 37 +++++++++++++ arch/x86/boot/compressed/minmax.h | 61 ++++++++++++++++++++++ 10 files changed, 423 insertions(+) create mode 100644 arch/x86/boot/compressed/align.h create mode 100644 arch/x86/boot/compressed/bitmap.c create mode 100644 arch/x86/boot/compressed/bitmap.h create mode 100644 arch/x86/boot/compressed/bits.h create mode 100644 arch/x86/boot/compressed/compiler.h create mode 100644 arch/x86/boot/compressed/find.c create mode 100644 arch/x86/boot/compressed/find.h create mode 100644 arch/x86/boot/compressed/math.h create mode 100644 arch/x86/boot/compressed/minmax.h diff --git a/arch/x86/boot/bitops.h b/arch/x86/boot/bitops.h index 02e1dea11d94..61eb820ee402 100644 --- a/arch/x86/boot/bitops.h +++ b/arch/x86/boot/bitops.h @@ -41,4 +41,44 @@ static inline void set_bit(int nr, void *addr) asm("btsl %1,%0" : "+m" (*(u32 *)addr) : "Ir" (nr)); } +static __always_inline void __set_bit(long nr, volatile unsigned long *addr) +{ + asm volatile(__ASM_SIZE(bts) " %1,%0" : : "m" (*(volatile long *) addr), + "Ir" (nr) : "memory"); +} + +static __always_inline void __clear_bit(long nr, volatile unsigned long *addr) +{ + asm volatile(__ASM_SIZE(btr) " %1,%0" : : "m" (*(volatile long *) addr), + "Ir" (nr) : "memory"); +} + +/** + * __ffs - find first set bit in word + * @word: The word to search + * + * Undefined if no bit exists, so code should check against 0 first. + */ +static __always_inline unsigned long __ffs(unsigned long word) +{ + asm("rep; bsf %1,%0" + : "=r" (word) + : "rm" (word)); + return word; +} + +/** + * ffz - find first zero bit in word + * @word: The word to search + * + * Undefined if no zero exists, so code should check against ~0UL first. 
+ */ +static __always_inline unsigned long ffz(unsigned long word) +{ + asm("rep; bsf %1,%0" + : "=r" (word) + : "r" (~word)); + return word; +} + #endif /* BOOT_BITOPS_H */ diff --git a/arch/x86/boot/compressed/align.h b/arch/x86/boot/compressed/align.h new file mode 100644 index 000000000000..c72ff4e8dd63 --- /dev/null +++ b/arch/x86/boot/compressed/align.h @@ -0,0 +1,14 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_ALIGN_H +#define BOOT_ALIGN_H +#define _LINUX_ALIGN_H /* Inhibit inclusion of */ + +/* @a is a power of 2 value */ +#define ALIGN(x, a) __ALIGN_KERNEL((x), (a)) +#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a)) +#define __ALIGN_MASK(x, mask) __ALIGN_KERNEL_MASK((x), (mask)) +#define PTR_ALIGN(p, a) ((typeof(p))ALIGN((unsigned long)(p), (a))) +#define PTR_ALIGN_DOWN(p, a) ((typeof(p))ALIGN_DOWN((unsigned long)(p), (a))) +#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0) + +#endif diff --git a/arch/x86/boot/compressed/bitmap.c b/arch/x86/boot/compressed/bitmap.c new file mode 100644 index 000000000000..789ecadeb521 --- /dev/null +++ b/arch/x86/boot/compressed/bitmap.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "bitmap.h" + +void __bitmap_set(unsigned long *map, unsigned int start, int len) +{ + unsigned long *p = map + BIT_WORD(start); + const unsigned int size = start + len; + int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG); + unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start); + + while (len - bits_to_set >= 0) { + *p |= mask_to_set; + len -= bits_to_set; + bits_to_set = BITS_PER_LONG; + mask_to_set = ~0UL; + p++; + } + if (len) { + mask_to_set &= BITMAP_LAST_WORD_MASK(size); + *p |= mask_to_set; + } +} + +void __bitmap_clear(unsigned long *map, unsigned int start, int len) +{ + unsigned long *p = map + BIT_WORD(start); + const unsigned int size = start + len; + int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG); + unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start); + + while (len - bits_to_clear >= 0) { + *p &= ~mask_to_clear; + len -= bits_to_clear; + bits_to_clear = BITS_PER_LONG; + mask_to_clear = ~0UL; + p++; + } + if (len) { + mask_to_clear &= BITMAP_LAST_WORD_MASK(size); + *p &= ~mask_to_clear; + } +} diff --git a/arch/x86/boot/compressed/bitmap.h b/arch/x86/boot/compressed/bitmap.h new file mode 100644 index 000000000000..34cce38d94e9 --- /dev/null +++ b/arch/x86/boot/compressed/bitmap.h @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_BITMAP_H +#define BOOT_BITMAP_H +#define __LINUX_BITMAP_H /* Inhibit inclusion of */ + +#include "../bitops.h" +#include "../string.h" +#include "align.h" + +#define BITMAP_MEM_ALIGNMENT 8 +#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1) + +#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1))) +#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1))) + +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) + +void __bitmap_set(unsigned long *map, unsigned int start, int len); +void __bitmap_clear(unsigned long *map, unsigned int start, int len); + +static __always_inline void bitmap_set(unsigned long *map, unsigned int start, + unsigned int nbits) +{ + if (__builtin_constant_p(nbits) && nbits == 1) + __set_bit(start, map); + else if (__builtin_constant_p(start & BITMAP_MEM_MASK) && + IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) && + __builtin_constant_p(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + memset((char *)map + start / 8, 0xff, nbits / 8); + 
else + __bitmap_set(map, start, nbits); +} + +static __always_inline void bitmap_clear(unsigned long *map, unsigned int start, + unsigned int nbits) +{ + if (__builtin_constant_p(nbits) && nbits == 1) + __clear_bit(start, map); + else if (__builtin_constant_p(start & BITMAP_MEM_MASK) && + IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) && + __builtin_constant_p(nbits & BITMAP_MEM_MASK) && + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) + memset((char *)map + start / 8, 0, nbits / 8); + else + __bitmap_clear(map, start, nbits); +} + +#endif diff --git a/arch/x86/boot/compressed/bits.h b/arch/x86/boot/compressed/bits.h new file mode 100644 index 000000000000..b00cd13c63c8 --- /dev/null +++ b/arch/x86/boot/compressed/bits.h @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_BITS_H +#define BOOT_BITS_H +#define __LINUX_BITS_H /* Inhibit inclusion of */ + +#ifdef __ASSEMBLY__ +#define _AC(X,Y) X +#define _AT(T,X) X +#else +#define __AC(X,Y) (X##Y) +#define _AC(X,Y) __AC(X,Y) +#define _AT(T,X) ((T)(X)) +#endif + +#define _UL(x) (_AC(x, UL)) +#define _ULL(x) (_AC(x, ULL)) +#define UL(x) (_UL(x)) +#define ULL(x) (_ULL(x)) + +#define BIT(nr) (UL(1) << (nr)) +#define BIT_ULL(nr) (ULL(1) << (nr)) +#define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG)) +#define BIT_WORD(nr) ((nr) / BITS_PER_LONG) +#define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG)) +#define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG) +#define BITS_PER_BYTE 8 + +#define GENMASK(h, l) \ + (((~UL(0)) - (UL(1) << (l)) + 1) & \ + (~UL(0) >> (BITS_PER_LONG - 1 - (h)))) + +#define GENMASK_ULL(h, l) \ + (((~ULL(0)) - (ULL(1) << (l)) + 1) & \ + (~ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) + +#endif diff --git a/arch/x86/boot/compressed/compiler.h b/arch/x86/boot/compressed/compiler.h new file mode 100644 index 000000000000..72e20cf01465 --- /dev/null +++ b/arch/x86/boot/compressed/compiler.h @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_COMPILER_H +#define BOOT_COMPILER_H +#define __LINUX_COMPILER_H /* Inhibit inclusion of */ + +# define likely(x) __builtin_expect(!!(x), 1) +# define unlikely(x) __builtin_expect(!!(x), 0) + +#endif diff --git a/arch/x86/boot/compressed/find.c b/arch/x86/boot/compressed/find.c new file mode 100644 index 000000000000..839be91aae52 --- /dev/null +++ b/arch/x86/boot/compressed/find.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include "bitmap.h" +#include "find.h" +#include "math.h" +#include "minmax.h" + +static __always_inline unsigned long swab(const unsigned long y) +{ +#if __BITS_PER_LONG == 64 + return __builtin_bswap32(y); +#else /* __BITS_PER_LONG == 32 */ + return __builtin_bswap64(y); +#endif +} + +unsigned long _find_next_bit(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, + unsigned long start, unsigned long invert, unsigned long le) +{ + unsigned long tmp, mask; + + if (unlikely(start >= nbits)) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + + /* Handle 1st word. 
*/ + mask = BITMAP_FIRST_WORD_MASK(start); + if (le) + mask = swab(mask); + + tmp &= mask; + + start = round_down(start, BITS_PER_LONG); + + while (!tmp) { + start += BITS_PER_LONG; + if (start >= nbits) + return nbits; + + tmp = addr1[start / BITS_PER_LONG]; + if (addr2) + tmp &= addr2[start / BITS_PER_LONG]; + tmp ^= invert; + } + + if (le) + tmp = swab(tmp); + + return min(start + __ffs(tmp), nbits); +} diff --git a/arch/x86/boot/compressed/find.h b/arch/x86/boot/compressed/find.h new file mode 100644 index 000000000000..910d007a7ec5 --- /dev/null +++ b/arch/x86/boot/compressed/find.h @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_FIND_H +#define BOOT_FIND_H +#define __LINUX_FIND_H /* Inhibit inclusion of */ + +#include "../bitops.h" +#include "align.h" +#include "bits.h" +#include "compiler.h" + +unsigned long _find_next_bit(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, + unsigned long start, unsigned long invert, unsigned long le); + +/** + * find_next_bit - find the next set bit in a memory region + * @addr: The address to base the search on + * @offset: The bitnumber to start searching at + * @size: The bitmap size in bits + * + * Returns the bit number for the next set bit + * If no bits are set, returns @size. + */ +static inline +unsigned long find_next_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + if (small_const_nbits(size)) { + unsigned long val; + + if (unlikely(offset >= size)) + return size; + + val = *addr & GENMASK(size - 1, offset); + return val ? __ffs(val) : size; + } + + return _find_next_bit(addr, NULL, size, offset, 0UL, 0); +} + +/** + * find_next_zero_bit - find the next cleared bit in a memory region + * @addr: The address to base the search on + * @offset: The bitnumber to start searching at + * @size: The bitmap size in bits + * + * Returns the bit number of the next zero bit + * If no bits are zero, returns @size. + */ +static inline +unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size, + unsigned long offset) +{ + if (small_const_nbits(size)) { + unsigned long val; + + if (unlikely(offset >= size)) + return size; + + val = *addr | ~GENMASK(size - 1, offset); + return val == ~0UL ? size : ffz(val); + } + + return _find_next_bit(addr, NULL, size, offset, ~0UL, 0); +} + +/** + * for_each_set_bitrange_from - iterate over all set bit ranges [b; e) + * @b: bit offset of start of current bitrange (first set bit); must be initialized + * @e: bit offset of end of current bitrange (first unset bit) + * @addr: bitmap address to base the search on + * @size: bitmap size in number of bits + */ +#define for_each_set_bitrange_from(b, e, addr, size) \ + for ((b) = find_next_bit((addr), (size), (b)), \ + (e) = find_next_zero_bit((addr), (size), (b) + 1); \ + (b) < (size); \ + (b) = find_next_bit((addr), (size), (e) + 1), \ + (e) = find_next_zero_bit((addr), (size), (b) + 1)) +#endif diff --git a/arch/x86/boot/compressed/math.h b/arch/x86/boot/compressed/math.h new file mode 100644 index 000000000000..b8b9fccb3c03 --- /dev/null +++ b/arch/x86/boot/compressed/math.h @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_MATH_H +#define BOOT_MATH_H +#define __LINUX_MATH_H /* Inhibit inclusion of */ + +/* + * + * This looks more complex than it should be. But we need to + * get the type for the ~ right in round_down (it needs to be + * as wide as the result!), and we want to evaluate the macro + * arguments just once each. 
+ */ +#define __round_mask(x, y) ((__typeof__(x))((y)-1)) + +/** + * round_up - round up to next specified power of 2 + * @x: the value to round + * @y: multiple to round up to (must be a power of 2) + * + * Rounds @x up to next multiple of @y (which must be a power of 2). + * To perform arbitrary rounding up, use roundup() below. + */ +#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1) + +/** + * round_down - round down to next specified power of 2 + * @x: the value to round + * @y: multiple to round down to (must be a power of 2) + * + * Rounds @x down to next multiple of @y (which must be a power of 2). + * To perform arbitrary rounding down, use rounddown() below. + */ +#define round_down(x, y) ((x) & ~__round_mask(x, y)) + +#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d)) + +#endif diff --git a/arch/x86/boot/compressed/minmax.h b/arch/x86/boot/compressed/minmax.h new file mode 100644 index 000000000000..fbf640cfce32 --- /dev/null +++ b/arch/x86/boot/compressed/minmax.h @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0-only +#ifndef BOOT_MINMAX_H +#define BOOT_MINMAX_H +#define __LINUX_MINMAX_H /* Inhibit inclusion of */ + +/* + * This returns a constant expression while determining if an argument is + * a constant expression, most importantly without evaluating the argument. + * Glory to Martin Uecker + */ +#define __is_constexpr(x) \ + (sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8))) + +/* + * min()/max()/clamp() macros must accomplish three things: + * + * - avoid multiple evaluations of the arguments (so side-effects like + * "x++" happen only once) when non-constant. + * - perform strict type-checking (to generate warnings instead of + * nasty runtime surprises). See the "unnecessary" pointer comparison + * in __typecheck(). + * - retain result as a constant expressions when called with only + * constant expressions (to avoid tripping VLA warnings in stack + * allocation usage). + */ +#define __typecheck(x, y) \ + (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1))) + +#define __no_side_effects(x, y) \ + (__is_constexpr(x) && __is_constexpr(y)) + +#define __safe_cmp(x, y) \ + (__typecheck(x, y) && __no_side_effects(x, y)) + +#define __cmp(x, y, op) ((x) op (y) ? 
(x) : (y)) + +#define __cmp_once(x, y, unique_x, unique_y, op) ({ \ + typeof(x) unique_x = (x); \ + typeof(y) unique_y = (y); \ + __cmp(unique_x, unique_y, op); }) + +#define __careful_cmp(x, y, op) \ + __builtin_choose_expr(__safe_cmp(x, y), \ + __cmp(x, y, op), \ + __cmp_once(x, y, __UNIQUE_ID(__x), __UNIQUE_ID(__y), op)) + +/** + * min - return minimum of two values of the same or compatible types + * @x: first value + * @y: second value + */ +#define min(x, y) __careful_cmp(x, y, <) + +/** + * max - return maximum of two values of the same or compatible types + * @x: first value + * @y: second value + */ +#define max(x, y) __careful_cmp(x, y, >) + +#endif From patchwork Mon Apr 25 03:39:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825229 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7F6FC433FE for ; Mon, 25 Apr 2022 03:39:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id ABC216B00B3; Sun, 24 Apr 2022 23:39:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A44CA6B00B4; Sun, 24 Apr 2022 23:39:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 870C56B00B5; Sun, 24 Apr 2022 23:39:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 6E6906B00B3 for ; Sun, 24 Apr 2022 23:39:50 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 54EAE61D17 for ; Mon, 25 Apr 2022 03:39:50 +0000 (UTC) X-FDA: 79393997340.14.F414DA8 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by imf01.hostedemail.com (Postfix) with ESMTP id 20D9E40038 for ; Mon, 25 Apr 2022 03:39:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857989; x=1682393989; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=w5IxTrKVM9UmdAGSefOOJ0UE+57s2glhh61AAAfqqWs=; b=k06OEX/MoyufeU427ICfe2IqsOlenpI5LHDRpnEUShhqjcgs4c73dirj U0yFFHx57LG63iBAlFl1yb3zHW96FGoDhYWIZPo0hCe7pshEC5fOlvSFf snc8TUPN+YmW+eL6+TpuHuQSrL3SEMYcAj4pP9cTSqx7ppPR8BP738205 nDaBvzOR8GokaupX5HiTVi8P4pblwpcMg6Dzm41y1cCqBcKzZlajEqRrx f3eBcPbWw4spb+vm4piwoxKUtSsZeDLY0aaijen/9WWoYxtpjog3CrXLy Dm+yXqKhMp8hRMkKJwM+PlbCHUwe6lmg6ql9rbq6vV8vA2XitZgC7wNbh A==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="351576537" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="351576537" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="659959896" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga002.fm.intel.com with ESMTP; 24 Apr 2022 20:39:42 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id C1B54530; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. 
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 05/12] efi/x86: Implement support for unaccepted memory Date: Mon, 25 Apr 2022 06:39:27 +0300 Message-Id: <20220425033934.68551-6-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 20D9E40038 X-Stat-Signature: b6y5qsrf7knzkdce154skcqe8dms5xx5 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b="k06OEX/M"; spf=none (imf01.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.43) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857985-165235 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: UEFI Specification version 2.9 introduces the concept of memory acceptance: Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP, requiring memory to be accepted before it can be used by the guest. Accepting happens via a protocol specific for the Virtual Machine platform. Accepting memory is costly and it makes VMM allocate memory for the accepted guest physical address range. It's better to postpone memory acceptance until memory is needed. It lowers boot time and reduces memory overhead. The kernel needs to know what memory has been accepted. Firmware communicates this information via memory map: a new memory type -- EFI_UNACCEPTED_MEMORY -- indicates such memory. Range-based tracking works fine for firmware, but it gets bulky for the kernel: e820 has to be modified on every page acceptance. It leads to table fragmentation, but there's a limited number of entries in the e820 table Another option is to mark such memory as usable in e820 and track if the range has been accepted in a bitmap. One bit in the bitmap represents 2MiB in the address space: one 4k page is enough to track 64GiB or physical address space. In the worst-case scenario -- a huge hole in the middle of the address space -- It needs 256MiB to handle 4PiB of the address space. Any unaccepted memory that is not aligned to 2M gets accepted upfront. The bitmap is allocated and constructed in the EFI stub and passed down to the kernel via boot_params. allocate_e820() allocates the bitmap if unaccepted memory is present, according to the maximum address in the memory map. The same boot_params.unaccepted_memory can be used to pass the bitmap between two kernels on kexec, but the use-case is not yet implemented. Make KEXEC and UNACCEPTED_MEMORY mutually exclusive for now. The implementation requires some basic helpers in boot stub. They provided by linux/ includes in the main kernel image, but is not present in boot stub. Create copy of required functionality in the boot stub. Signed-off-by: Kirill A. 
Shutemov --- Documentation/x86/zero-page.rst | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/boot/compressed/mem.c | 68 +++++++++++++++++++++++ arch/x86/include/asm/unaccepted_memory.h | 10 ++++ arch/x86/include/uapi/asm/bootparam.h | 2 +- drivers/firmware/efi/Kconfig | 15 ++++++ drivers/firmware/efi/efi.c | 1 + drivers/firmware/efi/libstub/x86-stub.c | 69 ++++++++++++++++++++++++ include/linux/efi.h | 3 +- 9 files changed, 168 insertions(+), 2 deletions(-) create mode 100644 arch/x86/boot/compressed/mem.c create mode 100644 arch/x86/include/asm/unaccepted_memory.h diff --git a/Documentation/x86/zero-page.rst b/Documentation/x86/zero-page.rst index f088f5881666..bb8e9cb093cc 100644 --- a/Documentation/x86/zero-page.rst +++ b/Documentation/x86/zero-page.rst @@ -19,6 +19,7 @@ Offset/Size Proto Name Meaning 058/008 ALL tboot_addr Physical address of tboot shared page 060/010 ALL ist_info Intel SpeedStep (IST) BIOS support information (struct ist_info) +078/008 ALL unaccepted_memory Bitmap of unaccepted memory (1bit == 2M) 080/010 ALL hd0_info hd0 disk parameter, OBSOLETE!! 090/010 ALL hd1_info hd1 disk parameter, OBSOLETE!! 0A0/010 ALL sys_desc_table System description table (struct sys_desc_table), diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 8fd0e6ae2e1f..7f672f7e2fea 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -102,6 +102,7 @@ endif vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o +vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/mem.o vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c new file mode 100644 index 000000000000..415df0d3bc81 --- /dev/null +++ b/arch/x86/boot/compressed/mem.c @@ -0,0 +1,68 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "../cpuflags.h" +#include "bitmap.h" +#include "error.h" +#include "math.h" + +#define PMD_SHIFT 21 +#define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE - 1)) + +static inline void __accept_memory(phys_addr_t start, phys_addr_t end) +{ + /* Platform-specific memory-acceptance call goes here */ + error("Cannot accept memory"); +} + +/* + * The accepted memory bitmap only works at PMD_SIZE granularity. If a request + * comes in to mark memory as unaccepted which is not PMD_SIZE-aligned, simply + * accept the memory now since it can not be *marked* as unaccepted. + */ +void process_unaccepted_memory(struct boot_params *params, u64 start, u64 end) +{ + /* + * Accept small regions that might not be able to be represented + * in the bitmap. This is a bit imprecise and may accept some + * areas that could have been represented in the bitmap instead. + * + * Consider case like this: + * + * | 4k | 2044k | 2048k | + * ^ 0x0 ^ 2MB ^ 4MB + * + * all memory in the range is unaccepted, except for the first 4k. + * The second 2M can be represented in the bitmap, but kernel accept it + * right away. The imprecision makes the code simpler by ensuring that + * at least one bit will be set int the bitmap below. + */ + if (end - start < 2 * PMD_SIZE) { + __accept_memory(start, end); + return; + } + + /* + * No matter how the start and end are aligned, at least one unaccepted + * PMD_SIZE area will remain. 
+ */ + + /* Immediately accept a <PMD_SIZE piece at the start: */ + __accept_memory(start, round_up(start, PMD_SIZE)); + + /* Immediately accept a <PMD_SIZE piece at the end: */ + __accept_memory(round_down(end, PMD_SIZE), end); + + /* Record the now PMD_SIZE-aligned middle of the range in the bitmap: */ + start = round_up(start, PMD_SIZE); + end = round_down(end, PMD_SIZE); + bitmap_set((unsigned long *)params->unaccepted_memory, + start / PMD_SIZE, (end - start) / PMD_SIZE); +} diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h new file mode 100644 index 000000000000..df0736d32858 --- /dev/null +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2020 Intel Corporation */ +#ifndef _ASM_X86_UNACCEPTED_MEMORY_H +#define _ASM_X86_UNACCEPTED_MEMORY_H + +struct boot_params; + +void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); + +#endif diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h index b25d3f82c2f3..f7a32176f301 100644 --- a/arch/x86/include/uapi/asm/bootparam.h +++ b/arch/x86/include/uapi/asm/bootparam.h @@ -179,7 +179,7 @@ struct boot_params { __u64 tboot_addr; /* 0x058 */ struct ist_info ist_info; /* 0x060 */ __u64 acpi_rsdp_addr; /* 0x070 */ - __u8 _pad3[8]; /* 0x078 */ + __u64 unaccepted_memory; /* 0x078 */ __u8 hd0_info[16]; /* obsolete! */ /* 0x080 */ __u8 hd1_info[16]; /* obsolete! */ /* 0x090 */ struct sys_desc_table sys_desc_table; /* obsolete! */ /* 0x0a0 */ diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig index 2c3dac5ecb36..e8048586aefa 100644 --- a/drivers/firmware/efi/Kconfig +++ b/drivers/firmware/efi/Kconfig @@ -243,6 +243,21 @@ config EFI_DISABLE_PCI_DMA options "efi=disable_early_pci_dma" or "efi=no_disable_early_pci_dma" may be used to override this option. +config UNACCEPTED_MEMORY + bool + depends on EFI_STUB + depends on !KEXEC_CORE + help + Some Virtual Machine platforms, such as Intel TDX, require + some memory to be "accepted" by the guest before it can be used. + This mechanism helps prevent malicious hosts from making changes + to guest memory. + + UEFI specification v2.9 introduced EFI_UNACCEPTED_MEMORY memory type. + + This option adds support for unaccepted memory and makes such memory + usable by the kernel.
+ endmenu config EFI_EMBEDDED_FIRMWARE diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index 5502e176d51b..2c055afb1b11 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -747,6 +747,7 @@ static __initdata char memory_type_name[][13] = { "MMIO Port", "PAL Code", "Persistent", + "Unaccepted", }; char * __init efi_md_typeattr_format(char *buf, size_t size, diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c index 5401985901f5..f9b88174209e 100644 --- a/drivers/firmware/efi/libstub/x86-stub.c +++ b/drivers/firmware/efi/libstub/x86-stub.c @@ -15,6 +15,7 @@ #include #include #include +#include #include "efistub.h" @@ -504,6 +505,17 @@ setup_e820(struct boot_params *params, struct setup_data *e820ext, u32 e820ext_s e820_type = E820_TYPE_PMEM; break; + case EFI_UNACCEPTED_MEMORY: + if (!IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) { + efi_warn_once("The system has unaccepted memory," + " but kernel does not support it\n"); + efi_warn_once("Consider enabling UNACCEPTED_MEMORY\n"); + continue; + } + e820_type = E820_TYPE_RAM; + process_unaccepted_memory(params, d->phys_addr, + d->phys_addr + PAGE_SIZE * d->num_pages); + break; default: continue; } @@ -568,6 +580,59 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, return status; } +static efi_status_t allocate_unaccepted_memory(struct boot_params *params, + __u32 nr_desc, + struct efi_boot_memmap *map) +{ + unsigned long *mem = NULL; + u64 size, max_addr = 0; + efi_status_t status; + bool found = false; + int i; + + /* Check if there's any unaccepted memory and find the max address */ + for (i = 0; i < nr_desc; i++) { + efi_memory_desc_t *d; + + d = efi_early_memdesc_ptr(*map->map, *map->desc_size, i); + if (d->type == EFI_UNACCEPTED_MEMORY) + found = true; + if (d->phys_addr + d->num_pages * PAGE_SIZE > max_addr) + max_addr = d->phys_addr + d->num_pages * PAGE_SIZE; + } + + if (!found) { + params->unaccepted_memory = 0; + return EFI_SUCCESS; + } + + /* + * If unaccepted memory is present allocate a bitmap to track what + * memory has to be accepted before access. + * + * One bit in the bitmap represents 2MiB in the address space: + * A 4k bitmap can track 64GiB of physical address space. + * + * In the worst case scenario -- a huge hole in the middle of the + * address space -- It needs 256MiB to handle 4PiB of the address + * space. + * + * TODO: handle situation if params->unaccepted_memory has already set. + * It's required to deal with kexec. + * + * The bitmap will be populated in setup_e820() according to the memory + * map after efi_exit_boot_services(). 
+ */ + size = DIV_ROUND_UP(max_addr, PMD_SIZE * BITS_PER_BYTE); + status = efi_allocate_pages(size, (unsigned long *)&mem, ULONG_MAX); + if (status == EFI_SUCCESS) { + memset(mem, 0, size); + params->unaccepted_memory = (unsigned long)mem; + } + + return status; +} + static efi_status_t allocate_e820(struct boot_params *params, struct efi_boot_memmap *map, struct setup_data **e820ext, @@ -589,6 +654,10 @@ static efi_status_t allocate_e820(struct boot_params *params, if (status != EFI_SUCCESS) goto out; } + + if (IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) + status = allocate_unaccepted_memory(params, nr_desc, map); + out: efi_bs_call(free_pool, *map->map); return status; diff --git a/include/linux/efi.h b/include/linux/efi.h index ccd4d3f91c98..b0240fdcaf5b 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -108,7 +108,8 @@ typedef struct { #define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12 #define EFI_PAL_CODE 13 #define EFI_PERSISTENT_MEMORY 14 -#define EFI_MAX_MEMORY_TYPE 15 +#define EFI_UNACCEPTED_MEMORY 15 +#define EFI_MAX_MEMORY_TYPE 16 /* Attribute values: */ #define EFI_MEMORY_UC ((u64)0x0000000000000001ULL) /* uncached */ From patchwork Mon Apr 25 03:39:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825231 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88DB9C4332F for ; Mon, 25 Apr 2022 03:39:56 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 00DEF6B00B8; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EB2256B00B6; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B367A6B00B8; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 7930D6B00B6 for ; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 4C5BB22003 for ; Mon, 25 Apr 2022 03:39:51 +0000 (UTC) X-FDA: 79393997382.04.D564E00 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by imf29.hostedemail.com (Postfix) with ESMTP id 69C5812003A for ; Mon, 25 Apr 2022 03:39:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857990; x=1682393990; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Sj6Lg0REiZlYfnrUdKJKfc9AWEWiW+hzQ+rfnwkYy1o=; b=cpX0d1pgpz/F0YUSU/8WxG+fOl89BZClnt17CHfOU30u+Z6wfwnTRLTz XQVNACwqhUfrNWKxpNaALil8VjzY9S3kaWnjzmwkafzQTee0mlkkSUgk9 E4BASqgrp+oRbHJqcIp37Hnx3TTUJXhyaeVtk4djgFfQSQ3Btn9RpR+1B G3nnDYGFYAZOZ/8AOEjNbd0hRV3ZTlZ8+xbwp3JIwjXphiprno+4S32F3 JaeRTLh/MAB6tcsIw4vdbWD/wt4Y/as2kBA/F/pRLzHPJEqNefWUqjsd1 M1BNUXsXuvWYI6si804WbCa1WWevDCdGA5lZb52BsP/fzfwQWbQGF6WyT w==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="264641577" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="264641577" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="579045721" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga008.jf.intel.com with ESMTP; 24 Apr 2022 20:39:42 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id CF5F9595; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 06/12] x86/boot/compressed: Handle unaccepted memory Date: Mon, 25 Apr 2022 06:39:28 +0300 Message-Id: <20220425033934.68551-7-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 69C5812003A X-Stat-Signature: zpi9dmtupmk9ztjs6abijhxa5bzkc8cz X-Rspam-User: Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=cpX0d1pg; spf=none (imf29.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857988-824470 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The firmware will pre-accept the memory used to run the stub. But, the stub is responsible for accepting the memory into which it decompresses the main kernel. Accept memory just before decompression starts. The stub is also responsible for choosing a physical address in which to place the decompressed kernel image. The KASLR mechanism will randomize this physical address. Since the unaccepted memory region is relatively small, KASLR would be quite ineffective if it only used the pre-accepted area (EFI_CONVENTIONAL_MEMORY). Ensure that KASLR randomizes among the entire physical address space by also including EFI_UNACCEPTED_MEMOR Signed-off-by: Kirill A. 
Shutemov --- arch/x86/boot/compressed/Makefile | 2 +- arch/x86/boot/compressed/kaslr.c | 14 ++++++++++++-- arch/x86/boot/compressed/mem.c | 21 +++++++++++++++++++++ arch/x86/boot/compressed/misc.c | 9 +++++++++ arch/x86/include/asm/unaccepted_memory.h | 2 ++ 5 files changed, 45 insertions(+), 3 deletions(-) diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 7f672f7e2fea..b59007e57cbf 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -102,7 +102,7 @@ endif vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o -vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/mem.o +vmlinux-objs-$(CONFIG_UNACCEPTED_MEMORY) += $(obj)/bitmap.o $(obj)/find.o $(obj)/mem.o vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index 411b268bc0a2..59db90626042 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -725,10 +725,20 @@ process_efi_entries(unsigned long minimum, unsigned long image_size) * but in practice there's firmware where using that memory leads * to crashes. * - * Only EFI_CONVENTIONAL_MEMORY is guaranteed to be free. + * Only EFI_CONVENTIONAL_MEMORY and EFI_UNACCEPTED_MEMORY (if + * supported) are guaranteed to be free. */ - if (md->type != EFI_CONVENTIONAL_MEMORY) + + switch (md->type) { + case EFI_CONVENTIONAL_MEMORY: + break; + case EFI_UNACCEPTED_MEMORY: + if (IS_ENABLED(CONFIG_UNACCEPTED_MEMORY)) + break; continue; + default: + continue; + } if (efi_soft_reserve_enabled() && (md->attribute & EFI_MEMORY_SP)) diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c index 415df0d3bc81..b5058c975d26 100644 --- a/arch/x86/boot/compressed/mem.c +++ b/arch/x86/boot/compressed/mem.c @@ -3,12 +3,15 @@ #include "../cpuflags.h" #include "bitmap.h" #include "error.h" +#include "find.h" #include "math.h" #define PMD_SHIFT 21 #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) #define PMD_MASK (~(PMD_SIZE - 1)) +extern struct boot_params *boot_params; + static inline void __accept_memory(phys_addr_t start, phys_addr_t end) { /* Platform-specific memory-acceptance call goes here */ @@ -66,3 +69,21 @@ void process_unaccepted_memory(struct boot_params *params, u64 start, u64 end) bitmap_set((unsigned long *)params->unaccepted_memory, start / PMD_SIZE, (end - start) / PMD_SIZE); } + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long range_start, range_end; + unsigned long *unaccepted_memory; + unsigned long bitmap_size; + + unaccepted_memory = (unsigned long *)boot_params->unaccepted_memory; + range_start = start / PMD_SIZE; + bitmap_size = DIV_ROUND_UP(end, PMD_SIZE); + + for_each_set_bitrange_from(range_start, range_end, + unaccepted_memory, bitmap_size) { + __accept_memory(range_start * PMD_SIZE, range_end * PMD_SIZE); + bitmap_clear(unaccepted_memory, + range_start, range_end - range_start); + } +} diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c index fa8969fad011..285b37e28074 100644 --- a/arch/x86/boot/compressed/misc.c +++ b/arch/x86/boot/compressed/misc.c @@ -18,6 +18,7 @@ #include "../string.h" #include "../voffset.h" #include +#include /* * WARNING!! @@ -451,6 +452,14 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap, #endif debug_putstr("\nDecompressing Linux... 
"); + +#ifdef CONFIG_UNACCEPTED_MEMORY + if (boot_params->unaccepted_memory) { + debug_putstr("Accepting memory... "); + accept_memory(__pa(output), __pa(output) + needed_size); + } +#endif + __decompress(input_data, input_len, NULL, NULL, output, output_len, NULL, error); parse_elf(output); diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index df0736d32858..41fbfc798100 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -7,4 +7,6 @@ struct boot_params; void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); +void accept_memory(phys_addr_t start, phys_addr_t end); + #endif From patchwork Mon Apr 25 03:39:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825233 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE217C433FE for ; Mon, 25 Apr 2022 03:39:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9ABF76B00BC; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8E5BC6B00BB; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 70E3D6B00BA; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 5EA6F6B00B7 for ; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 3906527AC8 for ; Mon, 25 Apr 2022 03:39:52 +0000 (UTC) X-FDA: 79393997424.10.AF0692C Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by imf15.hostedemail.com (Postfix) with ESMTP id 9378AA0034 for ; Mon, 25 Apr 2022 03:39:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857991; x=1682393991; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=B6xQElVKXMmk7bVMoMJbz0U88Nts50RHxDgA813tPNs=; b=QTUZ9IyGJhzNpV/c7qA0QsZG9tmf17Yp52w6x7a3qGto6/dzvWl4r73Z qnB0j6ATzf6C5RXmSbDYTGzFCU8J7zZNHM1BDxpxuw1OshfFITt1ssndv 4FNv05vC5WBg5cVkDIXPjzQ+yBRhUM3NAw5X1x/QSUcOPO3ls5egTDSL1 duPtgoklJpFDnMt/H8+5DzYSMktzktuyEPZYFKnyXv23ChJa3vMVrnaF0 kdL0IqraMSmUNOyYhBBGN0vxgfGNhAgACIhD0Hn23X4NM/JwWZTsYYJvQ o8nAMJ0m4JUzue4SgM06eiPF5dR1wTIVxBZwnXxWpRZWebpvQeCu9oXPy A==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="265294812" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="265294812" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:50 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="871799384" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga005.fm.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id DC3785F2; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. 
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" , Mike Rapoport Subject: [PATCHv5 07/12] x86/mm: Reserve unaccepted memory bitmap Date: Mon, 25 Apr 2022 06:39:29 +0300 Message-Id: <20220425033934.68551-8-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=QTUZ9IyG; spf=none (imf15.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.115) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 9378AA0034 X-Stat-Signature: t3asksuhpns5ntekamktj99q5q3itz76 X-HE-Tag: 1650857987-792154 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A given page of memory can only be accepted once. The kernel has a need to accept memory both in the early decompression stage and during normal runtime. A bitmap used to communicate the acceptance state of each page between the decompression stage and normal runtime. boot_params is used to communicate location of the bitmap through out the boot. The bitmap is allocated and initially populated in EFI stub. Decompression stage accepts pages required for kernel/initrd and mark these pages accordingly in the bitmap. The main kernel picks up the bitmap from the same boot_params and uses it to determinate what has to be accepted on allocation. In the runtime kernel, reserve the bitmap's memory to ensure nothing overwrites it. The size of bitmap is determinated with e820__end_of_ram_pfn() which relies on setup_e820() marking unaccepted memory as E820_TYPE_RAM. Signed-off-by: Kirill A. 
Shutemov Acked-by: Mike Rapoport --- arch/x86/kernel/e820.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c index f267205f2d5a..22d1fe48dcba 100644 --- a/arch/x86/kernel/e820.c +++ b/arch/x86/kernel/e820.c @@ -1316,6 +1316,16 @@ void __init e820__memblock_setup(void) int i; u64 end; + /* Mark unaccepted memory bitmap reserved */ + if (boot_params.unaccepted_memory) { + unsigned long size; + + /* One bit per 2MB */ + size = DIV_ROUND_UP(e820__end_of_ram_pfn() * PAGE_SIZE, + PMD_SIZE * BITS_PER_BYTE); + memblock_reserve(boot_params.unaccepted_memory, size); + } + /* * The bootstrap memblock region count maximum is 128 entries * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries From patchwork Mon Apr 25 03:39:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825232 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3EB23C433F5 for ; Mon, 25 Apr 2022 03:39:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2BF746B00B6; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 16CAE6B00BA; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D58686B00B7; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id BA3326B00B9 for ; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 919EC623A8 for ; Mon, 25 Apr 2022 03:39:51 +0000 (UTC) X-FDA: 79393997382.25.03E0D9D Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by imf21.hostedemail.com (Postfix) with ESMTP id 6FF351C0040 for ; Mon, 25 Apr 2022 03:39:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857990; x=1682393990; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KOIQjRtDsnHvxdLscF3eIqrHEJe+rxpybj+mEJKy4T0=; b=hjiELzFT3S1j21Wy5f1zK1jLoka4ofeQlvQ8IDneWNpIR/fgJoSDZpKW +24aEuEscOxdgkJDaKFlOgxVSpz079k2wISVqEu9xU0wi/o8fECjF2XC9 dtlcyMHXF/mFbgQbhzFfIvQpHiUnwmkIMNwJnV5SZ8itpbi+6BHaDUiXd Nlt5QKkPAa4IxqWLjncGE3ES9wQ02VHgtKt8ahRWpoe6ISBPrTw3buQvp d1IrYzZawvYrb1sKIWKgZ+/cOM4K4swJgsEvrrN2vysGpqbB95MpLeO/x i4GI00+6qmcyl3+5AcVX3DtiL5uz1rEBiCrM7Z3y8EdWQ874gCHrxd/Yx g==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="245055706" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="245055706" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="616322297" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga008.fm.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id E965562E; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. 
Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 08/12] x86/mm: Provide helpers for unaccepted memory Date: Mon, 25 Apr 2022 06:39:30 +0300 Message-Id: <20220425033934.68551-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 6FF351C0040 X-Stat-Signature: ex5cfgqdwox57frq67nsey6tscc7jxc3 X-Rspam-User: Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=hjiELzFT; spf=none (imf21.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.136) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-HE-Tag: 1650857988-700539 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Core-mm requires few helpers to support unaccepted memory: - accept_memory() checks the range of addresses against the bitmap and accept memory if needed. - memory_is_unaccepted() check if anything within the range requires acceptance. Signed-off-by: Kirill A. 
Shutemov --- arch/x86/include/asm/page.h | 3 ++ arch/x86/include/asm/unaccepted_memory.h | 4 ++ arch/x86/mm/Makefile | 2 + arch/x86/mm/unaccepted_memory.c | 56 ++++++++++++++++++++++++ 4 files changed, 65 insertions(+) create mode 100644 arch/x86/mm/unaccepted_memory.c diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h index 9cc82f305f4b..df4ec3a988dc 100644 --- a/arch/x86/include/asm/page.h +++ b/arch/x86/include/asm/page.h @@ -19,6 +19,9 @@ struct page; #include + +#include + extern struct range pfn_mapped[]; extern int nr_pfn_mapped; diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h index 41fbfc798100..a59264ee0ab3 100644 --- a/arch/x86/include/asm/unaccepted_memory.h +++ b/arch/x86/include/asm/unaccepted_memory.h @@ -7,6 +7,10 @@ struct boot_params; void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num); +#ifdef CONFIG_UNACCEPTED_MEMORY + void accept_memory(phys_addr_t start, phys_addr_t end); +bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end); #endif +#endif diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index fe3d3061fc11..e327f83e6bbf 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -60,3 +60,5 @@ obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_amd.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_identity.o obj-$(CONFIG_AMD_MEM_ENCRYPT) += mem_encrypt_boot.o + +obj-$(CONFIG_UNACCEPTED_MEMORY) += unaccepted_memory.o diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c new file mode 100644 index 000000000000..1327f64d5205 --- /dev/null +++ b/arch/x86/mm/unaccepted_memory.c @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include + +#include +#include +#include + +/* Protects unaccepted memory bitmap */ +static DEFINE_SPINLOCK(unaccepted_memory_lock); + +void accept_memory(phys_addr_t start, phys_addr_t end) +{ + unsigned long *unaccepted_memory; + unsigned long flags; + unsigned long range_start, range_end; + + if (!boot_params.unaccepted_memory) + return; + + unaccepted_memory = __va(boot_params.unaccepted_memory); + range_start = start / PMD_SIZE; + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + for_each_set_bitrange_from(range_start, range_end, unaccepted_memory, + DIV_ROUND_UP(end, PMD_SIZE)) { + unsigned long len = range_end - range_start; + + /* Platform-specific memory-acceptance call goes here */ + panic("Cannot accept memory"); + bitmap_clear(unaccepted_memory, range_start, len); + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); +} + +bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end) +{ + unsigned long *unaccepted_memory = __va(boot_params.unaccepted_memory); + unsigned long flags; + bool ret = false; + + spin_lock_irqsave(&unaccepted_memory_lock, flags); + while (start < end) { + if (test_bit(start / PMD_SIZE, unaccepted_memory)) { + ret = true; + break; + } + + start += PMD_SIZE; + } + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); + + return ret; +} From patchwork Mon Apr 25 03:39:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DB0FC433FE for ; Mon, 25 Apr 2022 03:40:03 +0000 (UTC) 
Received: by kanga.kvack.org (Postfix) id 871DF6B00B9; Sun, 24 Apr 2022 23:39:54 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7D3426B00BA; Sun, 24 Apr 2022 23:39:54 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 69B236B00BB; Sun, 24 Apr 2022 23:39:54 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 573E66B00B9 for ; Sun, 24 Apr 2022 23:39:54 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 39315623AA for ; Mon, 25 Apr 2022 03:39:54 +0000 (UTC) X-FDA: 79393997508.16.F68CBD5 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by imf26.hostedemail.com (Postfix) with ESMTP id 1415A14002F for ; Mon, 25 Apr 2022 03:39:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857993; x=1682393993; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=PExZ2/SUMRFGw0fOWgXaC5jKJv8tYWn+UZUseswavOM=; b=kwtpBKiYPt6uFrD9d4VgTsabqqKIGMQfP5guTFBKjBSDFWwSFJE+BhEq 6jkLCjP4gPPw2PWwKm7oOyB0UvGipgFw4cQ2nGMd4HmDQdlEbiV9Cqqla NGSH1+L8KgyOzeBzFQ0ECkeO/bvxwGv8CqVhBdOHk/qklAYgoilFqskkN 0Uk6CEAyqlaNiPzsLo0QY+0uDW11LTUHvZLyKXhFSJhuBMwCSbnqdp7av erJmYyvE3Yzu0sMh34zQZmAaO8rlj2nLzT2X0PRXT/kwOP6HiRT+gcGD4 9vIi3UOVdsQbtJ8iXKdaowOhwwqkdJG7GtCSG8o6Gi9X+J9sziXLAu/jG g==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="247051026" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="247051026" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="649514097" Received: from black.fi.intel.com ([10.237.72.28]) by FMSMGA003.fm.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id 02D2B6E2; Mon, 25 Apr 2022 06:39:35 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. 
Shutemov" Subject: [PATCHv5 09/12] x86/tdx: Make _tdx_hypercall() and __tdx_module_call() available in boot stub Date: Mon, 25 Apr 2022 06:39:31 +0300 Message-Id: <20220425033934.68551-10-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=kwtpBKiY; spf=none (imf26.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.126) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 1415A14002F X-Stat-Signature: s3bcsccktnjwowh7g47xhy1fes3fp6hg X-HE-Tag: 1650857991-992942 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Memory acceptance requires a hypercall and one or multiple module calls. Make helpers for the calls available in boot stub. It has to accept memory where kernel image and initrd are placed. Signed-off-by: Kirill A. Shutemov --- arch/x86/coco/tdx/tdx.c | 26 ------------------ arch/x86/include/asm/shared/tdx.h | 45 +++++++++++++++++++++++++++++++ arch/x86/include/asm/tdx.h | 19 ------------- 3 files changed, 45 insertions(+), 45 deletions(-) diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index 03deb4d6920d..ddb60a87b426 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -12,14 +12,6 @@ #include #include -/* TDX module Call Leaf IDs */ -#define TDX_GET_INFO 1 -#define TDX_GET_VEINFO 3 -#define TDX_ACCEPT_PAGE 6 - -/* TDX hypercall Leaf IDs */ -#define TDVMCALL_MAP_GPA 0x10001 - /* MMIO direction */ #define EPT_READ 0 #define EPT_WRITE 1 @@ -34,24 +26,6 @@ #define VE_GET_PORT_NUM(e) ((e) >> 16) #define VE_IS_IO_STRING(e) ((e) & BIT(4)) -/* - * Wrapper for standard use of __tdx_hypercall with no output aside from - * return code. - */ -static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15) -{ - struct tdx_hypercall_args args = { - .r10 = TDX_HYPERCALL_STANDARD, - .r11 = fn, - .r12 = r12, - .r13 = r13, - .r14 = r14, - .r15 = r15, - }; - - return __tdx_hypercall(&args, 0); -} - /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void) { diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h index e53f26228fbb..956ced04c3be 100644 --- a/arch/x86/include/asm/shared/tdx.h +++ b/arch/x86/include/asm/shared/tdx.h @@ -13,6 +13,14 @@ #define TDX_CPUID_LEAF_ID 0x21 #define TDX_IDENT "IntelTDX " +/* TDX module Call Leaf IDs */ +#define TDX_GET_INFO 1 +#define TDX_GET_VEINFO 3 +#define TDX_ACCEPT_PAGE 6 + +/* TDX hypercall Leaf IDs */ +#define TDVMCALL_MAP_GPA 0x10001 + #ifndef __ASSEMBLY__ /* @@ -33,8 +41,45 @@ struct tdx_hypercall_args { /* Used to request services from the VMM */ u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags); +/* + * Wrapper for standard use of __tdx_hypercall with no output aside from + * return code. 
+ */ +static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15) +{ + struct tdx_hypercall_args args = { + .r10 = TDX_HYPERCALL_STANDARD, + .r11 = fn, + .r12 = r12, + .r13 = r13, + .r14 = r14, + .r15 = r15, + }; + + return __tdx_hypercall(&args, 0); +} + + /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void); +/* + * Used in __tdx_module_call() to gather the output registers' values of the + * TDCALL instruction when requesting services from the TDX module. This is a + * software only structure and not part of the TDX module/VMM ABI + */ +struct tdx_module_output { + u64 rcx; + u64 rdx; + u64 r8; + u64 r9; + u64 r10; + u64 r11; +}; + +/* Used to communicate with the TDX module */ +u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + struct tdx_module_output *out); + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_SHARED_TDX_H */ diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h index 020c81a7c729..d9106d3e89f8 100644 --- a/arch/x86/include/asm/tdx.h +++ b/arch/x86/include/asm/tdx.h @@ -20,21 +20,6 @@ #ifndef __ASSEMBLY__ -/* - * Used to gather the output registers values of the TDCALL and SEAMCALL - * instructions when requesting services from the TDX module. - * - * This is a software only structure and not part of the TDX module/VMM ABI. - */ -struct tdx_module_output { - u64 rcx; - u64 rdx; - u64 r8; - u64 r9; - u64 r10; - u64 r11; -}; - /* * Used by the #VE exception handler to gather the #VE exception * info from the TDX module. This is a software only structure @@ -55,10 +40,6 @@ struct ve_info { void __init tdx_early_init(void); -/* Used to communicate with the TDX module */ -u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, - struct tdx_module_output *out); - void tdx_get_ve_info(struct ve_info *ve); bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve); From patchwork Mon Apr 25 03:39:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825234 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D0DBC433F5 for ; Mon, 25 Apr 2022 03:40:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DCC2C6B00B7; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D33186B00BA; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9AA746B00B9; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 8252E6B00B7 for ; Sun, 24 Apr 2022 23:39:52 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 5D94722003 for ; Mon, 25 Apr 2022 03:39:52 +0000 (UTC) X-FDA: 79393997424.07.575BFD1 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by imf20.hostedemail.com (Postfix) with ESMTP id 24C651C004C for ; Mon, 25 Apr 2022 03:39:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857991; x=1682393991; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=UAkTjqMUWj3U88352d4TWKeyPDjEnADBaQUiU5n9fSE=; b=holCvxTJhgyvAylbTLOEmm0IglNqBu/VemNcVnlHxng2RSXLl1EI2mG6 pTSqsy1V6hhcKhYDxATbdH9AoMnSEcFd4XkdkGPU8arRIF/Csr0GO011n PzgMeUt8V1MWReQR3J7+xdgOTPrUsk6y86BSFXUIBNn58fMMHLOuOnkfU C6yvxhnXWRYwDyXGa7wpwYy3IB39N2a4H22ehUUeKM9wttV0qZ/7qBIp/ v/LMLQlo6ev+Sc4ztsoghpxOmy7ipW/0ixvYeNe/L5fu2/rRgEuoCwzMl 8uzZPILBBPRjZhGHyabv4Z0B5YXIxrNV74X+fUXG+MlhY0VOS6ocd/Sb+ Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="263977727" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="263977727" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:50 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="557501602" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga007.jf.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id 0FAE86EE; Mon, 25 Apr 2022 06:39:36 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 10/12] x86/tdx: Unaccepted memory support Date: Mon, 25 Apr 2022 06:39:32 +0300 Message-Id: <20220425033934.68551-11-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=holCvxTJ; spf=none (imf20.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.120) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 24C651C004C X-Stat-Signature: twf39mfsraft6iw3n644wa48xf8tbzfs X-HE-Tag: 1650857988-235227 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: All preparations are complete. Hookup TDX-specific code to accept memory. Accepting the memory is the same process as converting memory from shared to private: kernel notifies VMM with MAP_GPA hypercall and then accept pages with ACCEPT_PAGE module call. The implementation in core kernel uses tdx_enc_status_changed(). It already used for converting memory to shared and back for I/O transactions. Boot stub provides own implementation of tdx_accept_memory(). It is similar in structure to tdx_enc_status_changed(), but only cares about converting memory to private. Signed-off-by: Kirill A. 
Shutemov --- arch/x86/Kconfig | 1 + arch/x86/boot/compressed/mem.c | 24 ++++++++- arch/x86/boot/compressed/tdx.c | 85 +++++++++++++++++++++++++++++++ arch/x86/coco/tdx/tdx.c | 31 +++++++---- arch/x86/include/asm/shared/tdx.h | 2 + arch/x86/mm/unaccepted_memory.c | 9 +++- 6 files changed, 141 insertions(+), 11 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 7021ec725dd3..e4c31dbea6d7 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -885,6 +885,7 @@ config INTEL_TDX_GUEST select ARCH_HAS_CC_PLATFORM select X86_MEM_ENCRYPT select X86_MCE + select UNACCEPTED_MEMORY help Support running as a guest under Intel TDX. Without this support, the guest kernel can not boot or run under TDX. diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c index b5058c975d26..539fff27de49 100644 --- a/arch/x86/boot/compressed/mem.c +++ b/arch/x86/boot/compressed/mem.c @@ -5,6 +5,8 @@ #include "error.h" #include "find.h" #include "math.h" +#include "tdx.h" +#include #define PMD_SHIFT 21 #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) @@ -12,10 +14,30 @@ extern struct boot_params *boot_params; +static bool is_tdx_guest(void) +{ + static bool once; + static bool is_tdx; + + if (!once) { + u32 eax, sig[3]; + + cpuid_count(TDX_CPUID_LEAF_ID, 0, &eax, + &sig[0], &sig[2], &sig[1]); + is_tdx = !memcmp(TDX_IDENT, sig, sizeof(sig)); + once = true; + } + + return is_tdx; +} + static inline void __accept_memory(phys_addr_t start, phys_addr_t end) { /* Platform-specific memory-acceptance call goes here */ - error("Cannot accept memory"); + if (is_tdx_guest()) + tdx_accept_memory(start, end); + else + error("Cannot accept memory"); } /* diff --git a/arch/x86/boot/compressed/tdx.c b/arch/x86/boot/compressed/tdx.c index 918a7606f53c..57fd2bf28484 100644 --- a/arch/x86/boot/compressed/tdx.c +++ b/arch/x86/boot/compressed/tdx.c @@ -3,12 +3,14 @@ #include "../cpuflags.h" #include "../string.h" #include "../io.h" +#include "align.h" #include "error.h" #include #include #include +#include /* Called from __tdx_hypercall() for unrecoverable failure */ void __tdx_hypercall_failed(void) @@ -75,3 +77,86 @@ void early_tdx_detect(void) pio_ops.f_outb = tdx_outb; pio_ops.f_outw = tdx_outw; } + +enum pg_level { + PG_LEVEL_4K, + PG_LEVEL_2M, + PG_LEVEL_1G, +}; + +#define PTE_SHIFT 9 + +static bool try_accept_one(phys_addr_t *start, unsigned long len, + enum pg_level pg_level) +{ + unsigned long accept_size = PAGE_SIZE << (pg_level * PTE_SHIFT); + u64 tdcall_rcx; + u8 page_size; + + if (!IS_ALIGNED(*start, accept_size)) + return false; + + if (len < accept_size) + return false; + + /* + * Pass the page physical address to the TDX module to accept the + * pending, private page. + * + * Bits 2:0 of RCX encode page size: 0 - 4K, 1 - 2M, 2 - 1G. + */ + switch (pg_level) { + case PG_LEVEL_4K: + page_size = 0; + break; + case PG_LEVEL_2M: + page_size = 1; + break; + case PG_LEVEL_1G: + page_size = 2; + break; + default: + return false; + } + + tdcall_rcx = *start | page_size; + if (__tdx_module_call(TDX_ACCEPT_PAGE, tdcall_rcx, 0, 0, 0, NULL)) + return false; + + *start += accept_size; + return true; +} + +void tdx_accept_memory(phys_addr_t start, phys_addr_t end) +{ + /* + * Notify the VMM about page mapping conversion. 
More info about ABI + * can be found in TDX Guest-Host-Communication Interface (GHCI), + * section "TDG.VP.VMCALL" + */ + if (_tdx_hypercall(TDVMCALL_MAP_GPA, start, end - start, 0, 0)) + error("Accepting memory failed\n"); + + /* + * For shared->private conversion, accept the page using + * TDX_ACCEPT_PAGE TDX module call. + */ + while (start < end) { + unsigned long len = end - start; + + /* + * Try larger accepts first. It gives chance to VMM to keep + * 1G/2M SEPT entries where possible and speeds up process by + * cutting number of hypercalls (if successful). + */ + + if (try_accept_one(&start, len, PG_LEVEL_1G)) + continue; + + if (try_accept_one(&start, len, PG_LEVEL_2M)) + continue; + + if (!try_accept_one(&start, len, PG_LEVEL_4K)) + error("Accepting memory failed\n"); + } +} diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c index ddb60a87b426..ab4deb897942 100644 --- a/arch/x86/coco/tdx/tdx.c +++ b/arch/x86/coco/tdx/tdx.c @@ -580,16 +580,9 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len, return true; } -/* - * Inform the VMM of the guest's intent for this physical page: shared with - * the VMM or private to the guest. The VMM is expected to change its mapping - * of the page in response. - */ -static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc) +static bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end, + bool enc) { - phys_addr_t start = __pa(vaddr); - phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE); - if (!enc) { /* Set the shared (decrypted) bits: */ start |= cc_mkdec(0); @@ -634,6 +627,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc) return true; } +void tdx_accept_memory(phys_addr_t start, phys_addr_t end) +{ + if (!tdx_enc_status_changed_phys(start, end, true)) + panic("Accepting memory failed\n"); +} + +/* + * Inform the VMM of the guest's intent for this physical page: shared with + * the VMM or private to the guest. The VMM is expected to change its mapping + * of the page in response. 
+ */ +static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc) +{ + phys_addr_t start = __pa(vaddr); + phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE); + + return tdx_enc_status_changed_phys(start, end, enc); +} + void __init tdx_early_init(void) { u64 cc_mask; @@ -645,6 +657,7 @@ void __init tdx_early_init(void) return; setup_force_cpu_cap(X86_FEATURE_TDX_GUEST); + setup_clear_cpu_cap(X86_FEATURE_MCE); cc_set_vendor(CC_VENDOR_INTEL); cc_mask = get_cc_mask(); diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h index 956ced04c3be..97534c334473 100644 --- a/arch/x86/include/asm/shared/tdx.h +++ b/arch/x86/include/asm/shared/tdx.h @@ -81,5 +81,7 @@ struct tdx_module_output { u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, struct tdx_module_output *out); +void tdx_accept_memory(phys_addr_t start, phys_addr_t end); + #endif /* !__ASSEMBLY__ */ #endif /* _ASM_X86_SHARED_TDX_H */ diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c index 1327f64d5205..de0790af1824 100644 --- a/arch/x86/mm/unaccepted_memory.c +++ b/arch/x86/mm/unaccepted_memory.c @@ -6,6 +6,7 @@ #include #include +#include #include /* Protects unaccepted memory bitmap */ @@ -29,7 +30,13 @@ void accept_memory(phys_addr_t start, phys_addr_t end) unsigned long len = range_end - range_start; /* Platform-specific memory-acceptance call goes here */ - panic("Cannot accept memory"); + if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) { + tdx_accept_memory(range_start * PMD_SIZE, + range_end * PMD_SIZE); + } else { + panic("Cannot accept memory"); + } + bitmap_clear(unaccepted_memory, range_start, len); } spin_unlock_irqrestore(&unaccepted_memory_lock, flags); From patchwork Mon Apr 25 03:39:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825236 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9C18FC433F5 for ; Mon, 25 Apr 2022 03:40:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6C5A16B00BA; Sun, 24 Apr 2022 23:39:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 674456B00BD; Sun, 24 Apr 2022 23:39:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4EEEA6B00BE; Sun, 24 Apr 2022 23:39:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 38DC56B00BA for ; Sun, 24 Apr 2022 23:39:55 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 1BDC727AFF for ; Mon, 25 Apr 2022 03:39:55 +0000 (UTC) X-FDA: 79393997550.03.03F8A20 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by imf26.hostedemail.com (Postfix) with ESMTP id E10FA14002D for ; Mon, 25 Apr 2022 03:39:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857994; x=1682393994; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=5hPx0r3OWxS+lz/D+l0BiGQBbeJJHb52sFjAF/68dVQ=; b=Bpd6NcKA0XJRrq2U5Jl6i/oyVXFhvsM76P4Lwcljbhqf60OiH3TVVl+1 
W1oiy3x1Cs8hS6wlE+Uk2/6fi+mBS25xH8wO3DnYRMmklRYQR3I+fV++4 7i1jbZnW1D+aqk0Y7yIgRi/pr1uXYU1OoVsXRqShiba//mztPlO8nFM1C X7kx0MIQoGyYklb+iRVVpkiOSznOd7Kk5gpKhAw8VGD44lOJj7A/vvmlz X7B/nSxJ/hbFQpZ/bQv0Taz9po55XLtQUUqybWiWotTO3LhRsmDnVttvA kJ2b03AqqIl0V4My9m7j7O3HxMGk3fIcGsVsBrSj47h99GdhykDvMkCbK w==; X-IronPort-AV: E=McAfee;i="6400,9594,10327"; a="247051029" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="247051029" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="649514099" Received: from black.fi.intel.com ([10.237.72.28]) by FMSMGA003.fm.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id 1D2DC739; Mon, 25 Apr 2022 06:39:36 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 11/12] mm/vmstat: Add counter for memory accepting Date: Mon, 25 Apr 2022 06:39:33 +0300 Message-Id: <20220425033934.68551-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=Bpd6NcKA; spf=none (imf26.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.126) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: E10FA14002D X-Stat-Signature: dtexqis3k3uregzu1yo7yn1b3t5jbzxe X-HE-Tag: 1650857992-636471 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The counter increased every time kernel accepts a memory region. The counter allows to see if memory acceptation is still ongoing and contributes to memory allocation latency. Signed-off-by: Kirill A. 
Shutemov --- arch/x86/mm/unaccepted_memory.c | 1 + include/linux/vm_event_item.h | 3 +++ mm/vmstat.c | 3 +++ 3 files changed, 7 insertions(+) diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c index de0790af1824..65cd49b93c50 100644 --- a/arch/x86/mm/unaccepted_memory.c +++ b/arch/x86/mm/unaccepted_memory.c @@ -38,6 +38,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end) } bitmap_clear(unaccepted_memory, range_start, len); + count_vm_events(ACCEPT_MEMORY, len * PMD_SIZE / PAGE_SIZE); } spin_unlock_irqrestore(&unaccepted_memory_lock, flags); } diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index 16a0a4fd000b..6a468164a2f9 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -136,6 +136,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, DIRECT_MAP_LEVEL3_SPLIT, +#endif +#ifdef CONFIG_UNACCEPTED_MEMORY + ACCEPT_MEMORY, #endif NR_VM_EVENT_ITEMS }; diff --git a/mm/vmstat.c b/mm/vmstat.c index b75b1a64b54c..4c9197f32406 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1397,6 +1397,9 @@ const char * const vmstat_text[] = { "direct_map_level2_splits", "direct_map_level3_splits", #endif +#ifdef CONFIG_UNACCEPTED_MEMORY + "accept_memory", +#endif #endif /* CONFIG_VM_EVENT_COUNTERS || CONFIG_MEMCG */ }; #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */ From patchwork Mon Apr 25 03:39:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com" X-Patchwork-Id: 12825230 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D3D7C433EF for ; Mon, 25 Apr 2022 03:39:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8F42C6B00B4; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 75FCD6B00B5; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 625446B00B6; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 505B46B00B4 for ; Sun, 24 Apr 2022 23:39:51 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 2489822E0 for ; Mon, 25 Apr 2022 03:39:51 +0000 (UTC) X-FDA: 79393997382.14.C0B19DF Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by imf02.hostedemail.com (Postfix) with ESMTP id 148C58003D for ; Mon, 25 Apr 2022 03:39:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650857990; x=1682393990; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=UW/BHigZ6spgrKRAqG9bWiHedWVnA5vrB2UptEivfzc=; b=lZ1VlKZ3IcsMNcTi0REsQEHmqpt46RlFJFOel13qQO6qzR06F+oJiRiC mdkaEEc1nx4G/bv9krdwovhA9gfA73iaXun94+munhjVczUpcHT84zxIK 3SfZh/0FdN6QribOueY1j4vF2avj1WNm2MB6sYecerPdUxuHEeNHMjlLg IbUON/nyTp4t/1TDG8he63W+wXYcFH4ZPtQy72ncO8oaHduRjd3RrkNQt HTnnRudkwEq0+fayq3DJZsthzSxI5ynxIAUxTpbL/MXZvS4PWTpE90Oom z5uN9UDIg6c35bCrb18KAHGd/0/gQpKN1NJU8ziTm53+7+3ma4n847ozE w==; X-IronPort-AV: 
E=McAfee;i="6400,9594,10327"; a="351576538" X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="351576538" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Apr 2022 20:39:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,287,1643702400"; d="scan'208";a="659959901" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga002.fm.intel.com with ESMTP; 24 Apr 2022 20:39:43 -0700 Received: by black.fi.intel.com (Postfix, from userid 1000) id 288BC7E0; Mon, 25 Apr 2022 06:39:36 +0300 (EEST) From: "Kirill A. Shutemov" To: Borislav Petkov , Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel Cc: Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [PATCHv5 12/12] x86/mm: Report unaccepted memory in /proc/meminfo Date: Mon, 25 Apr 2022 06:39:34 +0300 Message-Id: <20220425033934.68551-13-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=lZ1VlKZ3; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf02.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 192.55.52.43) smtp.mailfrom=kirill.shutemov@linux.intel.com X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 148C58003D X-Rspam-User: X-Stat-Signature: ei1j4osccbqxgfccsexybqxwx8sqjz6d X-HE-Tag: 1650857987-790779 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Track amount of unaccepted memory and report it in /proc/meminfo. Signed-off-by: Kirill A. 
---
 arch/x86/include/asm/set_memory.h        |  2 ++
 arch/x86/include/asm/unaccepted_memory.h |  9 ++++++
 arch/x86/mm/init.c                       |  8 ++++++
 arch/x86/mm/pat/set_memory.c             |  2 +-
 arch/x86/mm/unaccepted_memory.c          | 36 +++++++++++++++++++++++-
 5 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 78ca53512486..e467f3941d22 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -86,6 +86,8 @@ bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
 
+void direct_map_meminfo(struct seq_file *m);
+
 #ifdef CONFIG_X86_64
 /*
  * Prevent speculative access to the page by either unmapping
diff --git a/arch/x86/include/asm/unaccepted_memory.h b/arch/x86/include/asm/unaccepted_memory.h
index a59264ee0ab3..7c93661152a9 100644
--- a/arch/x86/include/asm/unaccepted_memory.h
+++ b/arch/x86/include/asm/unaccepted_memory.h
@@ -3,7 +3,10 @@
 #ifndef _ASM_X86_UNACCEPTED_MEMORY_H
 #define _ASM_X86_UNACCEPTED_MEMORY_H
 
+#include
+
 struct boot_params;
+struct seq_file;
 
 void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num);
@@ -12,5 +15,11 @@ void process_unaccepted_memory(struct boot_params *params, u64 start, u64 num);
 void accept_memory(phys_addr_t start, phys_addr_t end);
 bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end);
+void unaccepted_meminfo(struct seq_file *m);
+
+#else
+
+static inline void unaccepted_meminfo(struct seq_file *m) {}
+
 #endif
 #endif
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d8cfce221275..7e92a9d93994 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -1065,3 +1065,11 @@ unsigned long max_swapfile_size(void)
 	return pages;
 }
 #endif
+
+#ifdef CONFIG_PROC_FS
+void arch_report_meminfo(struct seq_file *m)
+{
+	direct_map_meminfo(m);
+	unaccepted_meminfo(m);
+}
+#endif
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index abf5ed76e4b7..2880ba01451c 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -105,7 +105,7 @@ static void split_page_count(int level)
 	direct_pages_count[level - 1] += PTRS_PER_PTE;
 }
 
-void arch_report_meminfo(struct seq_file *m)
+void direct_map_meminfo(struct seq_file *m)
 {
 	seq_printf(m, "DirectMap4k:    %8lu kB\n",
 			direct_pages_count[PG_LEVEL_4K] << 2);
diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
index 65cd49b93c50..66a6c529bf31 100644
--- a/arch/x86/mm/unaccepted_memory.c
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -3,14 +3,17 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
 #include
 
-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and nr_unaccepted */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
+static unsigned long nr_unaccepted;
 
 void accept_memory(phys_addr_t start, phys_addr_t end)
 {
@@ -39,6 +42,12 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 
 		bitmap_clear(unaccepted_memory, range_start, len);
 		count_vm_events(ACCEPT_MEMORY, len * PMD_SIZE / PAGE_SIZE);
+
+		/* In early boot nr_unaccepted is not yet initialized */
+		if (nr_unaccepted) {
+			WARN_ON(nr_unaccepted < len);
+			nr_unaccepted -= len;
+		}
 	}
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
 }
@@ -62,3 +71,28 @@ bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end)
 
 	return ret;
 }
+
+void unaccepted_meminfo(struct seq_file *m)
+{
+	seq_printf(m, "UnacceptedMem:  %8lu kB\n",
+		   (READ_ONCE(nr_unaccepted) * PMD_SIZE) >> 10);
+}
+
+static int __init unaccepted_meminfo_init(void)
+{
+	unsigned long *unaccepted_memory;
+	unsigned long flags, bitmap_size;
+
+	if (!boot_params.unaccepted_memory)
+		return 0;
+
+	bitmap_size = e820__end_of_ram_pfn() * PAGE_SIZE / PMD_SIZE;
+	unaccepted_memory = __va(boot_params.unaccepted_memory);
+
+	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+	nr_unaccepted = bitmap_weight(unaccepted_memory, bitmap_size);
+	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+	return 0;
+}
+fs_initcall(unaccepted_meminfo_init);
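For a feel of the arithmetic behind unaccepted_meminfo_init() and unaccepted_meminfo(): the bitmap has one bit per 2 MiB range of RAM, bitmap_weight() counts how many ranges are still unaccepted, and the reported value is that count times PMD_SIZE, in kB. Below is a small stand-alone model, illustration only and not kernel code; the local bitmap_weight() helper and the sample bitmap are made up for the example, and it assumes 64-bit longs and a 2 MiB PMD_SIZE:

	/* Stand-alone illustration of the one-bit-per-2MiB accounting above. */
	#include <stdio.h>

	#define PMD_SIZE (2UL << 20)	/* 2 MiB per bitmap bit (x86-64, 4K pages) */

	static unsigned long bitmap_weight(const unsigned long *map, unsigned long bits)
	{
		unsigned long i, w = 0;

		for (i = 0; i < bits; i++)
			if (map[i / 64] & (1UL << (i % 64)))
				w++;
		return w;
	}

	int main(void)
	{
		/* Pretend the first 64 of 128 2 MiB ranges are still unaccepted. */
		unsigned long bitmap[2] = { ~0UL, 0 };
		unsigned long nr_unaccepted = bitmap_weight(bitmap, 128);

		/* 64 ranges * 2 MiB = 131072 kB */
		printf("UnacceptedMem:  %8lu kB\n", (nr_unaccepted * PMD_SIZE) >> 10);
		return 0;
	}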