From patchwork Sun Jan 30 16:45:48 2022
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 12729932
From: "Kirill A. Shutemov"
To: rppt@kernel.org
Cc: ak@linux.intel.com, akpm@linux-foundation.org, ardb@kernel.org,
    bp@alien8.de, brijesh.singh@amd.com, dave.hansen@intel.com,
    david@redhat.com, dfaggioli@suse.com, jroedel@suse.de,
    kirill.shutemov@linux.intel.com, linux-coco@lists.linux.dev,
    linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, luto@kernel.org, mingo@redhat.com,
    pbonzini@redhat.com, peterz@infradead.org, rientjes@google.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com,
    tglx@linutronix.de, thomas.lendacky@amd.com, varad.gautam@suse.com,
    vbabka@suse.cz, x86@kernel.org, Mike Rapoport
Subject: [PATCHv3.1 1/7] mm: Add support for unaccepted memory
Date: Sun, 30 Jan 2022 19:45:48 +0300
Message-Id: <20220130164548.40417-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.34.1
UEFI Specification version 2.9 introduces the concept of memory
acceptance. Some Virtual Machine platforms, such as Intel TDX or AMD
SEV-SNP, require memory to be accepted before it can be used by the
guest. Acceptance happens via a protocol specific to the Virtual
Machine platform.

Accepting memory is costly and it makes the VMM allocate memory for the
accepted guest physical address range. It's better to postpone memory
acceptance until the memory is needed. It lowers boot time and reduces
memory overhead.

Support for such memory requires a few changes in core-mm code:

  - memblock has to accept memory on allocation;

  - page allocator has to accept memory on the first allocation of the
    page;

The memblock change is trivial. The page allocator is modified to
accept pages on the first allocation. PageBuddyUnaccepted() is used to
indicate that the page requires acceptance.

The kernel only needs to accept memory once after boot, so during boot
and the warm-up phase there will be a lot of memory acceptance. After
things settle down, the only cost of the feature is a couple of checks
for PageBuddyUnaccepted() in the alloc and free paths. The check refers
to a hot variable (it also encodes PageBuddy()), so it is cheap and not
visible in profiles.

An architecture has to provide three helpers if it wants to support
unaccepted memory:

 - accept_memory() makes a range of physical addresses accepted.

 - maybe_mark_page_unaccepted() marks a page PageBuddyUnaccepted() if
   it requires acceptance. Used during boot to put pages on free lists.

 - accept_page() makes a page accepted and clears
   PageBuddyUnaccepted().

Signed-off-by: Kirill A. Shutemov
Acked-by: Mike Rapoport # memblock
Acked-by: David Hildenbrand
---
 include/linux/page-flags.h | 27 +++++++++++++++++++++++++++
 mm/internal.h              | 15 +++++++++++++++
 mm/memblock.c              |  9 +++++++++
 mm/page_alloc.c            | 23 ++++++++++++++++++++++-
 4 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 1c3b6e5c8bfd..1bdc6b422207 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -871,6 +871,18 @@ static __always_inline void __ClearPage##uname(struct page *page)	\
 	page->page_type |= PG_##lname;				\
 }
 
+#define PAGE_TYPE_OPS_FALSE(uname)					\
+static __always_inline int Page##uname(struct page *page)		\
+{									\
+	return false;							\
+}									\
+static __always_inline void __SetPage##uname(struct page *page)		\
+{									\
+}									\
+static __always_inline void __ClearPage##uname(struct page *page)	\
+{									\
+}
+
 /*
  * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
@@ -901,6 +913,21 @@ PAGE_TYPE_OPS(Buddy, buddy)
  */
 PAGE_TYPE_OPS(Offline, offline)
 
+/*
+ * PageBuddyUnaccepted() indicates that the page has to be "accepted"
+ * before it can be used. The page allocator has to call accept_page()
+ * before returning the page to the caller.
+ *
+ * PageBuddyUnaccepted() is encoded with the same bit as PageOffline().
+ * PageOffline() pages are never on a free list of the buddy allocator,
+ * so there's no conflict
+ */
+#ifdef CONFIG_UNACCEPTED_MEMORY
+PAGE_TYPE_OPS(BuddyUnaccepted, offline)
+#else
+PAGE_TYPE_OPS_FALSE(BuddyUnaccepted)
+#endif
+
 extern void page_offline_freeze(void);
 extern void page_offline_thaw(void);
 extern void page_offline_begin(void);
diff --git a/mm/internal.h b/mm/internal.h
index d80300392a19..26e5d7cb6aff 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -718,4 +718,19 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		      unsigned long addr, int page_nid, int *flags);
 
+#ifndef CONFIG_UNACCEPTED_MEMORY
+static inline void maybe_mark_page_unaccepted(struct page *page,
+					      unsigned int order)
+{
+}
+
+static inline void accept_page(struct page *page, unsigned int order)
+{
+}
+
+static inline void accept_memory(phys_addr_t start, phys_addr_t end)
+{
+}
+#endif
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index 1018e50566f3..6c109b3b2a02 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1400,6 +1400,15 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	 */
 	kmemleak_alloc_phys(found, size, 0, 0);
 
+	/*
+	 * Some Virtual Machine platforms, such as Intel TDX or AMD SEV-SNP,
+	 * require memory to be accepted before it can be used by the
+	 * guest.
+	 *
+	 * Accept the memory of the allocated buffer.
+	 */
+	accept_memory(found, found + size);
+
 	return found;
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589febc6d31..27b9bd20e675 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1077,6 +1077,7 @@ static inline void __free_one_page(struct page *page,
 	unsigned int max_order;
 	struct page *buddy;
 	bool to_tail;
+	bool unaccepted = PageBuddyUnaccepted(page);
 
 	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
 
@@ -1110,6 +1111,10 @@ static inline void __free_one_page(struct page *page,
 			clear_page_guard(zone, buddy, order, migratetype);
 		else
 			del_page_from_free_list(buddy, zone, order);
+
+		if (PageBuddyUnaccepted(buddy))
+			unaccepted = true;
+
 		combined_pfn = buddy_pfn & pfn;
 		page = page + (combined_pfn - pfn);
 		pfn = combined_pfn;
@@ -1143,6 +1148,10 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_buddy_order(page, order);
 
+	/* Mark page unaccepted if any of merged pages were unaccepted */
+	if (unaccepted)
+		__SetPageBuddyUnaccepted(page);
+
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
 	else if (is_shuffle_order(order))
@@ -1168,7 +1177,8 @@ static inline void __free_one_page(struct page *page,
 static inline bool page_expected_state(struct page *page,
 					unsigned long check_flags)
 {
-	if (unlikely(atomic_read(&page->_mapcount) != -1))
+	if (unlikely(atomic_read(&page->_mapcount) != -1) &&
+	    !PageBuddyUnaccepted(page))
 		return false;
 
 	if (unlikely((unsigned long)page->mapping |
@@ -1749,6 +1759,8 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
+
+	maybe_mark_page_unaccepted(page, order);
 	__free_pages_core(page, order);
 }
 
@@ -1838,10 +1850,12 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == pageblock_nr_pages &&
 	    (pfn & (pageblock_nr_pages - 1)) == 0) {
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+		maybe_mark_page_unaccepted(page, pageblock_order);
 		__free_pages_core(page, pageblock_order);
 		return;
 	}
 
+	accept_memory(pfn << PAGE_SHIFT, (pfn + nr_pages) << PAGE_SHIFT);
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if ((pfn & (pageblock_nr_pages - 1)) == 0)
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
@@ -2312,6 +2326,10 @@ static inline void expand(struct zone *zone, struct page *page,
 		if (set_page_guard(zone, &page[size], high, migratetype))
 			continue;
 
+		/* Transfer PageBuddyUnaccepted() to the newly split pages */
+		if (PageBuddyUnaccepted(page))
+			__SetPageBuddyUnaccepted(&page[size]);
+
 		add_to_free_list(&page[size], zone, high, migratetype);
 		set_buddy_order(&page[size], high);
 	}
@@ -2408,6 +2426,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 */
 	kernel_unpoison_pages(page, 1 << order);
 
+	if (PageBuddyUnaccepted(page))
+		accept_page(page, order);
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_alloc_pages and kernel_init_free_pages must be

From patchwork Sun Jan 30 16:48:23 2022
X-Patchwork-Submitter: "Kirill A.
Shutemov"
X-Patchwork-Id: 12729933
From: "Kirill A. Shutemov"
To: rppt@kernel.org
Cc: ak@linux.intel.com, akpm@linux-foundation.org, ardb@kernel.org,
    bp@alien8.de, brijesh.singh@amd.com, dave.hansen@intel.com,
    david@redhat.com, dfaggioli@suse.com, jroedel@suse.de,
    kirill.shutemov@linux.intel.com, linux-coco@lists.linux.dev,
    linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, luto@kernel.org, mingo@redhat.com,
    pbonzini@redhat.com, peterz@infradead.org, rientjes@google.com,
    sathyanarayanan.kuppuswamy@linux.intel.com, seanjc@google.com,
    tglx@linutronix.de, thomas.lendacky@amd.com, varad.gautam@suse.com,
    vbabka@suse.cz, x86@kernel.org, Mike Rapoport
Subject: [PATCHv3.1 5/7] x86/mm: Reserve unaccepted memory bitmap
Date: Sun, 30 Jan 2022 19:48:23 +0300
Message-Id: <20220130164823.40470-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.34.1

A given page of memory can only be accepted once. The kernel has to
accept memory both in the early decompression stage and during normal
runtime.

A bitmap is used to communicate the acceptance state of each page
between the decompression stage and normal runtime. This eliminates
the possibility of attempting to double-accept a page.

The bitmap is allocated in the EFI stub. The decompression stage
updates the state of the pages used for the kernel and initrd, and
hands the bitmap over to the main kernel image via boot_params.

In the runtime kernel, reserve the bitmap's memory to ensure nothing
overwrites it.

Signed-off-by: Kirill A. Shutemov
Acked-by: Mike Rapoport
---
 arch/x86/kernel/e820.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index bc0657f0deed..3905bd1ca41d 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1297,6 +1297,16 @@ void __init e820__memblock_setup(void)
 	int i;
 	u64 end;
 
+	/* Mark unaccepted memory bitmap reserved */
+	if (boot_params.unaccepted_memory) {
+		unsigned long size;
+
+		/* One bit per 2MB */
+		size = DIV_ROUND_UP(e820__end_of_ram_pfn() * PAGE_SIZE,
+				    PMD_SIZE * BITS_PER_BYTE);
+		memblock_reserve(boot_params.unaccepted_memory, size);
+	}
+
 	/*
 	 * The bootstrap memblock region count maximum is 128 entries
 	 * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries