From patchwork Thu Apr 15 08:39:57 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12204651
From: Muchun Song
Subject: [PATCH v20 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
Date: Thu, 15 Apr 2021 16:39:57 +0800
Message-Id: <20210415084005.25049-2-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>
References: <20210415084005.25049-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into its own file, bootmem_info.c. A later patch will use {get,put}_page_bootmem() to initialize the struct pages backing vmemmap pages and to free vmemmap pages back to the buddy allocator, so these helpers are also moved out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is pure code movement with no functional change.
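As a minimal sketch of how the two exported helpers pair up (the wrapper names mark_bootmem_pages()/release_bootmem_pages() are hypothetical; get_page_bootmem(), put_page_bootmem() and SECTION_INFO come from the patch below, which mirrors the loops in register_page_bootmem_info_section()):

	#include <linux/bootmem_info.h>

	/* Mark nr_pages struct pages as holding bootmem data for a section. */
	static void mark_bootmem_pages(struct page *page, unsigned long nr_pages,
				       unsigned long section_nr)
	{
		unsigned long i;

		for (i = 0; i < nr_pages; i++, page++)
			/*
			 * Stores SECTION_INFO in page->freelist, the section
			 * number in page->private, and takes a reference.
			 */
			get_page_bootmem(section_nr, page, SECTION_INFO);
	}

	/* Release them again. */
	static void release_bootmem_pages(struct page *page, unsigned long nr_pages)
	{
		unsigned long i;

		for (i = 0; i < nr_pages; i++, page++)
			/*
			 * When the refcount drops back to 1, the page is
			 * returned to the buddy allocator.
			 */
			put_page_bootmem(page);
	}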
Signed-off-by: Muchun Song Acked-by: Mike Kravetz Reviewed-by: Oscar Salvador Reviewed-by: David Hildenbrand Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- arch/sparc/mm/init_64.c | 1 + arch/x86/mm/init_64.c | 3 +- include/linux/bootmem_info.h | 40 +++++++++++++ include/linux/memory_hotplug.h | 27 --------- mm/Makefile | 1 + mm/bootmem_info.c | 127 +++++++++++++++++++++++++++++++++++++++++ mm/memory_hotplug.c | 116 ------------------------------------- mm/sparse.c | 1 + 8 files changed, 172 insertions(+), 144 deletions(-) create mode 100644 include/linux/bootmem_info.h create mode 100644 mm/bootmem_info.c diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index e454f179cf5d..ac9d8b161e0c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index e527d829e1ed..3aaf1d30c777 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -1623,7 +1624,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, return err; } -#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE) +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void register_page_bootmem_memmap(unsigned long section_nr, struct page *start_page, unsigned long nr_pages) { diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h new file mode 100644 index 000000000000..4ed6dee1adc9 --- /dev/null +++ b/include/linux/bootmem_info.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_BOOTMEM_INFO_H +#define __LINUX_BOOTMEM_INFO_H + +#include + +/* + * Types for free bootmem stored in page->lru.next. These have to be in + * some random range in unsigned long space for debugging purposes. + */ +enum { + MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, + SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, + MIX_SECTION_INFO, + NODE_INFO, + MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, +}; + +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE +void __init register_page_bootmem_info_node(struct pglist_data *pgdat); + +void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type); +void put_page_bootmem(struct page *page); +#else +static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) +{ +} + +static inline void put_page_bootmem(struct page *page) +{ +} + +static inline void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type) +{ +} +#endif + +#endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index a85d4b7d15c2..ff1449f645f1 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -18,18 +18,6 @@ struct vmem_altmap; #ifdef CONFIG_MEMORY_HOTPLUG struct page *pfn_to_online_page(unsigned long pfn); -/* - * Types for free bootmem stored in page->lru.next. These have to be in - * some random range in unsigned long space for debugging purposes. - */ -enum { - MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, - SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, - MIX_SECTION_INFO, - NODE_INFO, - MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, -}; - /* Types for control the zone type of onlined and offlined memory */ enum { /* Offline the memory. 
*/ @@ -220,17 +208,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat) #endif /* CONFIG_NUMA */ #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */ -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat); -#else -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} -#endif -extern void put_page_bootmem(struct page *page); -extern void get_page_bootmem(unsigned long ingo, struct page *page, - unsigned long type); - void get_online_mems(void); void put_online_mems(void); @@ -258,10 +235,6 @@ static inline void zone_span_writelock(struct zone *zone) {} static inline void zone_span_writeunlock(struct zone *zone) {} static inline void zone_seqlock_init(struct zone *zone) {} -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} - static inline int try_online_node(int nid) { return 0; diff --git a/mm/Makefile b/mm/Makefile index a9ad6122d468..d0ccddae7a45 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -126,3 +126,4 @@ obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o obj-$(CONFIG_PTDUMP_CORE) += ptdump.o obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o obj-$(CONFIG_IO_MAPPING) += io-mapping.o +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c new file mode 100644 index 000000000000..5b152dba7344 --- /dev/null +++ b/mm/bootmem_info.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Bootmem core functions. + * + * Copyright (c) 2020, Bytedance. + * + * Author: Muchun Song + * + */ +#include +#include +#include +#include +#include + +void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) +{ + page->freelist = (void *)type; + SetPagePrivate(page); + set_page_private(page, info); + page_ref_inc(page); +} + +void put_page_bootmem(struct page *page) +{ + unsigned long type; + + type = (unsigned long) page->freelist; + BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || + type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); + + if (page_ref_dec_return(page) == 1) { + page->freelist = NULL; + ClearPagePrivate(page); + set_page_private(page, 0); + INIT_LIST_HEAD(&page->lru); + free_reserved_page(page); + } +} + +#ifndef CONFIG_SPARSEMEM_VMEMMAP +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + /* Get section's memmap address */ + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + /* + * Get page for the memmap's phys address + * XXX: need more consideration for sparse_vmemmap... 
+ */ + page = virt_to_page(memmap); + mapsize = sizeof(struct page) * PAGES_PER_SECTION; + mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; + + /* remember memmap's page */ + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, SECTION_INFO); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); + +} +#else /* CONFIG_SPARSEMEM_VMEMMAP */ +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); +} +#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ + +void __init register_page_bootmem_info_node(struct pglist_data *pgdat) +{ + unsigned long i, pfn, end_pfn, nr_pages; + int node = pgdat->node_id; + struct page *page; + + nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; + page = virt_to_page(pgdat); + + for (i = 0; i < nr_pages; i++, page++) + get_page_bootmem(node, page, NODE_INFO); + + pfn = pgdat->node_start_pfn; + end_pfn = pgdat_end_pfn(pgdat); + + /* register section info */ + for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { + /* + * Some platforms can assign the same pfn to multiple nodes - on + * node0 as well as nodeN. To avoid registering a pfn against + * multiple nodes we check that this pfn does not already + * reside in some other nodes. + */ + if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) + register_page_bootmem_info_section(pfn); + } +} diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 446f5aa373b7..68923c19bdea 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -154,122 +154,6 @@ static void release_memory_resource(struct resource *res) } #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE -void get_page_bootmem(unsigned long info, struct page *page, - unsigned long type) -{ - page->freelist = (void *)type; - SetPagePrivate(page); - set_page_private(page, info); - page_ref_inc(page); -} - -void put_page_bootmem(struct page *page) -{ - unsigned long type; - - type = (unsigned long) page->freelist; - BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || - type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); - - if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; - ClearPagePrivate(page); - set_page_private(page, 0); - INIT_LIST_HEAD(&page->lru); - free_reserved_page(page); - } -} - -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -#ifndef CONFIG_SPARSEMEM_VMEMMAP -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - /* Get section's memmap address */ - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - /* - * Get page for the memmap's phys address - * XXX: need more consideration for sparse_vmemmap... 
- */ - page = virt_to_page(memmap); - mapsize = sizeof(struct page) * PAGES_PER_SECTION; - mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; - - /* remember memmap's page */ - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, SECTION_INFO); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); - -} -#else /* CONFIG_SPARSEMEM_VMEMMAP */ -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); -} -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ - -void __init register_page_bootmem_info_node(struct pglist_data *pgdat) -{ - unsigned long i, pfn, end_pfn, nr_pages; - int node = pgdat->node_id; - struct page *page; - - nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; - page = virt_to_page(pgdat); - - for (i = 0; i < nr_pages; i++, page++) - get_page_bootmem(node, page, NODE_INFO); - - pfn = pgdat->node_start_pfn; - end_pfn = pgdat_end_pfn(pgdat); - - /* register section info */ - for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { - /* - * Some platforms can assign the same pfn to multiple nodes - on - * node0 as well as nodeN. To avoid registering a pfn against - * multiple nodes we check that this pfn does not already - * reside in some other nodes. 
- */ - if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) - register_page_bootmem_info_section(pfn); - } -} -#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ - static int check_pfn_span(unsigned long pfn, unsigned long nr_pages, const char *reason) { diff --git a/mm/sparse.c b/mm/sparse.c index 522ebb1a4b64..2533c7390fa9 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "internal.h" #include

From patchwork Thu Apr 15 08:39:58 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12204685
From: Muchun Song
Subject: [PATCH v20 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Thu, 15 Apr 2021 16:39:58 +0800
Message-Id: <20210415084005.25049-3-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of some vmemmap pages associated with pre-allocated HugeTLB pages. For example, on X86_64 6 vmemmap pages of size 4KB each can be saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB each can be saved for each 1GB HugeTLB page. When a HugeTLB page is allocated or freed, the vmemmap array representing the range associated with the page will need to be remapped. When a page is allocated, vmemmap pages are freed after remapping. When a page is freed, previously discarded vmemmap pages must be allocated before remapping. The config option is introduced early so that supporting code can be written to depend on the option. The initial version of the code only provides support for x86-64.
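Those savings follow directly from the bookkeeping, assuming 4KB base pages, a 64-byte struct page, and the 2 reserved vmemmap pages per HugeTLB page that patch 4 of this series introduces:

	2MB HugeTLB page: 2MB / 4KB    = 512 struct pages
	                  512 * 64B    = 32KB = 8 vmemmap pages
	                  8 - 2        = 6 vmemmap pages freed

	1GB HugeTLB page: 1GB / 4KB    = 262144 struct pages
	                  262144 * 64B = 16MB = 4096 vmemmap pages
	                  4096 - 2     = 4094 vmemmap pages freed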
If HAVE_BOOTMEM_INFO_NODE is enabled, the vmemmap-freeing code depends on it to free vmemmap pages; otherwise, free_reserved_page() is used to free them. The routine register_page_bootmem_info() is used to register bootmem info, so make sure it is invoked whenever HUGETLB_PAGE_FREE_VMEMMAP is defined. Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Acked-by: Mike Kravetz Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Reviewed-by: Balbir Singh --- arch/x86/mm/init_64.c | 2 +- fs/Kconfig | 5 +++++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 3aaf1d30c777..65ea58527176 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1270,7 +1270,7 @@ static struct kcore_list kcore_vsyscall; static void __init register_page_bootmem_info(void) { -#ifdef CONFIG_NUMA +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) int i; for_each_online_node(i) diff --git a/fs/Kconfig b/fs/Kconfig index dcd9161fbeba..6ce6fdac00a3 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -240,6 +240,11 @@ config HUGETLBFS config HUGETLB_PAGE def_bool HUGETLBFS +config HUGETLB_PAGE_FREE_VMEMMAP + def_bool HUGETLB_PAGE + depends on X86_64 + depends on SPARSEMEM_VMEMMAP + config MEMFD_CREATE def_bool TMPFS || HUGETLBFS
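Because HUGETLB_PAGE_FREE_VMEMMAP is a def_bool with no user prompt, it switches on automatically whenever HUGETLB_PAGE, X86_64 and SPARSEMEM_VMEMMAP are all set; later patches then hang their build rules and stubs off it. A sketch of that gating, both pieces taken from patch 4 of this series:

	# mm/Makefile (patch 4)
	obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o

	/* mm/hugetlb_vmemmap.h (patch 4): a no-op stub when the option is off. */
	#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
	void free_huge_page_vmemmap(struct hstate *h, struct page *head);
	#else
	static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
	{
	}
	#endif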
From patchwork Thu Apr 15 08:39:59 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12204687
From: Muchun Song
Subject: [PATCH v20 3/9] mm: hugetlb: gather discrete indexes of tail page
Date: Thu, 15 Apr 2021 16:39:59 +0800
Message-Id: <20210415084005.25049-4-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

A HugeTLB page carries more metadata than fits in its head struct page, so the HugeTLB subsystem has to reuse (abuse) tail struct pages to store it. To avoid conflicts as more tail struct pages are pressed into service, gather these discrete tail-page indexes into one enum; that also makes it easier to add a new tail page index later.
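In effect, the bare offsets hpage + 1, page[2] and page[3] become named indexes. A minimal sketch of the resulting accessors (the wrapper names fault_cgroup()/rsvd_cgroup() are hypothetical; the enum constants and page_private() usage come from the diff below):

	static inline struct hugetlb_cgroup *fault_cgroup(struct page *hpage)
	{
		/* Was: (struct hugetlb_cgroup *)hpage[2].private */
		return (void *)page_private(hpage + SUBPAGE_INDEX_CGROUP);
	}

	static inline struct hugetlb_cgroup *rsvd_cgroup(struct page *hpage)
	{
		/* Was: (struct hugetlb_cgroup *)hpage[3].private */
		return (void *)page_private(hpage + SUBPAGE_INDEX_CGROUP_RSVD);
	}

The indexes stay numerically identical (SUBPAGE_INDEX_CGROUP == 2, SUBPAGE_INDEX_CGROUP_RSVD == 3), so the layout does not change; the names only document which tail page holds what.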
Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Acked-by: Michal Hocko Reviewed-by: Mike Kravetz --- include/linux/hugetlb.h | 21 +++++++++++++++++++-- include/linux/hugetlb_cgroup.h | 19 +++++++++++-------- 2 files changed, 30 insertions(+), 10 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 09f1fd12a6fa..0abed7e766b8 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -29,6 +29,23 @@ typedef struct { unsigned long pd; } hugepd_t; #include #include +/* + * For HugeTLB page, there are more metadata to save in the struct page. But + * the head struct page cannot meet our needs, so we have to abuse other tail + * struct page to store the metadata. In order to avoid conflicts caused by + * subsequent use of more tail struct pages, we gather these discrete indexes + * of tail struct page here. + */ +enum { + SUBPAGE_INDEX_SUBPOOL = 1, /* reuse page->private */ +#ifdef CONFIG_CGROUP_HUGETLB + SUBPAGE_INDEX_CGROUP, /* reuse page->private */ + SUBPAGE_INDEX_CGROUP_RSVD, /* reuse page->private */ + __MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD, +#endif + __NR_USED_SUBPAGE, +}; + struct hugepage_subpool { spinlock_t lock; long count; @@ -626,13 +643,13 @@ extern unsigned int default_hstate_idx; */ static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage) { - return (struct hugepage_subpool *)(hpage+1)->private; + return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL); } static inline void hugetlb_set_page_subpool(struct page *hpage, struct hugepage_subpool *subpool) { - set_page_private(hpage+1, (unsigned long)subpool); + set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool); } static inline struct hstate *hstate_file(struct file *f) diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h index 0bff345c4bc6..0b8d1fdda3a1 100644 --- a/include/linux/hugetlb_cgroup.h +++ b/include/linux/hugetlb_cgroup.h @@ -21,15 +21,16 @@ struct hugetlb_cgroup; struct resv_map; struct file_region; +#ifdef CONFIG_CGROUP_HUGETLB /* * Minimum page order trackable by hugetlb cgroup. * At least 4 pages are necessary for all the tracking information. - * The second tail page (hpage[2]) is the fault usage cgroup. - * The third tail page (hpage[3]) is the reservation usage cgroup. + * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault + * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD]) + * is the reservation usage cgroup.
 */ -#define HUGETLB_CGROUP_MIN_ORDER 2 +#define HUGETLB_CGROUP_MIN_ORDER order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1) -#ifdef CONFIG_CGROUP_HUGETLB enum hugetlb_memory_event { HUGETLB_MAX, HUGETLB_NR_MEMORY_EVENTS, @@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd) if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return NULL; if (rsvd) - return (struct hugetlb_cgroup *)page[3].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD); else - return (struct hugetlb_cgroup *)page[2].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP); } static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page) @@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page, if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return -1; if (rsvd) - page[3].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD, + (unsigned long)h_cg); else - page[2].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP, + (unsigned long)h_cg); return 0; }
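A quick check that the new expression preserves the old constant, using the enum values introduced above (SUBPAGE_INDEX_CGROUP_RSVD == 3, so __MAX_CGROUP_SUBPAGE_INDEX == 3):

	HUGETLB_CGROUP_MIN_ORDER = order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1)
	                         = order_base_2(4)
	                         = 2	/* matches the old hard-coded value */

Unlike the literal 2, the expression grows automatically if another cgroup tail-page index is ever added.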
From patchwork Thu Apr 15 08:40:00 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12204689
From: Muchun Song
Subject: [PATCH v20 4/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
Date: Thu, 15 Apr 2021 16:40:00 +0800
Message-Id: <20210415084005.25049-5-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

Every HugeTLB page is described by many struct page structures, but only the first 4 (__NR_USED_SUBPAGE) of them store metadata associated with the HugeTLB page. For all tail pages the value of compound_head is the same, so the first page of tail struct pages can be reused: the virtual addresses of the remaining pages of tail struct pages are remapped to that first tail page, and the page frames they used to occupy are freed. Two pages therefore need to stay reserved as vmemmap areas. When a HugeTLB page is allocated from the buddy allocator, some of its vmemmap pages can be freed; prep_new_huge_page() is the most appropriate place to do this. free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages of a HugeTLB page can be freed, returns zero for now, meaning the feature is disabled; it will be enabled once all the infrastructure is there.
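Concretely, for a 2MB HugeTLB page on x86-64 (8 vmemmap pages, per the arithmetic under patch 2) with the RESERVE_VMEMMAP_NR = 2 reservation introduced below, the layout works out to:

	vmemmap page 0:    head struct page + the used tail struct pages (kept)
	vmemmap page 1:    tail struct pages, all holding compound_head (kept and reused)
	vmemmap pages 2-7: remapped read-only onto page 1; their 6 page frames are freed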
Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Acked-by: Michal Hocko Reviewed-by: Mike Kravetz --- include/linux/bootmem_info.h | 28 +++++- include/linux/mm.h | 3 + mm/Makefile | 1 + mm/hugetlb.c | 2 + mm/hugetlb_vmemmap.c | 218 +++++++++++++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.h | 20 ++++ mm/sparse-vmemmap.c | 194 ++++++++++++++++++++++++++++++++++++++ 7 files changed, 465 insertions(+), 1 deletion(-) create mode 100644 mm/hugetlb_vmemmap.c create mode 100644 mm/hugetlb_vmemmap.h diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..2bc8b1f69c93 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -2,7 +2,7 @@ #ifndef __LINUX_BOOTMEM_INFO_H #define __LINUX_BOOTMEM_INFO_H -#include +#include /* * Types for free bootmem stored in page->lru.next. These have to be in @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +/* + * Any memory allocated via the memblock allocator and not via the + * buddy will be marked reserved already in the memmap. For those + * pages, we can call this function to free it to buddy allocator. + */ +static inline void free_bootmem_page(struct page *page) +{ + unsigned long magic = (unsigned long)page->freelist; + + /* + * The reserve_bootmem_region sets the reserved flag on bootmem + * pages.
+ */ + VM_BUG_ON_PAGE(page_ref_count(page) != 2, page); + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + VM_BUG_ON_PAGE(1, page); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { @@ -35,6 +56,11 @@ static inline void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { } + +static inline void free_bootmem_page(struct page *page) +{ + free_reserved_page(page); +} #endif #endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 25b9041f9925..a4d160ddb749 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3046,6 +3046,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip) } #endif +void vmemmap_remap_free(unsigned long start, unsigned long end, + unsigned long reuse); + void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap); diff --git a/mm/Makefile b/mm/Makefile index d0ccddae7a45..40ee404e200e 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -75,6 +75,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o obj-$(CONFIG_NUMA) += mempolicy.o obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 54d81d5947ed..923d05e2806b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -41,6 +41,7 @@ #include #include #include "internal.h" +#include "hugetlb_vmemmap.h" int hugetlb_max_hstate __read_mostly; unsigned int default_hstate_idx; @@ -1485,6 +1486,7 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + free_huge_page_vmemmap(h, page); INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); hugetlb_set_page_subpool(page, NULL); diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c new file mode 100644 index 000000000000..e45a138a7f85 --- /dev/null +++ b/mm/hugetlb_vmemmap.c @@ -0,0 +1,218 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song + * + * The struct page structures (page structs) are used to describe a physical + * page frame. By default, there is a one-to-one mapping from a page frame to + * it's corresponding page struct. + * + * HugeTLB pages consist of multiple base page size pages and is supported by + * many architectures. See hugetlbpage.rst in the Documentation directory for + * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB + * are currently supported. Since the base page size on x86 is 4KB, a 2MB + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of + * 4096 base pages. For each base page, there is a corresponding page struct. + * + * Within the HugeTLB subsystem, only the first 4 page structs are used to + * contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides + * this upper limit. The only 'useful' information in the remaining page structs + * is the compound_head field, and this field is the same for all tail pages. + * + * By removing redundant page structs for HugeTLB pages, memory can be returned + * to the buddy allocator for other uses. 
+ * + * Different architectures support different HugeTLB pages. For example, the + * following table is the HugeTLB page size supported by x86 and arm64 + * architectures. Because arm64 supports 4k, 16k, and 64k base pages and + * supports contiguous entries, so it supports many kinds of sizes of HugeTLB + * page. + * + * +--------------+-----------+-----------------------------------------------+ + * | Architecture | Page Size | HugeTLB Page Size | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | x86-64 | 4KB | 2MB | 1GB | | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | | 4KB | 64KB | 2MB | 32MB | 1GB | + * | +-----------+-----------+-----------+-----------+-----------+ + * | arm64 | 16KB | 2MB | 32MB | 1GB | | + * | +-----------+-----------+-----------+-----------+-----------+ + * | | 64KB | 2MB | 512MB | 16GB | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * + * When the system boot up, every HugeTLB page has more than one struct page + * structs which size is (unit: pages): + * + * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * + * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size + * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following + * relationship. + * + * HugeTLB_Size = n * PAGE_SIZE + * + * Then, + * + * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * = n * sizeof(struct page) / PAGE_SIZE + * + * We can use huge mapping at the pud/pmd level for the HugeTLB page. + * + * For the HugeTLB page of the pmd level mapping, then + * + * struct_size = n * sizeof(struct page) / PAGE_SIZE + * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + * = sizeof(struct page) / sizeof(pte_t) + * = 64 / 8 + * = 8 (pages) + * + * Where n is how many pte entries which one page can contains. So the value of + * n is (PAGE_SIZE / sizeof(pte_t)). + * + * This optimization only supports 64-bit system, so the value of sizeof(pte_t) + * is 8. And this optimization also applicable only when the size of struct page + * is a power of two. In most cases, the size of struct page is 64 bytes (e.g. + * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the + * size of struct page structs of it is 8 page frames which size depends on the + * size of the base page. + * + * For the HugeTLB page of the pud level mapping, then + * + * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + * = PAGE_SIZE / 8 * 8 (pages) + * = PAGE_SIZE (pages) + * + * Where the struct_size(pmd) is the size of the struct page structs of a + * HugeTLB page of the pmd level mapping. + * + * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB + * HugeTLB page consists in 4096. + * + * Next, we take the pmd level mapping of the HugeTLB page as an example to + * show the internal implementation of this optimization. There are 8 pages + * struct page structs associated with a HugeTLB page which is pmd mapped. + * + * Here is how things look before optimization. 
+ * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | -------------> | 2 | + * | | +-----------+ +-----------+ + * | | | 3 | -------------> | 3 | + * | | +-----------+ +-----------+ + * | | | 4 | -------------> | 4 | + * | PMD | +-----------+ +-----------+ + * | level | | 5 | -------------> | 5 | + * | mapping | +-----------+ +-----------+ + * | | | 6 | -------------> | 6 | + * | | +-----------+ +-----------+ + * | | | 7 | -------------> | 7 | + * | | +-----------+ +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * The value of page->compound_head is the same for all tail pages. The first + * page of page structs (page 0) associated with the HugeTLB page contains the 4 + * page structs necessary to describe the HugeTLB. The only use of the remaining + * pages of page structs (page 1 to page 7) is to point to page->compound_head. + * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs + * will be used for each HugeTLB page. This will allow us to free the remaining + * 6 pages to the buddy allocator. + * + * Here is how things look after remapping. + * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | ----------------^ ^ ^ ^ ^ ^ + * | | +-----------+ | | | | | + * | | | 3 | ------------------+ | | | | + * | | +-----------+ | | | | + * | | | 4 | --------------------+ | | | + * | PMD | +-----------+ | | | + * | level | | 5 | ----------------------+ | | + * | mapping | +-----------+ | | + * | | | 6 | ------------------------+ | + * | | +-----------+ | + * | | | 7 | --------------------------+ + * | | +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for + * vmemmap pages and restore the previous mapping relationship. + * + * For the HugeTLB page of the pud level mapping. It is similar to the former. + * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages. + * + * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures + * (e.g. aarch64) provides a contiguous bit in the translation table entries + * that hints to the MMU to indicate that it is one of a contiguous set of + * entries that can be cached in a single TLB entry. + * + * The contiguous bit is used to increase the mapping size at the pmd and pte + * (last) level. So this type of HugeTLB page can be optimized only when its + * size of the struct page structs is greater than 2 pages. + */ +#include "hugetlb_vmemmap.h" + +/* + * There are a lot of struct page structures associated with each HugeTLB page. + * For tail pages, the value of compound_head is the same. So we can reuse first + * page of tail page structures. We map the virtual addresses of the remaining + * pages of tail page structures to the first tail page struct, and then free + * these page frames. Therefore, we need to reserve two pages as vmemmap areas. 
+ */ +#define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator. + * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} + +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) +{ + return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; +} + +void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + unsigned long vmemmap_addr = (unsigned long)head; + unsigned long vmemmap_end, vmemmap_reuse; + + if (!free_vmemmap_pages_per_hpage(h)) + return; + + vmemmap_addr += RESERVE_VMEMMAP_SIZE; + vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h); + vmemmap_reuse = vmemmap_addr - PAGE_SIZE; + + /* + * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end) + * to the page which @vmemmap_reuse is mapped to, then free the pages + * which the range [@vmemmap_addr, @vmemmap_end] is mapped to. + */ + vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse); +} diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h new file mode 100644 index 000000000000..6923f03534d5 --- /dev/null +++ b/mm/hugetlb_vmemmap.h @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song + */ +#ifndef _LINUX_HUGETLB_VMEMMAP_H +#define _LINUX_HUGETLB_VMEMMAP_H +#include + +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +void free_huge_page_vmemmap(struct hstate *h, struct page *head); +#else +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ +#endif /* _LINUX_HUGETLB_VMEMMAP_H */ diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 16183d85a7d5..7d40b5bd7046 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -27,8 +27,202 @@ #include #include #include +#include +#include + #include #include +#include + +/** + * vmemmap_remap_walk - walk vmemmap page table + * + * @remap_pte: called for each lowest-level entry (PTE). + * @reuse_page: the page which is reused for the tail vmemmap pages. + * @reuse_addr: the virtual address of the @reuse_page page. + * @vmemmap_pages: the list head of the vmemmap pages that can be freed. + */ +struct vmemmap_remap_walk { + void (*remap_pte)(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk); + struct page *reuse_page; + unsigned long reuse_addr; + struct list_head *vmemmap_pages; +}; + +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pte_t *pte = pte_offset_kernel(pmd, addr); + + /* + * The reuse_page is found 'first' in table walk before we start + * remapping (which is calling @walk->remap_pte). + */ + if (!walk->reuse_page) { + walk->reuse_page = pte_page(*pte); + /* + * Because the reuse address is part of the range that we are + * walking, skip the reuse address range. 
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	/*
+	 * The reuse_page is found 'first' in the table walk, before we start
+	 * remapping (which is calling @walk->remap_pte).
+	 */
+	if (!walk->reuse_page) {
+		walk->reuse_page = pte_page(*pte);
+		/*
+		 * Because the reuse address is part of the range that we are
+		 * walking, skip the reuse address range.
+		 */
+		addr += PAGE_SIZE;
+		pte++;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++)
+		walk->remap_pte(pte, addr, walk);
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_leaf(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	/*
+	 * We only change the mapping of the vmemmap virtual address range
+	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
+	 * belongs to the range.
+	 */
+	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or the buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator; free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Remap the tail pages as read-only to catch illegal write operations
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page = pte_page(*pte);
+
+	list_add(&page->lru, walk->vmemmap_pages);
+	set_pte_at(&init_mm, addr, pte, entry);
+}
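The effect of vmemmap_remap_pte() above, many read-only virtual addresses backed by one physical page, can be reproduced from user space with shared mappings of a one-page memfd. This analogue is only illustrative; the kernel rewrites vmemmap PTEs directly rather than calling mmap(), and memfd_create() assumes a reasonably recent glibc and kernel:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	int fd = memfd_create("reuse", 0);	/* one physical page to share */
	char *alias[3];
	int i;

	ftruncate(fd, psz);

	char *rw = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	strcpy(rw, "tail data");

	/* map the same page at several addresses, read-only */
	for (i = 0; i < 3; i++)
		alias[i] = mmap(NULL, psz, PROT_READ, MAP_SHARED, fd, 0);

	for (i = 0; i < 3; i++)
		printf("alias %d at %p reads: %s\n", i, (void *)alias[i], alias[i]);

	/* alias[0][0] = 'x'; would fault, like a write to a remapped tail page */
	return 0;
}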
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *			to the page which @reuse is mapped to, then free the
+ *			vmemmap pages which the range is mapped to.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ *
+ * Note: This function depends on vmemmap being base page mapped. Please make
+ * sure that we disable PMD mapping of vmemmap pages when calling this function.
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/*
+	 * To make the remapping routine most efficient for huge pages, the
+	 * vmemmap page table walking routine has the following rules (see
+	 * vmemmap_pte_range() for more details):
+	 *
+	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
+	 *   must be contiguous.
+	 * - The @reuse address is part of the range [@reuse, @end) that we are
+	 *   walking, which is passed to vmemmap_remap_range().
+	 * - The @reuse address is the first in the complete range.
+	 *
+	 * So we need to make sure that @start and @reuse meet the above rules.
+	 */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	vmemmap_remap_range(reuse, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map

From patchwork Thu Apr 15 08:40:01 2021
X-Patchwork-Id: 12204691
From: Muchun Song
Subject: [PATCH v20 5/9] mm: hugetlb: defer freeing of HugeTLB pages
Date: Thu, 15 Apr 2021 16:40:01 +0800
Message-Id: <20210415084005.25049-6-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

In the subsequent patch, we should allocate the vmemmap pages when freeing
a HugeTLB page. But update_and_free_page() can be called from any context,
so we cannot use GFP_KERNEL to allocate vmemmap pages. However, we can
defer the actual freeing to a kworker to avoid having to use GFP_ATOMIC to
allocate the vmemmap pages.

The __update_and_free_page() is where the call to allocate vmemmap pages
will be inserted.
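The lockless list trick described above is easiest to see outside the kernel. Below is a hedged C11 sketch of the two llist operations the patch relies on; the toy_* names are invented, llist_add()/llist_del_all() are the real kernel APIs, and the patch additionally reuses page->mapping as the llist_node since that field is cleared during freeing anyway:

#include <stdatomic.h>
#include <stdio.h>

struct node {
	struct node *next;
	int page_id;		/* stands in for the huge page */
};

static _Atomic(struct node *) freelist;

/* like llist_add(): returns 1 if the list was empty (caller schedules work) */
static int toy_llist_add(struct node *n)
{
	struct node *first = atomic_load(&freelist);

	do {
		n->next = first;
	} while (!atomic_compare_exchange_weak(&freelist, &first, n));
	return first == NULL;
}

/* like llist_del_all(): the workfn grabs the whole list in one exchange */
static struct node *toy_llist_del_all(void)
{
	return atomic_exchange(&freelist, NULL);
}

int main(void)
{
	struct node a = { .page_id = 1 }, b = { .page_id = 2 };

	if (toy_llist_add(&a))
		printf("list was empty: would schedule_work()\n");
	toy_llist_add(&b);

	for (struct node *n = toy_llist_del_all(); n; n = n->next)
		printf("workfn frees page %d\n", n->page_id);
	return 0;
}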
Signed-off-by: Muchun Song
---
 mm/hugetlb.c         | 73 ++++++++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 ---------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 85 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 923d05e2806b..eeb8f5480170 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1376,7 +1376,7 @@ static void remove_hugetlb_page(struct hstate *h, struct page *page,
 	h->nr_huge_pages_node[nid]--;
 }
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
 	struct page *subpage = page;
@@ -1399,12 +1399,73 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	}
 }
 
+/*
+ * As update_and_free_page() can be called from any context, we cannot use
+ * GFP_KERNEL to allocate vmemmap pages. However, we can defer the actual
+ * freeing to a workqueue to avoid using GFP_ATOMIC to allocate the vmemmap
+ * pages.
+ *
+ * free_hpage_workfn() locklessly retrieves the linked list of pages to be
+ * freed and frees them one-by-one. As the page->mapping pointer is going
+ * to be cleared in free_hpage_workfn() anyway, it is reused as the llist_node
+ * structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_freelist);
+
+static void free_hpage_workfn(struct work_struct *work)
+{
+	struct llist_node *node;
+
+	node = llist_del_all(&hpage_freelist);
+
+	while (node) {
+		struct page *page;
+		struct hstate *h;
+
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		h = page_hstate(page);
+
+		__update_and_free_page(h, page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
+
+static inline void flush_free_hpage_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&free_hpage_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page,
+				 bool atomic)
+{
+	if (!free_vmemmap_pages_per_hpage(h) || !atomic) {
+		__update_and_free_page(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
+	 *
+	 * Only call schedule_work() if hpage_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+		schedule_work(&free_hpage_work);
+}
+
 static void update_and_free_pages_bulk(struct hstate *h,
 					struct list_head *list)
 {
 	struct page *page, *t_page;
 
 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, false);
 		cond_resched();
 	}
 }
@@ -1471,12 +1532,12 @@ void free_huge_page(struct page *page)
 	if (HPageTemporary(page)) {
 		remove_hugetlb_page(h, page, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_page(h, page, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page);
+		update_and_free_page(h, page, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -1785,7 +1846,7 @@ int dissolve_free_huge_page(struct page *page)
 		remove_hugetlb_page(h, page, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
-		update_and_free_page(h, head);
+		update_and_free_page(h, head, false);
 		return 0;
 	}
 out:
@@ -2627,6 +2688,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * pages in hstate via the proc/sysfs interfaces.
 	 */
 	mutex_lock(&h->resize_lock);
+	flush_free_hpage_work(h);
 	spin_lock_irq(&hugetlb_lock);
 
 	/*
@@ -2736,6 +2798,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	/* free the pages after dropping lock */
 	spin_unlock_irq(&hugetlb_lock);
 	update_and_free_pages_bulk(h, &page_list);
+	flush_free_hpage_work(h);
 	spin_lock_irq(&hugetlb_lock);
 
 	while (count < persistent_huge_pages(h)) {
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index e45a138a7f85..cb28c5b6c9ff 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -180,18 +180,6 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */

From patchwork Thu Apr 15 08:40:03 2021
X-Patchwork-Id: 12204693
From: Muchun Song
Subject: [PATCH v20 7/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Thu, 15 Apr 2021 16:40:03 +0800
Message-Id: <20210415084005.25049-8-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each HugeTLB page on boot.

We disable PMD mapping of vmemmap pages on the x86-64 arch when this
feature is enabled, because vmemmap_remap_free() depends on vmemmap being
base page mapped.
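The accepted format of the new parameter can be modeled in a few lines of user-space C. This is only a sketch of the "on"/"off" grammar; the real parser, shown in the hugetlb_vmemmap.c hunk further below, additionally refuses to enable the feature when sizeof(struct page) is not a power of 2:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool hugetlb_free_vmemmap_enabled;

/* hypothetical model of the early_param handler's string handling */
static int parse_hugetlb_free_vmemmap(const char *buf)
{
	if (!buf)
		return -1;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (strcmp(buf, "off"))
		return -1;	/* anything but "on"/"off" is rejected */

	return 0;
}

int main(void)
{
	parse_hugetlb_free_vmemmap("on");
	printf("enabled: %d\n", hugetlb_free_vmemmap_enabled);
	return 0;
}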
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
Reviewed-by: Mike Kravetz
---
 Documentation/admin-guide/kernel-parameters.txt | 17 +++++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
 5 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3bf052d14504..9e655f5206ac 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1611,6 +1611,23 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+			enabled.
+			Allows heavy hugetlb users to free up some more
+			memory (6 * PAGE_SIZE for each 2MB hugetlb page).
+			This feature is not free though. Large page
+			tables are not used to back vmemmap pages which
+			can lead to a performance degradation for some
+			workloads. Also there will be memory allocation
+			required when hugetlb pages are freed from the
+			pool which can lead to corner cases under heavy
+			memory pressure.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 6988895d09a8..8abaeb144e44 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -153,6 +153,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
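The x86 hunk that follows boils down to one predicate: use base-page vmemmap mappings when the feature is on (and no altmap is in use), when the range is smaller than a section, or when the CPU lacks PSE. A hedged restatement with invented names, not the kernel's code:

#include <stdbool.h>
#include <stdio.h>

/* hypothetical restatement of the vmemmap_populate() choice on x86-64 */
static bool use_base_pages(bool free_vmemmap_enabled, bool have_altmap,
			   bool small_range, bool cpu_has_pse)
{
	/* hugetlb_free_vmemmap forces base pages unless an altmap is used */
	if ((free_vmemmap_enabled && !have_altmap) || small_range)
		return true;
	return !cpu_has_pse;	/* no PSE -> cannot use PMD mappings anyway */
}

int main(void)
{
	printf("%d\n", use_base_pages(true, false, false, true));	/* 1 */
	printf("%d\n", use_base_pages(false, false, false, true));	/* 0 */
	return 0;
}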
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 65ea58527176..9d9d18d0c2a1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1609,7 +1610,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
 	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if ((is_hugetlb_free_vmemmap_enabled() && !altmap) ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1637,6 +1639,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1662,7 +1666,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6e970a7d3480..4015cedacf91 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -894,6 +894,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else /* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -1047,6 +1061,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
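The parser in the next hunk bails out when struct page is not a power-of-2 size, because a struct page straddling a base page boundary would break the head/tail split described earlier. The invariant is easy to check; the 64-byte size below is an assumption, not something the patch guarantees:

#include <stdio.h>

/* same bit trick the kernel's is_power_of_2() uses */
static int is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	unsigned long sz = 64;	/* assumed sizeof(struct page) on x86-64 */

	/* 4096 % 64 == 0, so no struct page crosses a page boundary */
	printf("power of 2: %d, structs per page: %lu\n",
	       is_power_of_2(sz), 4096 / sz);
	return 0;
}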
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a897c7778246..3070e1465b1b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -168,6 +168,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when the
  * size of its struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -180,6 +182,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;

From patchwork Thu Apr 15 08:40:04 2021
X-Patchwork-Id: 12204695
From: Muchun Song
Subject: [PATCH v20 8/9] mm: memory_hotplug: disable memmap_on_memory when hugetlb_free_vmemmap enabled
Date: Thu, 15 Apr 2021 16:40:04 +0800
Message-Id: <20210415084005.25049-9-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>

The memory_hotplug.memmap_on_memory parameter is not compatible with
hugetlb_free_vmemmap, so disable memmap_on_memory when
hugetlb_free_vmemmap is enabled.

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/kernel-parameters.txt |  4 ++++
 drivers/acpi/acpi_memhotplug.c                  |  1 +
 mm/memory_hotplug.c                             | 18 +++++++++++++-----
 3 files changed, 18 insertions(+), 5 deletions(-)
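The precedence rule is small enough to restate before the diff. A hedged model of the check added to mhp_supports_memmap_on_memory() in the mm/memory_hotplug.c hunk below (names invented):

#include <stdbool.h>
#include <stdio.h>

static bool memmap_on_memory_supported(bool base_checks_pass,
					bool hugetlb_free_vmemmap)
{
	if (base_checks_pass && hugetlb_free_vmemmap) {
		/* hugetlb_free_vmemmap wins; memmap_on_memory is refused */
		printf("memory_hotplug.memmap_on_memory disabled\n");
		return false;
	}
	return base_checks_pass;
}

int main(void)
{
	printf("%d\n", memmap_on_memory_supported(true, true));	/* 0 */
	printf("%d\n", memmap_on_memory_supported(true, false));	/* 1 */
	return 0;
}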
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9e655f5206ac..1f648b3e6120 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2893,6 +2893,10 @@
 			Note that even when enabled, there are a few cases where
 			the feature is not effective.
 
+			This is not compatible with hugetlb_free_vmemmap. If
+			both parameters are enabled, hugetlb_free_vmemmap takes
+			precedence over memory_hotplug.memmap_on_memory.
+
 	memtest=	[KNL,X86,ARM,PPC,RISCV] Enable memtest
 			Format:
 			default : 0
diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
index 8cc195c4c861..0d7f595ee441 100644
--- a/drivers/acpi/acpi_memhotplug.c
+++ b/drivers/acpi/acpi_memhotplug.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 68923c19bdea..c45ed6c0cd9f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -981,6 +981,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 
 bool mhp_supports_memmap_on_memory(unsigned long size)
 {
+	bool supported;
 	unsigned long nr_vmemmap_pages = size / PAGE_SIZE;
 	unsigned long vmemmap_size = nr_vmemmap_pages * sizeof(struct page);
 	unsigned long remaining_size = size - vmemmap_size;
@@ -1011,11 +1012,18 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 * altmap as an alternative source of memory, and we do not exactly
 	 * populate a single PMD.
 	 */
-	return memmap_on_memory &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
-	       size == memory_block_size_bytes() &&
-	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
-	       IS_ALIGNED(remaining_size, pageblock_nr_pages << PAGE_SHIFT);
+	supported = memmap_on_memory &&
+		    IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+		    size == memory_block_size_bytes() &&
+		    IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
+		    IS_ALIGNED(remaining_size, pageblock_nr_pages << PAGE_SHIFT);
+
+	if (supported && is_hugetlb_free_vmemmap_enabled()) {
+		pr_info("Cannot enable memory_hotplug.memmap_on_memory, it is not compatible with hugetlb_free_vmemmap\n");
+		supported = false;
+	}
+
+	return supported;
 }
 
 /*

From patchwork Thu Apr 15 08:40:05 2021
X-Patchwork-Id: 12204697
From: Muchun Song
Subject: [PATCH v20 9/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
Date: Thu, 15 Apr 2021 16:40:05 +0800
Message-Id: <20210415084005.25049-10-songmuchun@bytedance.com>
In-Reply-To: <20210415084005.25049-1-songmuchun@bytedance.com>
<20210415084005.25049-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Stat-Signature: e9gnuqz6jy9ijw6kie9bkx1byc9y85xz X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 14DBAC0007C2 Received-SPF: none (bytedance.com>: No applicable sender policy available) receiver=imf14; identity=mailfrom; envelope-from=""; helo=mail-pj1-f50.google.com; client-ip=209.85.216.50 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1618476234-740500 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: All the infrastructure is ready, so we introduce nr_free_vmemmap_pages field in the hstate to indicate how many vmemmap pages associated with a HugeTLB page that can be freed to buddy allocator. And initialize it in the hugetlb_vmemmap_init(). This patch is actual enablement of the feature. There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so add a BUILD_BUG_ON to catch invalid usage of the tail struct page. Signed-off-by: Muchun Song Acked-by: Mike Kravetz Reviewed-by: Oscar Salvador Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- include/linux/hugetlb.h | 3 +++ mm/hugetlb.c | 1 + mm/hugetlb_vmemmap.c | 33 +++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.h | 10 ++++++---- 4 files changed, 43 insertions(+), 4 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 4015cedacf91..710d821fbca6 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -602,6 +602,9 @@ struct hstate { unsigned int nr_huge_pages_node[MAX_NUMNODES]; unsigned int free_huge_pages_node[MAX_NUMNODES]; unsigned int surplus_huge_pages_node[MAX_NUMNODES]; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + unsigned int nr_free_vmemmap_pages; +#endif #ifdef CONFIG_CGROUP_HUGETLB /* cgroup control files */ struct cftype cgroup_files_dfl[7]; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 1c37f0098e00..8adf52f4c7e4 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3361,6 +3361,7 @@ void __init hugetlb_add_hstate(unsigned int order) h->next_nid_to_free = first_memory_node; snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/1024); + hugetlb_vmemmap_init(h); parsed_hstate = h; } diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 3070e1465b1b..f9f9bb212319 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -262,3 +262,36 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head) SetHPageVmemmapOptimized(head); } + +void __init hugetlb_vmemmap_init(struct hstate *h) +{ + unsigned int nr_pages = pages_per_huge_page(h); + unsigned int vmemmap_pages; + + /* + * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct + * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, + * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page. + */ + BUILD_BUG_ON(__NR_USED_SUBPAGE >= + RESERVE_VMEMMAP_SIZE / sizeof(struct page)); + + if (!hugetlb_free_vmemmap_enabled) + return; + + vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT; + /* + * The head page and the first tail page are not to be freed to buddy + * allocator, the other pages will map to the first tail page, so they + * can be freed. + * + * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true + * on some architectures (e.g. aarch64). See Documentation/arm64/ + * hugetlbpage.rst for more details. 
Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 33 +++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4015cedacf91..710d821fbca6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -602,6 +602,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1c37f0098e00..8adf52f4c7e4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3361,6 +3361,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 		 huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 3070e1465b1b..f9f9bb212319 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -262,3 +262,36 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	SetHPageVmemmapOptimized(head);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
+	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to buddy
+	 * allocator, the other pages will map to the first tail page, so they
+	 * can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a37771b0b82a..cb2bef8f9e73 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -35,6 +33,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;