From patchwork Tue Dec 22 14:24:30 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
Date: Tue, 22 Dec 2020 22:24:30 +0800
Message-Id: <20201222142440.28930-2-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into its own file,
bootmem_info.c. A later patch will use {get,put}_page_bootmem() to
initialize the page structs for vmemmap pages and to free vmemmap pages
back to the buddy allocator, so these helpers are also moved out of
CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.
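For readers unfamiliar with the API being moved, the sketch below shows the
intended pairing of the two helpers; the caller and its context are
hypothetical, only the helpers and the SECTION_INFO type are the ones this
patch relocates.

    /*
     * Hypothetical caller, for illustration only: tag one boot-time
     * allocated page as SECTION_INFO metadata, then release it.
     */
    #include <linux/bootmem_info.h>

    static void bootmem_info_example(struct page *page, unsigned long section_nr)
    {
            /*
             * Tag the page: the type goes in page->freelist, the section
             * number in page_private(), and the refcount is raised so the
             * page stays pinned until the matching put.
             */
            get_page_bootmem(section_nr, page, SECTION_INFO);

            /*
             * On memory hot-remove, the last put clears the bootmem state
             * and hands the reserved page frame to the buddy allocator
             * via free_reserved_page().
             */
            put_page_bootmem(page);
    }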
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 --------------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>

 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
        return err;
 }

-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
                                   struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+       MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+       SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+       MIX_SECTION_INFO,
+       NODE_INFO,
+       MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+                     unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+                                   unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
        ___page;                                                        \
 })

-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-       MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-       SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-       MIX_SECTION_INFO,
-       NODE_INFO,
-       MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
        /* Offline the memory.
         */
@@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-                            unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);

@@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}

-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
        return 0;
diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)    += kasan/
 obj-$(CONFIG_KFENCE)   += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)          += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+       page->freelist = (void *)type;
+       SetPagePrivate(page);
+       set_page_private(page, info);
+       page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+       unsigned long type;
+
+       type = (unsigned long) page->freelist;
+       BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+              type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+       if (page_ref_dec_return(page) == 1) {
+               page->freelist = NULL;
+               ClearPagePrivate(page);
+               set_page_private(page, 0);
+               INIT_LIST_HEAD(&page->lru);
+               free_reserved_page(page);
+       }
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+       unsigned long mapsize, section_nr, i;
+       struct mem_section *ms;
+       struct page *page, *memmap;
+       struct mem_section_usage *usage;
+
+       section_nr = pfn_to_section_nr(start_pfn);
+       ms = __nr_to_section(section_nr);
+
+       /* Get section's memmap address */
+       memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+       /*
+        * Get page for the memmap's phys address
+        * XXX: need more consideration for sparse_vmemmap...
+        */
+       page = virt_to_page(memmap);
+       mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+       mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+       /* remember memmap's page */
+       for (i = 0; i < mapsize; i++, page++)
+               get_page_bootmem(section_nr, page, SECTION_INFO);
+
+       usage = ms->usage;
+       page = virt_to_page(usage);
+
+       mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+       for (i = 0; i < mapsize; i++, page++)
+               get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+       unsigned long mapsize, section_nr, i;
+       struct mem_section *ms;
+       struct page *page, *memmap;
+       struct mem_section_usage *usage;
+
+       section_nr = pfn_to_section_nr(start_pfn);
+       ms = __nr_to_section(section_nr);
+
+       memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+       register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+       usage = ms->usage;
+       page = virt_to_page(usage);
+
+       mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+       for (i = 0; i < mapsize; i++, page++)
+               get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+       unsigned long i, pfn, end_pfn, nr_pages;
+       int node = pgdat->node_id;
+       struct page *page;
+
+       nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+       page = virt_to_page(pgdat);
+
+       for (i = 0; i < nr_pages; i++, page++)
+               get_page_bootmem(node, page, NODE_INFO);
+
+       pfn = pgdat->node_start_pfn;
+       end_pfn = pgdat_end_pfn(pgdat);
+
+       /* register section info */
+       for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+               /*
+                * Some platforms can assign the same pfn to multiple nodes - on
+                * node0 as well as nodeN. To avoid registering a pfn against
+                * multiple nodes we check that this pfn does not already
+                * reside in some other nodes.
+                */
+               if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+                       register_page_bootmem_info_section(pfn);
+       }
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res)
 }

 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-                     unsigned long type)
-{
-       page->freelist = (void *)type;
-       SetPagePrivate(page);
-       set_page_private(page, info);
-       page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-       unsigned long type;
-
-       type = (unsigned long) page->freelist;
-       BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-              type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-       if (page_ref_dec_return(page) == 1) {
-               page->freelist = NULL;
-               ClearPagePrivate(page);
-               set_page_private(page, 0);
-               INIT_LIST_HEAD(&page->lru);
-               free_reserved_page(page);
-       }
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-       unsigned long mapsize, section_nr, i;
-       struct mem_section *ms;
-       struct page *page, *memmap;
-       struct mem_section_usage *usage;
-
-       section_nr = pfn_to_section_nr(start_pfn);
-       ms = __nr_to_section(section_nr);
-
-       /* Get section's memmap address */
-       memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-       /*
-        * Get page for the memmap's phys address
-        * XXX: need more consideration for sparse_vmemmap...
-        */
-       page = virt_to_page(memmap);
-       mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-       mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-       /* remember memmap's page */
-       for (i = 0; i < mapsize; i++, page++)
-               get_page_bootmem(section_nr, page, SECTION_INFO);
-
-       usage = ms->usage;
-       page = virt_to_page(usage);
-
-       mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-       for (i = 0; i < mapsize; i++, page++)
-               get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-       unsigned long mapsize, section_nr, i;
-       struct mem_section *ms;
-       struct page *page, *memmap;
-       struct mem_section_usage *usage;
-
-       section_nr = pfn_to_section_nr(start_pfn);
-       ms = __nr_to_section(section_nr);
-
-       memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-       register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-       usage = ms->usage;
-       page = virt_to_page(usage);
-
-       mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-       for (i = 0; i < mapsize; i++, page++)
-               get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-       unsigned long i, pfn, end_pfn, nr_pages;
-       int node = pgdat->node_id;
-       struct page *page;
-
-       nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-       page = virt_to_page(pgdat);
-
-       for (i = 0; i < nr_pages; i++, page++)
-               get_page_bootmem(node, page, NODE_INFO);
-
-       pfn = pgdat->node_start_pfn;
-       end_pfn = pgdat_end_pfn(pgdat);
-
-       /* register section info */
-       for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-               /*
-                * Some platforms can assign the same pfn to multiple nodes - on
-                * node0 as well as nodeN. To avoid registering a pfn against
-                * multiple nodes we check that this pfn does not already
-                * reside in some other nodes.
-                */
-               if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-                       register_page_bootmem_info_section(pfn);
-       }
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
                           const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>

 #include "internal.h"
 #include <asm/dma.h>

From patchwork Tue Dec 22 14:24:31 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Tue, 22 Dec 2020 22:24:31 +0800
Message-Id: <20201222142440.28930-3-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing of
unnecessary vmemmap associated with HugeTLB pages. The config option is
introduced early so that supporting code can be written to depend on the
option. The initial version of the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on
HAVE_BOOTMEM_INFO_NODE. The routine register_page_bootmem_info() is used
to register bootmem info. Therefore, make sure register_page_bootmem_info
is enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.
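As a sanity check on the numbers quoted in the Kconfig help text that
follows, here is a small userspace sketch of the arithmetic. The 4KB base
page size and 64-byte struct page size are stated elsewhere in this series;
the program itself is illustrative only.

    #include <stdio.h>

    #define PAGE_SIZE       4096UL  /* x86-64 base page */
    #define STRUCT_PAGE_SZ  64UL    /* typical sizeof(struct page) */
    #define RESERVED        2UL     /* vmemmap pages kept per HugeTLB page */

    /* How many vmemmap pages back the struct pages of one HugeTLB page. */
    static unsigned long vmemmap_pages(unsigned long hpage_size)
    {
            return hpage_size / PAGE_SIZE * STRUCT_PAGE_SZ / PAGE_SIZE;
    }

    int main(void)
    {
            /* 2MB: 512 struct pages -> 8 vmemmap pages, 6 freeable */
            printf("2MB: %lu freeable\n", vmemmap_pages(2UL << 20) - RESERVED);
            /* 1GB: 262144 struct pages -> 4096 vmemmap pages, 4094 freeable */
            printf("1GB: %lu freeable\n", vmemmap_pages(1UL << 30) - RESERVED);
            return 0;
    }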
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;

 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
        int i;

        for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..e7c4c2a79311 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,24 @@ config HUGETLBFS
 config HUGETLB_PAGE
        def_bool HUGETLBFS

+config HUGETLB_PAGE_FREE_VMEMMAP
+       def_bool HUGETLB_PAGE
+       depends on X86_64
+       depends on SPARSEMEM_VMEMMAP
+       depends on HAVE_BOOTMEM_INFO_NODE
+       help
+         The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
+         some vmemmap pages associated with pre-allocated HugeTLB pages.
+         For example, on X86_64 6 vmemmap pages of size 4KB each can be
+         saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB
+         each can be saved for each 1GB HugeTLB page.
+
+         When a HugeTLB page is allocated or freed, the vmemmap array
+         representing the range associated with the page will need to be
+         remapped. When a page is allocated, vmemmap pages are freed
+         after remapping. When a page is freed, previously discarded
+         vmemmap pages must be allocated before remapping.
+
 config MEMFD_CREATE
        def_bool TMPFS || HUGETLBFS

From patchwork Tue Dec 22 14:24:32 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Date: Tue, 22 Dec 2020 22:24:32 +0800
Message-Id: <20201222142440.28930-4-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>
Every HugeTLB page is described by more than one struct page. We know
that only the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct pages are used
to store metadata associated with each HugeTLB page. For all tail pages,
the value of compound_head is the same, so we can reuse the first page
of the tail struct pages: we remap the virtual addresses of the
remaining tail struct pages to the first one and then free the
underlying page frames. Therefore, we only need to reserve two pages as
the vmemmap area. When we allocate a HugeTLB page from the buddy
allocator, we can free some of the vmemmap pages associated with it,
and prep_new_huge_page() is the appropriate place to do so.

free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages
associated with a HugeTLB page can be freed, returns zero for now,
which means the feature is disabled. We will enable it once all the
infrastructure is in place.
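The property the remap relies on can be stated as a small check; this is an
illustrative sketch only (the helper is hypothetical, not part of the patch):

    #include <linux/mm.h>

    /*
     * For a HugeTLB page, every tail struct page stores the same
     * compound_head value: the head page pointer with bit 0 set. So the
     * vmemmap pages backing the tail struct pages are effectively
     * interchangeable and can share one read-only page frame.
     */
    static bool tail_struct_pages_identical(struct page *head, unsigned int nr)
    {
            unsigned long expected = (unsigned long)head + 1;
            unsigned int i;

            for (i = 1; i < nr; i++)
                    if (head[i].compound_head != expected)
                            return false;
            return true;
    }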
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h |  27 +++-
 include/linux/mm.h           |   3 +
 include/linux/mmdebug.h      |   8 ++
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 211 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 ++++
 mm/sparse-vmemmap.c          | 182 +++++++++++++++++++++++++++++++++++++
 8 files changed, 454 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..5f5d6ad664b8 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H

-#include <linux/mmzone.h>
+#include <linux/mm.h>

 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
                      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free them to the buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+       unsigned long magic = (unsigned long)page->freelist;
+
+       /*
+        * The reserve_bootmem_region sets the reserved flag on bootmem
+        * pages.
+        */
+       VM_WARN_ON_PAGE(page_ref_count(page) != 2, page);
+
+       if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+               put_page_bootmem(page);
+       else
+               VM_WARN_ON_PAGE(1, page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
                                    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif

 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eabe7d9f80d8..f928994ed273 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3005,6 +3005,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif

+void vmemmap_remap_free(unsigned long start, unsigned long end,
+                       unsigned long reuse);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
                unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5d0767cb424a..eff5b13a6945 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -37,6 +37,13 @@ void dump_mm(const struct mm_struct *mm);
                        BUG();                                          \
                }                                                       \
        } while (0)
+#define VM_WARN_ON_PAGE(cond, page)                                    \
+       do {                                                            \
+               if (unlikely(cond)) {                                   \
+                       dump_page(page, "VM_WARN_ON_PAGE(" __stringify(cond)")");\
+                       WARN_ON(1);                                     \
+               }                                                       \
+       } while (0)
 #define VM_WARN_ON_ONCE_PAGE(cond, page)       ({                      \
        static bool __section(".data.once") __warned;                   \
        int __ret_warn_once = !!(cond);                                 \
@@ -60,6 +67,7 @@ void dump_mm(const struct mm_struct *mm);
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)       += frontswap.o
 obj-$(CONFIG_ZSWAP)    += zswap.o
 obj-$(CONFIG_HAS_DMA)  += dmapool.o
 obj-$(CONFIG_HUGETLBFS)        += hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)        += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA)     += mempolicy.o
 obj-$(CONFIG_SPARSEMEM)        += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"

 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)

 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+       free_huge_page_vmemmap(h, page);
+
        INIT_LIST_HEAD(&page->lru);
        set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
        set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..4ffa2a4ae2a8
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame.
+ * By default, there is a one-to-one mapping from a page frame to its
+ * corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 4096 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be returned
+ * to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table shows the HugeTLB page sizes supported by the x86 and arm64
+ * architectures. Because arm64 supports 4KB, 16KB, and 64KB base pages and
+ * also supports contiguous entries, it supports many HugeTLB page sizes.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |    32MB   |    1GB    |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |   512MB   |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct; their total size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mappings at the pud/pmd level for the HugeTLB page.
+ *
+ * For the HugeTLB page of the pmd level mapping, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * where n is the number of pte entries one page can contain, i.e.
+ * n = PAGE_SIZE / sizeof(pte_t).
+ *
+ * This optimization only supports 64-bit systems, so the value of
+ * sizeof(pte_t) is 8. The optimization is also applicable only when the size
+ * of struct page is a power of two. In most cases, the size of struct page is
+ * 64 (e.g. x86-64 and arm64). So if we use pmd level mapping for a HugeTLB
+ * page, its struct page structs occupy 8 pages, whose size depends on the
+ * size of the base page.
+ *
+ * For the HugeTLB page of the pud level mapping, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * where struct_size(pmd) is the size of the struct page structs of a HugeTLB
+ * page of the pmd level mapping.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages of
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of the page structs (page 0) associated with the HugeTLB page contains
+ * the 4 page structs necessary to describe the HugeTLB. The only use of the
+ * remaining pages of page structs (page 1 to page 7) is to point to
+ * page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
+ * pages of page structs will be used for each HugeTLB page. This allows us to
+ * free the remaining 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB page is freed to the buddy system, we should allocate 6
+ * pages for the vmemmap pages and restore the previous mapping relationship.
+ *
+ * The HugeTLB page of the pud level mapping is similar to the former case. We
+ * can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the HugeTLB page of the pmd/pud level mapping, some
+ * architectures (e.g. aarch64) provide a contiguous bit in the translation
+ * table entries that hints to the MMU to indicate that it is one of a
+ * contiguous set of entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when the
+ * size of its struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are many struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same, so we can reuse the
+ * first page of the tail struct pages. We map the virtual addresses of the
+ * remaining pages of tail struct pages to the first tail page struct, and
+ * then free these page frames. Therefore, we need to reserve two pages as
+ * vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR             2U
+#define RESERVE_VMEMMAP_SIZE           (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+       return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+       unsigned long vmemmap_addr = (unsigned long)head;
+       unsigned long vmemmap_end, vmemmap_reuse;
+
+       if (!free_vmemmap_pages_per_hpage(h))
+               return;
+
+       vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+       vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+       vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+       vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..0e9c49a028b4 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,190 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/**
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:         called for each non-empty PTE (lowest-level) entry.
+ * @reuse_page:                the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:                the virtual address of the @reuse_page page.
+ * @vmemmap_pages:     the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+       void (*remap_pte)(pte_t *pte, unsigned long addr,
+                         struct vmemmap_remap_walk *walk);
+       struct page *reuse_page;
+       unsigned long reuse_addr;
+       struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+                             unsigned long end,
+                             struct vmemmap_remap_walk *walk)
+{
+       pte_t *pte;
+
+       pte = pte_offset_kernel(pmd, addr);
+
+       /*
+        * The routine of vmemmap page table walking has the following rules:
+        *
+        * - reuse address is part of the range that we are walking.
+        * - reuse_page is found 'first' in table walk before we start
+        *   remapping (which is calling @walk->remap_pte).
+        */
+       if (walk->reuse_addr == addr) {
+               BUG_ON(pte_none(*pte));
+
+               walk->reuse_page = pte_page(*pte++);
+               addr += PAGE_SIZE;
+       }
+
+       for (; addr != end; addr += PAGE_SIZE, pte++) {
+               BUG_ON(pte_none(*pte));
+
+               walk->remap_pte(pte, addr, walk);
+       }
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+                             unsigned long end,
+                             struct vmemmap_remap_walk *walk)
+{
+       pmd_t *pmd;
+       unsigned long next;
+
+       pmd = pmd_offset(pud, addr);
+       do {
+               BUG_ON(pmd_none(*pmd));
+
+               next = pmd_addr_end(addr, end);
+               vmemmap_pte_range(pmd, addr, next, walk);
+       } while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+                             unsigned long end,
+                             struct vmemmap_remap_walk *walk)
+{
+       pud_t *pud;
+       unsigned long next;
+
+       pud = pud_offset(p4d, addr);
+       do {
+               BUG_ON(pud_none(*pud));
+
+               next = pud_addr_end(addr, end);
+               vmemmap_pmd_range(pud, addr, next, walk);
+       } while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+                             unsigned long end,
+                             struct vmemmap_remap_walk *walk)
+{
+       p4d_t *p4d;
+       unsigned long next;
+
+       p4d = p4d_offset(pgd, addr);
+       do {
+               BUG_ON(p4d_none(*p4d));
+
+               next = p4d_addr_end(addr, end);
+               vmemmap_pud_range(p4d, addr, next, walk);
+       } while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+                               struct vmemmap_remap_walk *walk)
+{
+       unsigned long addr = start;
+       unsigned long next;
+       pgd_t *pgd;
+
+       VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+       VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+       pgd = pgd_offset_k(addr);
+       do {
+               BUG_ON(pgd_none(*pgd));
+
+               next = pgd_addr_end(addr, end);
+               vmemmap_p4d_range(pgd, addr, next, walk);
+       } while (pgd++, addr = next, addr != end);
+
+       flush_tlb_kernel_range(start, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or the buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator; free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+       if (PageReserved(page))
+               free_bootmem_page(page);
+       else
+               __free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+       struct page *page, *next;
+
+       list_for_each_entry_safe(page, next, list, lru) {
+               list_del(&page->lru);
+               free_vmemmap_page(page);
+       }
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+                             struct vmemmap_remap_walk *walk)
+{
+       /*
+        * Remap the tail pages as read-only to catch illegal write operations
+        * to the tail pages.
+        */
+       pgprot_t pgprot = PAGE_KERNEL_RO;
+       pte_t entry = mk_pte(walk->reuse_page, pgprot);
+       struct page *page = pte_page(*pte);
+
+       list_add(&page->lru, walk->vmemmap_pages);
+       set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *                     to the page which @reuse is mapped to, then free vmemmap
+ *                     pages.
+ * @start:     start address of the vmemmap virtual address range.
+ * @end:       end address of the vmemmap virtual address range.
+ * @reuse:     reuse address.
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+                       unsigned long reuse)
+{
+       LIST_HEAD(vmemmap_pages);
+       struct vmemmap_remap_walk walk = {
+               .remap_pte      = vmemmap_remap_pte,
+               .reuse_addr     = reuse,
+               .vmemmap_pages  = &vmemmap_pages,
+       };
+
+       BUG_ON(start != reuse + PAGE_SIZE);
+
+       vmemmap_remap_range(reuse, end, &walk);
+       free_vmemmap_page_list(&vmemmap_pages);
+}

 /*
  * Allocate a block of memory to be used to back the virtual memory map
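To see the interface in action: for a pmd-mapped 2MB HugeTLB page on x86-64,
whose 512 struct pages occupy 8 vmemmap pages, the caller in this patch
(free_huge_page_vmemmap() above, once free_vmemmap_pages_per_hpage() returns
6) effectively issues the call sketched here; the concrete numbers are for
illustration only.

    /*
     * Illustration only: one 2MB HugeTLB page, head struct page at "head".
     * Vmemmap pages 0 and 1 are kept (RESERVE_VMEMMAP_NR == 2); pages 2..7
     * are remapped onto page 1 (the reuse page) and then freed.
     */
    unsigned long addr  = (unsigned long)head;
    unsigned long start = addr + 2 * PAGE_SIZE;  /* first remappable page */
    unsigned long end   = addr + 8 * PAGE_SIZE;  /* one past the last one */
    unsigned long reuse = addr + PAGE_SIZE;      /* start == reuse + PAGE_SIZE */

    vmemmap_remap_free(start, end, reuse);       /* frees 6 page frames */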
From patchwork Tue Dec 22 14:24:33 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v11 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
Date: Tue, 22 Dec 2020 22:24:33 +0800
Message-Id: <20201222142440.28930-5-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

In a subsequent patch, we will allocate vmemmap pages when freeing
HugeTLB pages. But update_and_free_page() is always called while holding
hugetlb_lock, so we cannot use GFP_KERNEL to allocate vmemmap pages
there. Instead, we can defer the actual freeing to a workqueue to avoid
having to allocate the vmemmap pages with GFP_ATOMIC. __free_hugepage()
is where the call to allocate vmemmap pages will be inserted.
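The mapping-pointer reuse can be hard to see in diff form, so here is a
hedged, self-contained sketch of the same lockless hand-off pattern. The
names are hypothetical; llist_add(), llist_del_all(), and schedule_work()
are the kernel APIs the patch itself uses.

    #include <linux/llist.h>
    #include <linux/workqueue.h>

    /* Hypothetical item type standing in for struct page in the patch. */
    struct deferred_item {
            struct llist_node node;
    };

    static LLIST_HEAD(deferred_list);

    static void deferred_workfn(struct work_struct *work)
    {
            /* Grab the whole batch atomically, then walk it unlocked. */
            struct llist_node *node = llist_del_all(&deferred_list);

            while (node) {
                    struct deferred_item *item =
                            llist_entry(node, struct deferred_item, node);

                    node = node->next;
                    /* ... free "item"; GFP_KERNEL allocations are fine here ... */
            }
    }
    static DECLARE_WORK(deferred_work, deferred_workfn);

    static void defer_free(struct deferred_item *item)
    {
            /*
             * llist_add() returns true only when the list was previously
             * empty, so the work is scheduled exactly once per batch --
             * the same trick __update_and_free_page() plays below.
             */
            if (llist_add(&item->node, &deferred_list))
                    schedule_work(&deferred_work);
    }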
Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 mm/hugetlb.c         | 80 ++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 +++++++++++
 3 files changed, 91 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 140135fc8113..fc45a900acf1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,15 +1292,79 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * Because update_and_free_page() is always called while holding
+ * hugetlb_lock, we cannot use GFP_KERNEL to allocate vmemmap pages.
+ * However, we can defer the actual freeing to a workqueue so that we
+ * do not have to allocate the vmemmap pages with GFP_ATOMIC.
+ *
+ * __free_hugepage() is where the call to allocate vmemmap pages will
+ * be inserted.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping
+ * pointer is going to be cleared in update_hpage_vmemmap_workfn() anyway,
+ * it is reused as the llist_node of a lockless linked list of huge pages
+ * to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist was previously
+	 * empty. Otherwise, schedule_work() has already been called but the
+	 * workfn hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1313,13 +1377,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
 		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
+		 * Temporarily drop the hugetlb_lock, because we might block
+		 * in free_gigantic_page(). Only drop the lock when the
+		 * vmemmap optimization is disabled: when it is enabled, this
+		 * runs from the workqueue, which does not hold the lock.
		 */
-		spin_unlock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4ffa2a4ae2a8..19f1898aaede 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,18 +178,6 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */

From patchwork Tue Dec 22 14:24:34 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986761
From: Muchun Song
Subject: [PATCH v11 05/11] mm/hugetlb: Allocate the vmemmap
 pages associated with each HugeTLB page
Date: Tue, 22 Dec 2020 22:24:34 +0800
Message-Id: <20201222142440.28930-6-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we should allocate
the vmemmap pages associated with it. We can do that in
__free_hugepage(), before the page is handed back to the buddy.

Signed-off-by: Muchun Song
---
 include/linux/mm.h   |  2 ++
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 15 +++++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 76 +++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f928994ed273..16b55d13b0ab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3007,6 +3007,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 
 void vmemmap_remap_free(unsigned long start, unsigned long end,
 			unsigned long reuse);
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			 unsigned long reuse);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fc45a900acf1..ab6d2eabfea8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1365,6 +1365,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 19f1898aaede..6108ae80314f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -183,6 +183,21 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+	vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0e9c49a028b4..ed4702d5d664 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -40,7 +41,8 @@
  * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed,
+ *			or that pages are remapped from.
  */
 struct vmemmap_remap_walk {
 	void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
 	struct list_head *vmemmap_pages;
 };
 
+/* The GFP mask used when allocating vmemmap pages */
+#define GFP_VMEMMAP_PAGE	\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end,
 			      struct vmemmap_remap_walk *walk)
@@ -212,6 +218,74 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+				    unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		struct page *page;
+		int nid = page_to_nid((const void *)addr);
+
+retry:
+		page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry indefinitely, because we cannot
+			 * handle allocation failures. Only once the vmemmap
+			 * pages are allocated can the HugeTLB page be freed.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
+ *			 to the pages taken from @vmemmap_pages.
+ * @start:	start address of the vmemmap virtual address range.
+ * @end:	end address of the vmemmap virtual address range.
+ * @reuse:	reuse address.
+ */
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			 unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	might_sleep();
+
+	BUG_ON(start != reuse + PAGE_SIZE);
+
+	alloc_vmemmap_page_list(&vmemmap_pages, start, end);
+	vmemmap_remap_range(reuse, end, &walk);
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
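To make the address arithmetic in alloc_huge_page_vmemmap() concrete, here
is a worked example for a 2 MB HugeTLB page, assuming 4 KB base pages and a
64-byte struct page (both are configuration-dependent assumptions):

	/*
	 * pages_per_huge_page  = 2 MB / 4 KB = 512
	 * vmemmap size         = 512 * 64 B  = 32 KB = 8 base pages
	 * RESERVE_VMEMMAP_NR   = 2 (the head page and the first tail page)
	 * freeable             = 8 - 2       = 6 pages
	 *
	 * alloc_huge_page_vmemmap() then computes:
	 *   vmemmap_addr  = (unsigned long)head + 2 * PAGE_SIZE
	 *   vmemmap_end   = vmemmap_addr + 6 * PAGE_SIZE
	 *   vmemmap_reuse = vmemmap_addr - PAGE_SIZE   (the shared tail page)
	 * which satisfies the BUG_ON(start != reuse + PAGE_SIZE) invariant in
	 * both vmemmap_remap_alloc() and vmemmap_remap_free().
	 */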
From patchwork Tue Dec 22 14:24:35 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986763
From: Muchun Song
Subject: [PATCH v11 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Tue, 22 Dec 2020 22:24:35 +0800
Message-Id: <20201222142440.28930-7-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

Because we reuse the first tail vmemmap page frame and remap it read-only,
we cannot set the PageHWPoison flag on a tail page. Instead, we use
head[4].private to record the index of the real error page, and set
PageHWPoison on that raw error page later.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ab6d2eabfea8..dbf4e8eeeff1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1350,6 +1350,43 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages other than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1366,6 +1403,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	hwpoison_subpage_deliver(h, page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1843,14 +1881,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		hwpoison_subpage_set(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
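A short, hedged illustration of why the index is stashed instead of setting
the flag directly: after free_huge_page_vmemmap(), all tail struct pages
share one read-only vmemmap page, so writing a tail page's flags would
fault. head + 4 is still safe to write because, with a 64-byte struct page,
struct pages 0-63 live in the first, still-writable vmemmap page (an
assumption about the layout). The intended ordering is then:

	/* Sketch of the ordering; function names are from this series. */
	hwpoison_subpage_set(h, head, page);	/* dissolve path: record offset */
		/* ... later, in the deferred free path ... */
	alloc_huge_page_vmemmap(h, head);	/* vmemmap is writable again */
	hwpoison_subpage_deliver(h, head);	/* SetPageHWPoison(raw page) is now safe */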
From patchwork Tue Dec 22 14:24:36 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986765
From: Muchun Song
Subject: [PATCH v11 07/11] mm/hugetlb: Flush work when dissolving a HugeTLB page
Date: Tue, 22 Dec 2020 22:24:36 +0800
Message-Id: <20201222142440.28930-8-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>
We should flush the work when dissolving a HugeTLB page, to make sure that
the HugeTLB page has actually been freed to the buddy allocator: the caller
of dissolve_free_huge_pages() relies on this guarantee.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dbf4e8eeeff1..3e6aa6cc1f3e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1329,6 +1329,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1864,6 +1870,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1877,8 +1884,9 @@ int dissolve_free_huge_page(struct page *page)
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1892,6 +1900,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush the work before returning to make sure that
+	 * the HugeTLB page is freed to the buddy allocator.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
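For context, a hedged sketch of the ordering this relies on: flush_work()
does not return until the last queued instance of the work item has
finished executing, so once dissolve_free_huge_page() returns 0 the
deferred __free_hugepage() for this page has completed.

	/*
	 * Timeline sketch (illustrative, not part of the patch):
	 *
	 *   dissolve_free_huge_page()
	 *     update_and_free_page()
	 *       llist_add(page, hpage_update_freelist)  ->  work queued
	 *     spin_unlock(&hugetlb_lock)
	 *     flush_hpage_update_work(h)                ->  waits for workfn:
	 *         update_hpage_vmemmap_workfn()
	 *           __free_hugepage()                   ->  page reaches buddy
	 *     return 0                                  ->  caller may rely on it
	 */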
From patchwork Tue Dec 22 14:24:37 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986767
From: Muchun Song
Subject: [PATCH v11 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Tue, 22 Dec 2020 22:24:37 +0800
Message-Id: <20201222142440.28930-9-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>
Add a kernel parameter hugetlb_free_vmemmap to enable, at boot time, the
freeing of unused vmemmap pages associated with each HugeTLB page.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
 5 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..44dde9be7e00 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page. When this option is enabled,
+			PMD/huge-page mapping of the vmemmap pages is
+			disabled, which increases the number of page table
+			pages. So if a user/sysadmin only uses a small number
+			of HugeTLB pages (as a percentage of system memory),
+			they could end up using more memory with
+			hugetlb_free_vmemmap on as opposed to off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
 
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
+
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled() ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6108ae80314f..8206978d1679 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -166,6 +166,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -178,6 +180,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
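As a usage illustration (a hypothetical command line; hugepagesz= and
hugepages= are the existing HugeTLB boot parameters, and the savings
figures assume 4 KB base pages and a 64-byte struct page):

	# Hypothetical boot command line enabling the optimization for a
	# 2 MB pool:
	#   ... hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024 ...
	#
	# Under the stated assumptions, each 2 MB HugeTLB page gives back
	# 6 vmemmap pages (24 KB), roughly 24 MB for this pool, at the cost
	# of mapping the vmemmap with base pages instead of PMDs.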
From patchwork Tue Dec 22 14:24:38 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986769
From: Muchun Song
Subject: [PATCH v11 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Tue, 22 Dec 2020 22:24:38 +0800
Message-Id: <20201222142440.28930-10-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with a
HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the
feature.
Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 25 +++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3e6aa6cc1f3e..f19e841236d7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3326,6 +3326,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 8206978d1679..7dcb4aa1e512 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -236,3 +236,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy allocator; the other pages will map to the first tail page,
+	 * so they can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b2c8d2f11d48..8fd9ae113dbd 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -38,5 +36,9 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
 }
+
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
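Given the pr_info() added above (and the "HugeTLB: " pr_fmt prefix from the
previous patch), one would expect boot output along these lines on x86-64
with the parameter enabled — a hedged example; the exact counts depend on
sizeof(struct page):

	HugeTLB: can free 6 vmemmap pages for hugepages-2048kB
	HugeTLB: can free 4094 vmemmap pages for hugepages-1048576kB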
From patchwork Tue Dec 22 14:24:39 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986771
From: Muchun Song
Subject: [PATCH v11 10/11] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 22 Dec 2020 22:24:39 +0800
Message-Id: <20201222142440.28930-11-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save than fits in the head
struct page, so we have to reuse several tail struct pages to store it. To
avoid conflicts caused by subsequent uses of more tail struct pages, gather
these discrete tail-page indexes into an enum; this makes it easier to add
a new tail-page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page
structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so
add a BUILD_BUG_ON to catch invalid usage of the tail struct pages.
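The compile-time guard described above presumably takes this form (a
sketch; its exact placement in mm/hugetlb_vmemmap.c is not visible in the
truncated diff below): with only RESERVE_VMEMMAP_SIZE bytes of struct pages
kept writable after the vmemmap remap, every tail-page index must stay
below RESERVE_VMEMMAP_SIZE / sizeof(struct page).

	/* Hedged sketch of the guard the commit message describes: */
	BUILD_BUG_ON(NR_USED_SUBPAGE >=
		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));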
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 35 +++++++++++++++++++++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 57 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER 2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 
 	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f19e841236d7..2764af5fa0b3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1356,6 +1356,7 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 	schedule_work(&hpage_update_work);
 }
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 {
 	struct page *page;
@@ -1363,7 +1364,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1382,7 +1383,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON,
+				 page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
 		 */
 		SetPageHWPoison(page);
 		ClearPageHWPoison(head);
 	}
 }
@@ -1392,6 +1393,24 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		ClearPageHWPoison(head);
 	}
 }
+#else
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (PageHWPoison(head) && page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages rather than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+#endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
@@ -1459,20 +1478,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1484,17 +1503,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7dcb4aa1e512..6b8f7bb2273e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -242,6 +242,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
+	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;

From patchwork Tue Dec 22 14:24:40 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11986773
From: Muchun Song
Subject: [PATCH v11 11/11] mm/hugetlb: Optimize the code with the help of the compiler
Date: Tue, 22 Dec 2020 22:24:40 +0800
Message-Id: <20201222142440.28930-12-songmuchun@bytedance.com>
In-Reply-To: <20201222142440.28930-1-songmuchun@bytedance.com>
References: <20201222142440.28930-1-songmuchun@bytedance.com>

The vmemmap pages cannot be freed when a struct page crosses page
boundaries, i.e. when sizeof(struct page) is not a power of 2. Since
is_power_of_2(sizeof(struct page)) is a compile-time constant, the
compiler can help us here: when it makes free_vmemmap_pages_per_hpage()
constant-fold to zero, most of these functions are optimized away as
dead code.
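As a rough sketch of the constant-folding idea (plain user-space C rather
than kernel code: struct page, is_power_of_2() and
free_vmemmap_pages_per_hpage() below are simplified stand-ins, and the
hstate argument and real page counts are dropped):

#include <stdio.h>

/* Simplified stand-in for the kernel's struct page. */
struct page {
	unsigned long flags;
	void *mapping;
	unsigned long private;
};

/* Stand-in for the kernel's is_power_of_2(); with a sizeof() argument
 * the whole expression is a compile-time constant. */
static inline int is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirrors the shape of the patched free_vmemmap_pages_per_hpage(): when
 * sizeof(struct page) is not a power of 2 this constant-folds to 0, so
 * every caller guarded by it becomes dead code. (6 is just an example
 * page count.) */
static inline unsigned int free_vmemmap_pages_per_hpage(void)
{
	return is_power_of_2(sizeof(struct page)) ? 6 : 0;
}

static void free_huge_page_vmemmap(void)
{
	if (!free_vmemmap_pages_per_hpage())
		return;	/* with a constant 0, the compiler drops the body */
	printf("freeing %u vmemmap pages\n", free_vmemmap_pages_per_hpage());
}

int main(void)
{
	free_huge_page_vmemmap();
	return 0;
}

In this model sizeof(struct page) is 24 on a typical 64-bit build, so
free_vmemmap_pages_per_hpage() is the constant 0 and
free_huge_page_vmemmap() compiles to an empty function; the kernel's
struct page is typically 64 bytes and takes the other branch.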
Signed-off-by: Muchun Song
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 5 +++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7295f6b3d55e..adc17765e0e9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -791,7 +791,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6b8f7bb2273e..5ea12c7507a6 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -250,6 +250,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can help us to optimize this function to null
+	 * when the size of the struct page is not power of 2.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd9ae113dbd..e8de41295d4d 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -17,11 +17,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
+ * to the buddy allocator. The checking of the is_power_of_2() aims to let
+ * the compiler help us optimize the code as much as possible.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return is_power_of_2(sizeof(struct page)) ? h->nr_free_vmemmap_pages : 0;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)