From patchwork Thu Dec 17 12:12:53 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
Date: Thu, 17 Dec 2020 20:12:53 +0800
Message-Id: <20201217121303.13386-2-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

Move the bootmem info registration common API into the new file
bootmem_info.c. A later patch will use {get,put}_page_bootmem() to
initialize the page structs for vmemmap pages and to free vmemmap pages
back to the buddy allocator, so also move these helpers out of
CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.
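To illustrate the refcounting convention behind these helpers, here is a
minimal sketch (not part of this patch; the wrapper names
mark_memmap_bootmem/release_memmap_bootmem are hypothetical) of how a
caller tags a run of memmap pages and later releases them:

	#include <linux/bootmem_info.h>

	/* Tag each page backing a section's memmap with its section number. */
	static void mark_memmap_bootmem(struct page *page, unsigned long nr,
					unsigned long section_nr)
	{
		unsigned long i;

		for (i = 0; i < nr; i++, page++)
			get_page_bootmem(section_nr, page, SECTION_INFO);
	}

	/* Drop the bootmem reference; the last put frees the page to buddy. */
	static void release_memmap_bootmem(struct page *page, unsigned long nr)
	{
		unsigned long i;

		for (i = 0; i < nr; i++, page++)
			put_page_bootmem(page);
	}

The loops mirror register_page_bootmem_info_section() below: each
get_page_bootmem() bumps the page refcount, and put_page_bootmem() hands
the page to free_reserved_page() once the count drops back.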
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 --------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 
 #include
 #include
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;	\
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory.
	 */
@@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
@@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN. To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN. To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include

From patchwork Thu Dec 17 12:12:54 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Thu, 17 Dec 2020 20:12:54 +0800
Message-Id: <20201217121303.13386-3-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing of
unnecessary vmemmap associated with HugeTLB pages. The config option is
introduced early so that supporting code can be written to depend on it.
The initial version of the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on
HAVE_BOOTMEM_INFO_NODE. The routine register_page_bootmem_info() is used
to register bootmem info, so make sure register_page_bootmem_info() is
enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.
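To make the numbers in the Kconfig help text below concrete, here is a
small userspace sketch (not part of the patch) that reproduces the
arithmetic, assuming 4KB base pages, a 64-byte struct page, and the two
reserved vmemmap pages described later in this series:

	#include <stdio.h>

	#define PAGE_SIZE		4096UL
	#define STRUCT_PAGE_SIZE	64UL	/* common x86-64 config */
	#define RESERVED_VMEMMAP_PAGES	2UL	/* head structs + reuse page */

	static void report(const char *name, unsigned long hpage_size)
	{
		unsigned long nr_struct_pages = hpage_size / PAGE_SIZE;
		unsigned long vmemmap_pages =
			nr_struct_pages * STRUCT_PAGE_SIZE / PAGE_SIZE;

		printf("%s: %lu vmemmap pages, %lu freeable\n",
		       name, vmemmap_pages,
		       vmemmap_pages - RESERVED_VMEMMAP_PAGES);
	}

	int main(void)
	{
		report("2MB", 2UL << 20);	/* 8 vmemmap pages, 6 freeable */
		report("1GB", 1UL << 30);	/* 4096 vmemmap pages, 4094 freeable */
		return 0;
	}

The output matches the help text: 6 pages saved per 2MB HugeTLB page and
4094 pages saved per 1GB HugeTLB page.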
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..e7c4c2a79311 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,24 @@ config HUGETLBFS
 
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
+	  some vmemmap pages associated with pre-allocated HugeTLB pages.
+	  For example, on X86_64 6 vmemmap pages of size 4KB each can be
+	  saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB
+	  each can be saved for each 1GB HugeTLB page.
+
+	  When a HugeTLB page is allocated or freed, the vmemmap array
+	  representing the range associated with the page will need to be
+	  remapped. When a page is allocated, vmemmap pages are freed
+	  after remapping. When a page is freed, previously discarded
+	  vmemmap pages must be allocated before remapping.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS

From patchwork Thu Dec 17 12:12:55 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Date: Thu, 17 Dec 2020 20:12:55 +0800
Message-Id: <20201217121303.13386-4-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>
Every HugeTLB page is described by more than one struct page structure,
but we know that only the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page
structs are used to store metadata associated with the HugeTLB page.
For all tail pages, the value of compound_head is the same, so we can
reuse the first page of the tail page structures: map the virtual
addresses of the remaining tail page structures to that first tail page
struct, and then free the backing page frames. Therefore, we need to
reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free
some vmemmap pages associated with it; the most appropriate place to do
this is prep_new_huge_page(). free_vmemmap_pages_per_hpage(), which
indicates how many vmemmap pages associated with a HugeTLB page can be
freed, returns zero for now, which means the feature is disabled. We
will enable it once all the infrastructure is there.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   2 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 207 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 +++++
 mm/sparse-vmemmap.c          | 177 ++++++++++++++++++++++++++++++++++++
 7 files changed, 436 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..4c80b7be1771 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H
 
-#include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free them to the buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * The reserve_bootmem_region sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_WARN_ON(page_ref_count(page) != 2);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_WARN_ON(1);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 				    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eabe7d9f80d8..0ecad1a41190 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3005,6 +3005,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+void vmemmap_remap_free(unsigned long start, unsigned long size);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o
 obj-$(CONFIG_ZSWAP) += zswap.o
 obj-$(CONFIG_HAS_DMA) += dmapool.o
 obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..5cf7b6122c86
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be
+ * returned to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table shows the HugeTLB page sizes supported by the x86 and arm64
+ * architectures. Because arm64 supports 4K, 16K, and 64K base pages and
+ * supports contiguous entries, it supports many HugeTLB page sizes.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |   32MB    |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |    1GB    |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |   512MB   |   16GB    |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page is backed by more than one
+ * struct page struct; their total size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of a HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship:
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mappings at the pud/pmd level for the HugeTLB page.
+ *
+ * For a HugeTLB page with a pmd level mapping:
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is the number of pte entries that one page can contain, i.e.
+ * (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit systems, so the value of
+ * sizeof(pte_t) is 8. And this optimization is applicable only when the size
+ * of struct page is a power of two. In most cases, the size of struct page is
+ * 64 bytes (e.g. x86-64 and arm64). So if we use a pmd level mapping for a
+ * HugeTLB page, its struct page structs occupy 8 pages, whose size depends on
+ * the size of the base page.
+ *
+ * For a HugeTLB page with a pud level mapping:
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where struct_size(pmd) is the size of the struct page structs of a HugeTLB
+ * page with a pmd level mapping.
+ *
+ * Next, we take the pmd level mapping of a HugeTLB page as an example to show
+ * the internal implementation of this optimization. There are 8 pages of page
+ * structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ * HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of the page structs (page 0) associated with the HugeTLB page contains
+ * the 4 page structs necessary to describe the HugeTLB. The only use of the
+ * remaining pages of page structs (page 1 to page 7) is to point to
+ * page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
+ * pages of page structs will be used for each HugeTLB page. This will allow
+ * us to free the remaining 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ * HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For a HugeTLB page with a pud level mapping, the situation is similar. We
+ * can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from HugeTLB pages with pmd/pud level mappings, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when the
+ * size of its struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB
+ * page. For tail pages, the value of compound_head is the same, so we can
+ * reuse the first page of the tail page structures. We map the virtual
+ * addresses of the remaining pages of the tail page structures to the first
+ * tail page struct, and then free these page frames. Therefore, we need to
+ * reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR	2U
+#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed to the
+ * buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			   free_vmemmap_pages_size_per_hpage(h));
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..6cf2fdfb81e9 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,185 @@
 #include
 #include
 #include
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include
 #include
+#include <asm/tlbflush.h>
+
+/*
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+
+	if (walk->reuse_addr == addr) {
+		BUG_ON(pte_none(*pte));
+		walk->reuse_page = pte_page(*pte++);
+		addr += PAGE_SIZE;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		BUG_ON(pte_none(*pte));
+
+		walk->remap_pte(pte, addr, walk);
+	}
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start - PAGE_SIZE;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	walk->reuse_page = NULL;
+	walk->reuse_addr = addr;
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or the buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator; just free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Make the tail pages mapped read-only to catch illegal write
+	 * operations to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page;
+
+	page = pte_page(*pte);
+	list_add(&page->lru, walk->vmemmap_pages);
+
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range
+ *                      [start, start + size) to the page which
+ *                      [start - PAGE_SIZE, start) is mapped,
+ *                      then free vmemmap pages.
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long size)
+{
+	unsigned long end = start + size;
+	LIST_HEAD(vmemmap_pages);
+
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	vmemmap_remap_range(start, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map

From patchwork Thu Dec 17 12:12:56 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
Date: Thu, 17 Dec 2020 20:12:56 +0800
Message-Id: <20201217121303.13386-5-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

A subsequent patch will allocate the vmemmap pages when HugeTLB pages
are freed. However, update_and_free_page() may be called from a
non-task context (and with hugetlb_lock held), so defer the actual
freeing to a workqueue to avoid having to allocate the vmemmap pages
with GFP_ATOMIC.
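The deferral machinery added below combines a lockless llist with a
workqueue. Here is a condensed, self-contained sketch of the same
pattern (illustrative names, not the patch's code; the patch overlays
the llist node on page->mapping instead of embedding one):

	#include <linux/llist.h>
	#include <linux/workqueue.h>

	static LLIST_HEAD(pending_free_list);

	static void deferred_free_workfn(struct work_struct *work)
	{
		/* Grab the whole batch atomically; no locks needed. */
		struct llist_node *node = llist_del_all(&pending_free_list);

		while (node) {
			struct llist_node *next = node->next;

			/*
			 * Convert the node back to its container and free it
			 * here, in task context, where allocations may sleep.
			 */
			node = next;
		}
	}
	static DECLARE_WORK(deferred_free_work, deferred_free_workfn);

	static void defer_free(struct llist_node *entry)
	{
		/*
		 * llist_add() returns true only when the list was empty, so
		 * the work is scheduled once per batch rather than once per
		 * deferred item.
		 */
		if (llist_add(entry, &pending_free_list))
			schedule_work(&deferred_free_work);
	}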
Signed-off-by: Muchun Song Reviewed-by: Mike Kravetz --- mm/hugetlb.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++---- mm/hugetlb_vmemmap.c | 12 -------- mm/hugetlb_vmemmap.h | 17 +++++++++++ 3 files changed, 91 insertions(+), 18 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 140135fc8113..9f35f34d3195 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1292,15 +1292,79 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif -static void update_and_free_page(struct hstate *h, struct page *page) +static void __free_hugepage(struct hstate *h, struct page *page); + +/* + * As update_and_free_page() is be called from a non-task context(and hold + * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent + * use GFP_ATOMIC to allocate a lot of vmemmap pages. + * + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of + * pages to be freed and frees them one-by-one. As the page->mapping pointer + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is + * reused as the llist_node structure of a lockless linked list of huge + * pages to be freed. + */ +static LLIST_HEAD(hpage_update_freelist); + +static void update_hpage_vmemmap_workfn(struct work_struct *work) { - int i; + struct llist_node *node; + struct page *page; + node = llist_del_all(&hpage_update_freelist); + + while (node) { + page = container_of((struct address_space **)node, + struct page, mapping); + node = node->next; + page->mapping = NULL; + __free_hugepage(page_hstate(page), page); + + cond_resched(); + } +} +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn); + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + /* No need to allocate vmemmap pages */ + if (!free_vmemmap_pages_per_hpage(h)) { + __free_hugepage(h, page); + return; + } + + /* + * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap + * pages. + * + * Only call schedule_work() if hpage_update_freelist is previously + * empty. Otherwise, schedule_work() had been called but the workfn + * hasn't retrieved the list yet. + */ + if (llist_add((struct llist_node *)&page->mapping, + &hpage_update_freelist)) + schedule_work(&hpage_update_work); +} + +static void update_and_free_page(struct hstate *h, struct page *page) +{ if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) return; h->nr_huge_pages--; h->nr_huge_pages_node[page_to_nid(page)]--; + + __update_and_free_page(h, page); +} + +/* + * This is where the call to allocate vmemmmap pages will be inserted. + */ +static void __free_hugepage(struct hstate *h, struct page *page) +{ + int i; + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | @@ -1313,13 +1377,17 @@ static void update_and_free_page(struct hstate *h, struct page *page) set_page_refcounted(page); if (hstate_is_gigantic(h)) { /* - * Temporarily drop the hugetlb_lock, because - * we might block in free_gigantic_page(). + * Temporarily drop the hugetlb_lock only when this type of + * HugeTLB page does not support vmemmap optimization (which + * context do not hold the hugetlb_lock), because we might + * block in free_gigantic_page(). 
 		 */
-		spin_unlock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 5cf7b6122c86..c4bbca270453 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,18 +178,6 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
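The deferral in this patch is the standard lockless-list plus workqueue
idiom: producers push entries onto an llist and schedule the work only
when the list transitions from empty, and the work function drains the
whole list at once. A minimal, self-contained sketch of the idiom
(hypothetical names, not the hugetlb code itself):

	#include <linux/llist.h>
	#include <linux/sched.h>
	#include <linux/workqueue.h>

	struct deferred_item {
		struct llist_node node;
		/* payload ... */
	};

	static LLIST_HEAD(deferred_list);

	static void deferred_workfn(struct work_struct *work)
	{
		/* Atomically take ownership of everything queued so far. */
		struct llist_node *node = llist_del_all(&deferred_list);

		while (node) {
			struct deferred_item *item =
				llist_entry(node, struct deferred_item, node);

			node = node->next;
			/* ... process the item in task context ... */
			cond_resched();
		}
	}
	static DECLARE_WORK(deferred_work, deferred_workfn);

	static void defer_item(struct deferred_item *item)
	{
		/* llist_add() returns true only if the list was empty. */
		if (llist_add(&item->node, &deferred_list))
			schedule_work(&deferred_work);
	}

The twist in the patch is that a free HugeTLB page has no spare
llist_node, so the page->mapping field (which is about to be cleared
anyway) is reused as one.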
From patchwork Thu Dec 17 12:12:57 2020

From: Muchun Song
Subject: [PATCH v10 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
Date: Thu, 17 Dec 2020 20:12:57 +0800
Message-Id: <20201217121303.13386-6-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we should allocate
the vmemmap pages associated with it. We can do that in
__free_hugepage() before releasing it to the buddy allocator.

Signed-off-by: Muchun Song
---
 include/linux/mm.h   |  1 +
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 11 ++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 73 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0ecad1a41190..e7d022b67ee1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3006,6 +3006,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 #endif
 
 void vmemmap_remap_free(unsigned long start, unsigned long size);
+void vmemmap_remap_alloc(unsigned long start, unsigned long size);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9f35f34d3195..329f473b929e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1365,6 +1365,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c4bbca270453..273816dd95b6 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -183,6 +183,17 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_alloc(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			    free_vmemmap_pages_size_per_hpage(h));
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 6cf2fdfb81e9..01228065b95d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -40,7 +41,8 @@
  * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			or that are mapped from.
  */
 struct vmemmap_remap_walk {
 	void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
 	struct list_head *vmemmap_pages;
 };
 
+/* The GFP mask used when allocating a vmemmap page */
+#define GFP_VMEMMAP_PAGE		\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end,
 			      struct vmemmap_remap_walk *walk)
@@ -207,6 +213,71 @@ void vmemmap_remap_free(unsigned long start, unsigned long size)
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+				    unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+	int nid = page_to_nid((const void *)start);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		struct page *page;
+
+retry:
+		page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We have to retry forever, because we cannot
+			 * handle allocation failures here. Only once all
+			 * the vmemmap pages are allocated successfully can
+			 * we free the HugeTLB page.
+			 */
+			goto retry;
+		}
+
+		list_add_tail(&page->lru, list);
+	}
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range
+ *			 [@start, @start + @size) so that each address is
+ *			 backed by a page taken from @vmemmap_pages
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_alloc(unsigned long start, unsigned long size)
+{
+	LIST_HEAD(vmemmap_pages);
+	unsigned long end = start + size;
+
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	might_sleep();
+
+	alloc_vmemmap_page_list(&vmemmap_pages, start, end);
+	vmemmap_remap_range(start, end, &walk);
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
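alloc_vmemmap_page_list() is not allowed to fail, so it retries with a
short sleep between attempts. Distilled to its core, the loop is the
following idiom (hypothetical helper name, sketch only):

	#include <linux/delay.h>
	#include <linux/gfp.h>

	/* Allocate one page on @nid, sleeping and retrying until it succeeds. */
	static struct page *alloc_page_node_retry(int nid, gfp_t gfp_mask)
	{
		struct page *page;

		while (!(page = alloc_pages_node(nid, gfp_mask, 0)))
			msleep(100);	/* back off; failure cannot be handled here */

		return page;
	}

As for the GFP mask: __GFP_RETRY_MAYFAIL asks the allocator to try hard
without invoking the OOM killer, __GFP_NOWARN suppresses the allocation
failure warning (failures are expected and retried), and __GFP_THISNODE
keeps the new vmemmap pages on the same node as the struct pages they
will back.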
From patchwork Thu Dec 17 12:12:58 2020
From: Muchun Song
Subject: [PATCH v10 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Thu, 17 Dec 2020 20:12:58 +0800
Message-Id: <20201217121303.13386-7-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set the PageHWPoison flag on a tail page. So use
head[4].private to record the index of the real error page, and set
PageHWPoison on the raw error page later.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 329f473b929e..f15aa9b19b6e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1347,6 +1347,43 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages other than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1366,6 +1403,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	hwpoison_subpage_deliver(h, page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1843,14 +1881,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		hwpoison_subpage_set(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
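The mechanism reduces to stashing a subpage index in a still-writable
tail struct page and consuming it once the vmemmap is writable again. A
stripped-down sketch (hypothetical helper names; the real code uses
head + 4 as the storage slot):

	#include <linux/mm.h>

	/* Remember which subpage is the raw error page. */
	static void record_error_subpage(struct page *head, struct page *err)
	{
		set_page_private(head + 4, err - head);
	}

	/* Later, look the raw error page up again. */
	static struct page *error_subpage(struct page *head)
	{
		return head + page_private(head + 4);
	}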
From patchwork Thu Dec 17 12:12:59 2020

From: Muchun Song
Subject: [PATCH v10 07/11] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Thu, 17 Dec 2020 20:12:59 +0800
Message-Id: <20201217121303.13386-8-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

We should flush the work when dissolving a HugeTLB page to make sure
that the page has actually been freed to the buddy allocator.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f15aa9b19b6e..fea8a96dd718 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1864,6 +1870,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1877,8 +1884,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
@@ -1892,6 +1900,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush the work before returning to make sure that
+	 * the HugeTLB page has been freed to the buddy allocator.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
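Without the flush there would be a window in which
dissolve_free_huge_page() has only queued the page on the workqueue
while the caller (e.g. memory offlining) already assumes it sits in the
buddy allocator. A sketch of the caller-side expectation (hypothetical
caller, for illustration):

	int offline_one_page(struct page *page)
	{
		int rc = dissolve_free_huge_page(page);

		/* On success, the page must no longer be a HugeTLB page. */
		if (!rc)
			WARN_ON(PageHuge(page));
		return rc;
	}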
From patchwork Thu Dec 17 12:13:00 2020

From: Muchun Song
Subject: [PATCH v10 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Thu, 17 Dec 2020 20:13:00 +0800
Message-Id: <20201217121303.13386-9-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each HugeTLB page on boot.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
 5 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..44dde9be7e00 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing of unused vmemmap pages
+			associated with each HugeTLB page. Enabling it
+			disables PMD/huge page mapping of the vmemmap, which
+			increases the number of page table pages; a sysadmin
+			who only uses a small number of HugeTLB pages (as a
+			percentage of system memory) could end up using more
+			memory with hugetlb_free_vmemmap on than off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
 
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled() ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 273816dd95b6..9e9bd458a3f5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,22 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
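With the parameter wired up, enabling the optimization is a boot-time
decision. An illustrative (hypothetical) GRUB configuration, assuming a
kernel built with CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y:

	# /etc/default/grub
	GRUB_CMDLINE_LINUX="hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024"

Any value other than "on" or "off" is rejected with -EINVAL by the
early_param() handler above.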
From patchwork Thu Dec 17 12:13:01 2020

From: Muchun Song
Subject: [PATCH v10 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Thu, 17 Dec 2020 20:13:01 +0800
Message-Id: <20201217121303.13386-10-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

All the infrastructure is ready, so introduce a nr_free_vmemmap_pages
field in struct hstate to indicate how many vmemmap pages associated
with a HugeTLB page can be freed to the buddy allocator, and initialize
it in hugetlb_vmemmap_init(). This patch is the actual enablement of
the feature.

Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 33 +++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fea8a96dd718..6c02f49959fd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3326,6 +3326,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 9e9bd458a3f5..3ebfe1706c77 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -166,6 +166,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -182,6 +184,12 @@ bool hugetlb_free_vmemmap_enabled;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
 	if (!buf)
 		return -EINVAL;
 
@@ -220,3 +228,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
 			   free_vmemmap_pages_size_per_hpage(h));
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy allocator; the other pages are remapped to the first tail
+	 * page, so they can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is
+	 * true on some architectures (e.g. aarch64). See
+	 * Documentation/arm64/hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b2c8d2f11d48..8fd9ae113dbd 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -38,5 +36,9 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
 }
+
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
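As a worked example (assuming x86-64 with 4 KB base pages and a 64-byte
struct page): a 2 MB HugeTLB page spans 512 base pages, whose struct
pages occupy 512 * 64 B = 32 KB, i.e. 8 vmemmap pages, so
nr_free_vmemmap_pages = 8 - RESERVE_VMEMMAP_NR = 6, and the boot log
would read roughly "HugeTLB: can free 6 vmemmap pages for
hugepages-2048kB". For a 1 GB HugeTLB page the same computation gives
262144 * 64 B = 16 MB, i.e. 4096 vmemmap pages, 4094 of which can be
freed.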
From patchwork Thu Dec 17 12:13:02 2020

From: Muchun Song
Subject: [PATCH v10 10/11] mm/hugetlb: Gather discrete indexes of tail page
Date: Thu, 17 Dec 2020 20:13:02 +0800
Message-Id: <20201217121303.13386-11-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save in the struct pages
than the head struct page alone can hold, so we have to reuse several
tail struct pages to store it. In order to avoid conflicts caused by
subsequent uses of more tail struct pages, gather these discrete indexes
of tail struct pages in one enum; this also makes it easier to add a new
tail page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct pages
that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 16 ++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c02f49959fd..78dd88dda857 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1360,7 +1360,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1379,7 +1379,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1459,20 +1459,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
PagePrivate(&page[1]); + return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } /* never called for tail page */ static void set_page_huge_active(struct page *page) { VM_BUG_ON_PAGE(!PageHeadHuge(page), page); - SetPagePrivate(&page[1]); + SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } static void clear_page_huge_active(struct page *page) { VM_BUG_ON_PAGE(!PageHeadHuge(page), page); - ClearPagePrivate(&page[1]); + ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } /* @@ -1484,17 +1484,17 @@ static inline bool PageHugeTemporary(struct page *page) if (!PageHuge(page)) return false; - return (unsigned long)page[2].mapping == -1U; + return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U; } static inline void SetPageHugeTemporary(struct page *page) { - page[2].mapping = (void *)-1U; + page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U; } static inline void ClearPageHugeTemporary(struct page *page) { - page[2].mapping = NULL; + page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL; } static void __free_huge_page(struct page *page) diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 3ebfe1706c77..ad123b760245 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -234,6 +234,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h) unsigned int nr_pages = pages_per_huge_page(h); unsigned int vmemmap_pages; + /* + * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct + * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, + * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page. + */ + BUILD_BUG_ON(NR_USED_SUBPAGE >= + RESERVE_VMEMMAP_SIZE / sizeof(struct page)); + if (!hugetlb_free_vmemmap_enabled) return; From patchwork Thu Dec 17 12:13:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11979733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A75F4C2BB9A for ; Thu, 17 Dec 2020 12:17:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 316D3238EF for ; Thu, 17 Dec 2020 12:17:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 316D3238EF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A38306B0074; Thu, 17 Dec 2020 07:17:47 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9E6FF6B0075; Thu, 17 Dec 2020 07:17:47 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8878E6B0078; Thu, 17 Dec 2020 07:17:47 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0100.hostedemail.com [216.40.44.100]) by kanga.kvack.org (Postfix) with ESMTP id 7092C6B0074 for ; Thu, 17 Dec 2020 07:17:47 -0500 (EST) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by 
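[Editor's sketch] The hwpoison hunks above replace the bare "head + 4"
with the named SUBPAGE_INDEX_HWPOISON slot. A minimal stand-alone model
of that indirection (user-space C with assumed names and sizes, not
kernel code): the offset of the poisoned raw page is parked in a tail
struct page's private field, so it can be recovered later from the head
page alone.

#include <stdio.h>

#define IDX_HWPOISON	4		/* assumed slot number */
#define PAGES_PER_HPAGE	512		/* e.g. a 2 MB HugeTLB page */

struct fake_page {			/* stand-in for struct page */
	unsigned long private;
};

/* Record which raw page inside the huge page is poisoned. */
static void hwpoison_set(struct fake_page *head, struct fake_page *page)
{
	head[IDX_HWPOISON].private = (unsigned long)(page - head);
}

/* Recover the poisoned raw page from the head page alone. */
static struct fake_page *hwpoison_deliver(struct fake_page *head)
{
	return head + head[IDX_HWPOISON].private;
}

int main(void)
{
	static struct fake_page hpage[PAGES_PER_HPAGE];

	hwpoison_set(hpage, &hpage[37]);
	printf("poisoned offset: %td\n", hwpoison_deliver(hpage) - hpage);
	return 0;
}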
From patchwork Thu Dec 17 12:13:03 2020
From: Muchun Song
Subject: [PATCH v10 11/11] mm/hugetlb: Optimize the code with the help of the compiler
Date: Thu, 17 Dec 2020 20:13:03 +0800
Message-Id: <20201217121303.13386-12-songmuchun@bytedance.com>
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>
References: <20201217121303.13386-1-songmuchun@bytedance.com>

Freeing part of the vmemmap cannot work if a struct page crosses page
boundaries, i.e. the optimization only applies when sizeof(struct page)
is a power of 2. Because that condition is a compile-time constant, the
compiler can do the policing for us: when it is false,
free_vmemmap_pages_per_hpage() returns a constant zero and most of the
affected functions are optimized away entirely. (A stand-alone sketch of
this constant-folding effect follows the diff.)

Signed-off-by: Muchun Song
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 5 +++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7295f6b3d55e..adc17765e0e9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -791,7 +791,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ad123b760245..987248a004f0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -242,6 +242,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can help us to optimize this function to null
+	 * when the size of the struct page is not power of 2.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd9ae113dbd..e8de41295d4d 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -17,11 +17,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
+ * to the buddy allocator. The checking of the is_power_of_2() aims to let
+ * the compiler help us optimize the code as much as possible.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return is_power_of_2(sizeof(struct page)) ? h->nr_free_vmemmap_pages : 0;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
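[Editor's sketch] A minimal stand-alone model of the constant-folding
this patch relies on. It is user-space C with assumed names (fake_page
and both helpers are illustrative, not kernel definitions): since
sizeof() is a compile-time constant, is_power_of_2(sizeof(...)) folds to
0 or 1, and when it folds to 0 the helper returns a constant zero, so
callers guarded by it compile down to empty functions.

#include <stdbool.h>
#include <stdio.h>

struct fake_page {		/* stand-in for struct page */
	unsigned long flags;
	void *mapping;
};

static inline bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirrors free_vmemmap_pages_per_hpage(): a constant condition selects
 * between the real value and a compile-time zero. */
static inline unsigned int free_vmemmap_pages(unsigned int nr_free)
{
	return is_power_of_2(sizeof(struct fake_page)) ? nr_free : 0;
}

static void free_huge_page_vmemmap(unsigned int nr_free)
{
	/* If the helper folds to zero, this body is dead code and the
	 * function compiles to an immediate return. */
	if (!free_vmemmap_pages(nr_free))
		return;

	printf("freeing %u vmemmap pages\n", nr_free);
}

int main(void)
{
	free_huge_page_vmemmap(7);	/* prints only when the fold is non-zero */
	return 0;
}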