From patchwork Tue Nov 24 09:52:44 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 01/16] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
Date: Tue, 24 Nov 2020 17:52:44 +0800
Message-Id: <20201124095259.58755-2-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into a separate file,
bootmem_info.c, for later patches to use. This is just code movement
without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++++
 include/linux/memory_hotplug.h | 23 ----------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +------------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include
 #include
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 51a877fec8da..19e5d067294c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						\
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
 			     unsigned long type);
@@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func)
 	return -ENOSYS;
 }
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..752111587c99 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include
+#include
+#include
+#include
+#include
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN. To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index baded53b9ff9..2da4ad071456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include
 #include
 #include
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
 	}
 }
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN. To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
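[An aside on the sizes involved: the mapsize computed in register_page_bootmem_info_section() above determines how many memmap page frames get tagged SECTION_INFO per memory section. Below is a minimal userspace sketch of that arithmetic, assuming x86-64 defaults (4KB pages, 64-byte struct page, 128MB sections, i.e. PAGES_PER_SECTION = 32768); these constants are illustrative assumptions, not values taken from the patch.]

#include <stdio.h>

#define PAGE_SHIFT        12
#define PAGE_SIZE         (1UL << PAGE_SHIFT)
#define STRUCT_PAGE_SIZE  64UL      /* assumed sizeof(struct page) on x86-64 */
#define PAGES_PER_SECTION 32768UL   /* assumed: one 128 MiB section */

#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
        /* the same computation as the SECTION_INFO loop bound ("mapsize") */
        unsigned long mapsize =
                PAGE_ALIGN(STRUCT_PAGE_SIZE * PAGES_PER_SECTION) >> PAGE_SHIFT;

        printf("memmap pages registered per section: %lu\n", mapsize); /* 512 */
        return 0;
}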
From patchwork Tue Nov 24 09:52:45 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 02/16] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
Date: Tue, 24 Nov 2020 17:52:45 +0800
Message-Id: <20201124095259.58755-3-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

A later patch will use {get,put}_page_bootmem() to initialize pages for
use as vmemmap and to free vmemmap pages back to the buddy allocator, so
move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement
without any functional change.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {
 
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 19e5d067294c..c9f3361fe84b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..fcab5a3f8cc0 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,31 @@
 #include
 #include
 
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2da4ad071456..ae57eedc341f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include <linux/bootmem_info.h>
 #include
 #include
 #include
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index b25ad8e64839..a4138410d890 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include "internal.h"
 #include
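[For readers following the refcount protocol being moved here, the sketch below models get_page_bootmem()/put_page_bootmem() in plain userspace C. struct fake_page and its fields only stand in for struct page (page->freelist, page_private(), the page refcount); this is an illustration of the protocol shown in the diff, not kernel code. A bootmem page starts with a refcount of 1; each get takes a reference and tags the page, and a put releases the page to the buddy allocator once the count drops back to 1.]

#include <assert.h>
#include <stdio.h>

enum { SECTION_INFO = 12, MIX_SECTION_INFO, NODE_INFO };

struct fake_page {
        unsigned long type;     /* models page->freelist */
        unsigned long info;     /* models page_private() */
        int refcount;           /* models page_ref_count() */
};

static void get_page_bootmem(unsigned long info, struct fake_page *page,
                             unsigned long type)
{
        page->type = type;
        page->info = info;
        page->refcount++;               /* models page_ref_inc() */
}

static void put_page_bootmem(struct fake_page *page)
{
        assert(page->type >= SECTION_INFO && page->type <= NODE_INFO);

        if (--page->refcount == 1) {    /* models page_ref_dec_return() == 1 */
                page->type = 0;
                page->info = 0;
                printf("page released to the buddy allocator\n");
        }
}

int main(void)
{
        struct fake_page page = { .refcount = 1 }; /* reserved bootmem page */

        get_page_bootmem(0 /* section_nr */, &page, SECTION_INFO);
        put_page_bootmem(&page);        /* last ref dropped -> freed */
        return 0;
}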
From patchwork Tue Nov 24 09:52:46 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 03/16] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Tue, 24 Nov 2020 17:52:46 +0800
Message-Id: <20201124095259.58755-4-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Introduce HUGETLB_PAGE_FREE_VMEMMAP to configure whether the feature of
freeing unused vmemmap pages associated with HugeTLB pages is enabled.
The config exists just for the dependency check. For now, only x86 is
supported.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..4961dd488444 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,20 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can free up some
+	  memory from pre-allocated HugeTLB pages when they are not used:
+	  6 vmemmap pages per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be allocated again.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
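[Where the "6 pages per 2MB / 4094 per 1GB" numbers in the help text come from: the standalone sketch below reproduces the arithmetic, assuming x86-64 defaults (4KB base pages, sizeof(struct page) == 64) and the two reserved vmemmap pages that later patches in this series introduce. The constants are assumptions for illustration.]

#include <stdio.h>

#define PAGE_SIZE          4096UL
#define STRUCT_PAGE_SIZE   64UL  /* assumed sizeof(struct page) */
#define RESERVE_VMEMMAP_NR 2UL   /* head page + first tail page are kept */

static unsigned long vmemmap_pages_freed(unsigned long hugepage_size)
{
        unsigned long base_pages = hugepage_size / PAGE_SIZE;
        /* page structs of one HugeTLB page, measured in page frames */
        unsigned long vmemmap_pages = base_pages * STRUCT_PAGE_SIZE / PAGE_SIZE;

        return vmemmap_pages - RESERVE_VMEMMAP_NR;
}

int main(void)
{
        printf("2MB HugeTLB frees %lu vmemmap pages\n",
               vmemmap_pages_freed(2UL << 20));  /* 8 - 2 = 6 */
        printf("1GB HugeTLB frees %lu vmemmap pages\n",
               vmemmap_pages_freed(1UL << 30));  /* 4096 - 2 = 4094 */
        return 0;
}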
From patchwork Tue Nov 24 09:52:47 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 04/16] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Tue, 24 Nov 2020 17:52:47 +0800
Message-Id: <20201124095259.58755-5-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Every HugeTLB page is described by more than one struct page: a 2MB
HugeTLB page has 512 struct page structures and a 1GB HugeTLB page has
262144 struct page structures. We know that only the first 4
(HUGETLB_CGROUP_MIN_ORDER) struct page structures are used to store
metadata associated with each HugeTLB page.

There are a lot of page frames of struct page structures (8 page frames
for a 2MB HugeTLB page and 4096 page frames for a 1GB HugeTLB page)
associated with each HugeTLB page. For tail pages, the value of
compound_head is the same, so we can reuse the first page of the tail
page structures: we map the virtual addresses of the remaining pages of
tail page structures to the first tail page struct and then free these
page frames. Therefore, we only need to reserve two pages as vmemmap
areas.

Introduce a new nr_free_vmemmap_pages field in the hstate to indicate
how many vmemmap pages associated with a HugeTLB page can be freed to
the buddy system.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |   3 ++
 mm/Makefile             |   1 +
 mm/hugetlb.c            |   3 ++
 mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  20 ++++++++
 5 files changed, 156 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..eed3dd3bd626 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/Makefile b/mm/Makefile
index 752111587c99..2a734576bbc0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o
 obj-$(CONFIG_ZSWAP) += zswap.o
 obj-$(CONFIG_HAS_DMA) += dmapool.o
 obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81a41aa080a5..f88032c24667 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -3285,6 +3286,8 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	hugetlb_vmemmap_init(h);
+
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..fad760483e01
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be
+ * returned to the buddy allocator for other uses.
+ *
+ * When the system boots up, every 2MB HugeTLB page has 512 struct page
+ * structs which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
+ *
+ *    HugeTLB                 struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    2MB    |                     +-----------+                +-----------+
+ * |           |                     |     5     | -------------> |     5     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the
+ * 4 page structs necessary to describe the HugeTLB. The only use of the
+ * remaining pages of page structs (page 1 to page 7) is to point to
+ * page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
+ * pages of page structs will be used for each HugeTLB page. This will allow
+ * us to free the remaining 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                 struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    2MB    |                     +-----------+                       | | |
+ * |           |                     |     5     | ----------------------+ | |
+ * |           |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
+ * similar to the 2MB HugeTLB page. We can also use this approach to free the
+ * vmemmap pages.
+ */
+#define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures (8 page frames for a 2MB HugeTLB
+ * page and 4096 page frames for a 1GB HugeTLB page) associated with each
+ * HugeTLB page. For tail pages, the value of compound_head is the same, so we
+ * can reuse the first page of the tail page structures. We map the virtual
+ * addresses of the remaining pages of tail page structures to the first tail
+ * page struct and then free these page frames. Therefore, we need to reserve
+ * two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR	2U
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int order = huge_page_order(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy system; the other pages will map to the first tail page, so
+	 * they are the remaining pages that can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
+	 * not expected to happen unless the system is corrupted. So on the
+	 * safe side, it is only a safety net.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_debug("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		 h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..40c0c7dfb60d
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void __init hugetlb_vmemmap_init(struct hstate *h);
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
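[The remapping described above is only safe because the vmemmap page frames holding nothing but tail page structs are byte-for-byte identical. The userspace demonstration below checks exactly that, with a mock 64-byte struct whose compound_head field mimics the real tail-page encoding (head address | 1); the struct layout and sizes here are illustrative assumptions, not the real struct page.]

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096
#define NR_STRUCTS 512          /* page structs of one 2MB HugeTLB page */

struct mock_page {
        unsigned long compound_head;    /* head address | 1 in tail pages */
        unsigned long pad[7];           /* pad to 64 bytes, like x86-64 */
};

int main(void)
{
        static struct mock_page vmemmap[NR_STRUCTS];    /* 8 page frames */
        char *base = (char *)vmemmap;
        int i;

        /* page struct 0 is the head; all others are tails with one value */
        for (i = 1; i < NR_STRUCTS; i++)
                vmemmap[i].compound_head = (unsigned long)&vmemmap[0] | 1;

        /* frames 2..7 hold only tail structs and equal frame 1 exactly */
        for (i = 2; i < (int)(sizeof(vmemmap) / PAGE_SIZE); i++)
                printf("frame %d == frame 1: %s\n", i,
                       memcmp(base + PAGE_SIZE, base + (long)i * PAGE_SIZE,
                              PAGE_SIZE) ? "no" : "yes");
        return 0;
}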
From patchwork Tue Nov 24 09:52:48 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 05/16] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
Date: Tue, 24 Nov 2020 17:52:48 +0800
Message-Id: <20201124095259.58755-6-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

A later patch will use free_vmemmap_page() to free unused vmemmap pages
and prepare_vmemmap_page() to initialize a page for use as a vmemmap
page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..239e3cc8f86c 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H
 
 #include
+#include
 
 /*
@@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+static inline void free_vmemmap_page(struct page *page)
+{
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	/* bootmem page has reserved flag in the reserve_bootmem_region */
+	if (PageReserved(page)) {
+		unsigned long magic = (unsigned long)page->freelist;
+
+		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+			put_page_bootmem(page);
+		else
+			WARN_ON(1);
+	}
+}
+
+static inline void prepare_vmemmap_page(struct page *page)
+{
+	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
+
+	get_page_bootmem(section_nr, page, SECTION_INFO);
+	mark_page_reserved(page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
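[The two helpers enforce a simple lifecycle, which the userspace model below walks through; struct fake_page and the helpers are mock stand-ins for the kernel objects, not real API. A page freshly allocated from the buddy allocator has a refcount of 1; prepare_vmemmap_page() tags it SECTION_INFO and marks it reserved (refcount 2), which is exactly the state free_vmemmap_page() asserts via VM_WARN_ON() before releasing the page through put_page_bootmem().]

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum { SECTION_INFO = 12, MIX_SECTION_INFO, NODE_INFO };

struct fake_page {
        unsigned long magic;    /* models page->freelist */
        bool reserved;          /* models PageReserved() */
        int refcount;
};

static void prepare_vmemmap_page(struct fake_page *page)
{
        page->magic = SECTION_INFO;     /* models get_page_bootmem() */
        page->refcount++;
        page->reserved = true;          /* models mark_page_reserved() */
}

static void free_vmemmap_page(struct fake_page *page)
{
        assert(page->reserved && page->refcount == 2);

        if (page->magic == SECTION_INFO || page->magic == MIX_SECTION_INFO)
                if (--page->refcount == 1)  /* models put_page_bootmem() */
                        printf("vmemmap page freed back to buddy\n");
}

int main(void)
{
        struct fake_page page = { .refcount = 1 }; /* from the buddy */

        prepare_vmemmap_page(&page);    /* now usable as a vmemmap page */
        free_vmemmap_page(&page);       /* and released again */
        return 0;
}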
Subject: [PATCH v6 06/16] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
Date: Tue, 24 Nov 2020 17:52:49 +0800
Message-Id: <20201124095259.58755-7-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

We can only free the tail vmemmap pages of a HugeTLB page to the buddy
allocator when the size of struct page is a power of two.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index fad760483e01..fd60cfdf3d40 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;

+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_info("disable freeing vmemmap pages for %s\n", h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page are not to be freed to buddy
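To make the arithmetic concrete, assume the common x86-64 configuration where sizeof(struct page) is 64 bytes and a 2MB HugeTLB page covers 512 base pages: the struct pages then fill exactly 512 * 64 = 32KB, i.e. 8 whole 4KB vmemmap pages, of which 2 stay resident and 6 can be returned. The 64-byte size is an assumption, which is precisely why the patch checks is_power_of_2(); with, say, a 56-byte struct page the array would not end on a page boundary and whole-page remapping would no longer line up. A sketch that just prints the calculation:

#include <stdio.h>

int main(void)
{
        const unsigned long page_size = 4096;          /* 4KB base page */
        const unsigned long hpage_subpages = 512;      /* 2MB / 4KB     */
        const unsigned long struct_page_size = 64;     /* common config */

        unsigned long vmemmap_bytes = hpage_subpages * struct_page_size;
        unsigned long vmemmap_pages = vmemmap_bytes / page_size;

        printf("vmemmap per 2MB HugeTLB page: %lu bytes = %lu pages\n",
               vmemmap_bytes, vmemmap_pages);
        /* Two pages stay resident (head + first tail); the rest go back. */
        printf("freeable: %lu pages (%lu KB) per 2MB HugeTLB page\n",
               vmemmap_pages - 2, (vmemmap_pages - 2) * page_size / 1024);
        return 0;
}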
From patchwork Tue Nov 24 09:52:50 2020
From: Muchun Song
Subject: [PATCH v6 07/16] x86/mm/64: Disable PMD page mapping of vmemmap
Date: Tue, 24 Nov 2020 17:52:50 +0800
Message-Id: <20201124095259.58755-8-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>
When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, we can simply disable
PMD page mapping of the vmemmap. In this case, we do not need complex
code to manipulate the vmemmap page tables, which keeps the first
version of this patch series simple. In the future, code doing such
page table manipulation can be added.

Signed-off-by: Muchun Song
---
 arch/x86/mm/init_64.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..155cb06a6961 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1557,7 +1557,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;

-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP))
+		err = vmemmap_populate_basepages(start, end, node, NULL);
+	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1610,7 +1612,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	}
 	get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);

-	if (!boot_cpu_has(X86_FEATURE_PSE)) {
+	if (!boot_cpu_has(X86_FEATURE_PSE) ||
+	    IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)) {
 		next = (addr + PAGE_SIZE) & PAGE_MASK;
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd))
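The cost of mapping the vmemmap with base pages is one extra PTE table per former 2MB PMD. A back-of-the-envelope sketch of that overhead (the 64-byte struct page and the 1TB machine size are illustrative assumptions):

#include <stdio.h>

int main(void)
{
        const unsigned long GB = 1UL << 30;
        const unsigned long memory = 1024 * GB;     /* 1TB of RAM (example) */
        const unsigned long struct_page = 64;       /* bytes, common config */
        const unsigned long page_size = 4096;
        const unsigned long pmd_size = 2UL << 20;   /* 2MB */

        unsigned long vmemmap = memory / page_size * struct_page;
        unsigned long pte_tables = vmemmap / pmd_size; /* one per split PMD */

        printf("vmemmap for %lu GB RAM: %lu MB\n",
               memory / GB, vmemmap >> 20);
        printf("extra PTE tables with base-page mapping: %lu (%lu MB)\n",
               pte_tables, pte_tables * page_size >> 20);
        return 0;
}

On these numbers the overhead is about 32MB per terabyte of RAM, which is small next to the vmemmap savings the later patches buy back.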
From patchwork Tue Nov 24 09:52:51 2020
From: Muchun Song
Subject: [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
Date: Tue, 24 Nov 2020 17:52:51 +0800
Message-Id: <20201124095259.58755-9-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

When we allocate a HugeTLB page from the buddy allocator, we should
free the unused vmemmap pages associated with it. We can do that in
prep_new_huge_page().

Signed-off-by: Muchun Song
---
 arch/x86/include/asm/pgtable_64_types.h |   8 ++
 mm/hugetlb.c                            |   2 +
 mm/hugetlb_vmemmap.c                    | 133 +++++++++++++++++++++++++++++++-
 mm/hugetlb_vmemmap.h                    |   5 ++
 4 files changed, 147 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..bedbd2e7d06c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */

+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ * a struct page array.
+ */
+#define VMEMMAP_SIZE	(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
+				 1 + ilog2(sizeof(struct page))))
+#define VMEMMAP_END	(VMEMMAP_START + VMEMMAP_SIZE)
+
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)

 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f88032c24667..9662b5535f3a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1499,6 +1499,8 @@ void free_huge_page(struct page *page)

 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index fd60cfdf3d40..1576f69bd1d3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -92,8 +92,9 @@
  * to the 2MB HugeTLB page. We also can use this approach to free the vmemmap
  * pages.
  */
-#define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
+#define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt

+#include
 #include "hugetlb_vmemmap.h"

 /*
@@ -105,6 +106,136 @@
  * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
  */
 #define RESERVE_VMEMMAP_NR	2U
+#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define TAIL_PAGE_REUSE		-1
+
+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT	HPAGE_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER	(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR	(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE	((1UL) << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK	(~(VMEMMAP_HPAGE_SIZE - 1))
+
+#define vmemmap_hpage_addr_end(addr, end)				\
+({									\
+	unsigned long __boundary;					\
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
+})
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
+
+static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+/*
+ * Walk a vmemmap address to the pmd it maps.
+ */
+static pmd_t *vmemmap_to_pmd(unsigned long page)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	if (page < VMEMMAP_START || page >= VMEMMAP_END)
+		return NULL;
+
+	pgd = pgd_offset_k(page);
+	if (pgd_none(*pgd))
+		return NULL;
+	p4d = p4d_offset(pgd, page);
+	if (p4d_none(*p4d))
+		return NULL;
+	pud = pud_offset(p4d, page);
+	if (pud_none(*pud) || pud_bad(*pud))
+		return NULL;
+
+	return pmd_offset(pud, page);
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					 unsigned long start,
+					 unsigned long end,
+					 struct list_head *free_pages)
+{
+	/* Make sure the tail pages are mapped read-only. */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void __free_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					 unsigned long end,
+					 struct list_head *vmemmap_pages)
+{
+	unsigned long next, addr = start;
+	struct page *reuse = NULL;
+
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					     vmemmap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(free_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	pmd = vmemmap_to_pmd(vmemmap_addr);
+	BUG_ON(!pmd);
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	__free_huge_page_pmd_vmemmap(pmd, start, end, &free_pages);
+	free_vmemmap_page_list(&free_pages);
+}

 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 40c0c7dfb60d..67113b67495f 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,14 @@

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
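The PTE rewriting above has a close user-space analogue: mapping one shared file page at several addresses with MAP_FIXED. The sketch below is only an analogy (Linux-only; memfd_create() stands in for the reuse page, nothing here touches the vmemmap, and error handling is omitted for brevity), but it shows several read-only virtual pages collapsing onto one physical page:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int fd = memfd_create("reuse", 0);

        ftruncate(fd, psz);

        /* 8 pages stand in for the vmemmap of one 2MB HugeTLB page. */
        char *area = mmap(NULL, 8 * psz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Fill the "reuse" page once, like the first tail page. */
        char *reuse = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        memset(reuse, 0x5a, psz);

        /*
         * Remap pages 2..7 read-only onto the same backing page, the
         * user-space analogue of rewriting their PTEs to the reuse page.
         */
        for (int i = 2; i < 8; i++)
                mmap(area + i * psz, psz, PROT_READ,
                     MAP_SHARED | MAP_FIXED, fd, 0);

        printf("page 2 byte 0 = %#x, page 7 byte 0 = %#x\n",
               area[2 * psz], area[7 * psz]);
        return 0;
}

Pages 2 through 7 read back the same 0x5a pattern because they now share a single physical page, which is exactly the state the freed tail vmemmap pages are left in.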
From patchwork Tue Nov 24 09:52:52 2020
From: Muchun Song
Subject: [PATCH v6 09/16] mm/hugetlb: Defer freeing of HugeTLB pages
Date: Tue, 24 Nov 2020 17:52:52 +0800
Message-Id: <20201124095259.58755-10-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

In a subsequent patch, we will allocate vmemmap pages when freeing
HugeTLB pages. But update_and_free_page() can be called from a non-task
context (with hugetlb_lock held), so we defer the actual freeing to a
workqueue to avoid having to allocate the vmemmap pages with GFP_ATOMIC.

Signed-off-by: Muchun Song
---
 mm/hugetlb.c         | 96 ++++++++++++++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.c |  5 ---
 mm/hugetlb_vmemmap.h | 10 ++++++
 3 files changed, 95 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9662b5535f3a..41056b4230f1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1221,7 +1221,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }

-static void free_gigantic_page(struct page *page, unsigned int order)
+static void __free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
@@ -1288,20 +1288,100 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void __free_gigantic_page(struct page *page,
+					unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						  unsigned int order) { }
 #endif

-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() can be called from a non-task context (with
+ * hugetlb_lock held), we defer the actual freeing to a workqueue to avoid
+ * using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one.
+ * As the page->mapping pointer is going to be cleared in
+ * update_hpage_vmemmap_workfn() anyway, it is reused as the llist_node
+ * structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	__free_gigantic_page(page, huge_page_order(h));
+}
+#else
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in __free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	__free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
+#endif
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;

 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1313,14 +1393,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 1576f69bd1d3..f6ba288966d4 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -124,11 +124,6 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
 })

-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return h->nr_free_vmemmap_pages;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 67113b67495f..293897b9f1d8 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,11 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -21,5 +26,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
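The lockless hand-off is the standard llist pattern: producers push with a compare-and-swap and the worker takes the whole list with a single atomic exchange. A user-space rendering of the shape (pthreads and C11 atomics; the kernel's llist_add()/llist_del_all() and schedule_work() are only modeled here, and all names are illustrative) can be compiled with cc -pthread:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* User-space sketch of the llist_add()/llist_del_all() hand-off. */
struct node {
        struct node *next;
        int hpage_id;
};

static _Atomic(struct node *) freelist;

/* Push one node; returns 1 if the list was previously empty. */
static int defer_free(struct node *n)
{
        struct node *old = atomic_load(&freelist);
        do {
                n->next = old;
        } while (!atomic_compare_exchange_weak(&freelist, &old, n));
        return old == NULL; /* caller would schedule_work() on 1 */
}

static void *workfn(void *arg)
{
        /* Grab the whole list at once, like llist_del_all(). */
        struct node *n = atomic_exchange(&freelist, NULL);

        while (n) {
                struct node *next = n->next;
                printf("freeing deferred hpage %d\n", n->hpage_id);
                free(n);
                n = next;
        }
        return NULL;
}

int main(void)
{
        pthread_t worker;

        for (int i = 0; i < 4; i++) {
                struct node *n = malloc(sizeof(*n));
                n->hpage_id = i;
                if (defer_free(n))
                        printf("list was empty: would schedule_work()\n");
        }
        pthread_create(&worker, NULL, workfn, NULL);
        pthread_join(worker, NULL);
        return 0;
}

The patch avoids a separate node allocation entirely by reusing page->mapping as the llist_node, which is safe only because that field is about to be cleared anyway.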
From patchwork Tue Nov 24 09:52:53 2020
From: Muchun Song
Subject: [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
Date: Tue, 24 Nov 2020 17:52:53 +0800
Message-Id: <20201124095259.58755-11-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we should allocate
the vmemmap pages associated with it. We can do that in
__free_hugepage().
Signed-off-by: Muchun Song
---
 mm/hugetlb.c         |   2 +
 mm/hugetlb_vmemmap.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h |   5 +++
 3 files changed, 109 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 41056b4230f1..3fafa39fcac6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1382,6 +1382,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;

+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f6ba288966d4..d6a1b06c1322 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -95,6 +95,7 @@
 #define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt

 #include
+#include
 #include "hugetlb_vmemmap.h"

 /*
@@ -108,6 +109,8 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define TAIL_PAGE_REUSE		-1
+#define GFP_VMEMMAP_PAGE	\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH)

 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT	HPAGE_SHIFT
@@ -159,6 +162,105 @@ static pmd_t *vmemmap_to_pmd(unsigned long page)
 	return pmd_offset(pud, page);
 }

+static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					  unsigned long start,
+					  unsigned long end,
+					  struct list_head *remap_pages)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+		pte_t entry, old = *ptep;
+
+		page = list_first_entry(remap_pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data written to @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+
+		entry = mk_pte(page, pgprot);
+		set_pte_at(&init_mm, addr, ptep++, entry);
+
+		VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse);
+	}
+}
+
+static void __remap_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					  unsigned long end,
+					  struct list_head *vmemmap_pages)
+{
+	unsigned long next, addr = start;
+	struct page *reuse = NULL;
+
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					      vmemmap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	unsigned int nr = free_vmemmap_pages_per_hpage(h);
+
+	while (nr--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry infinitely, because we cannot
+			 * handle allocation failures. Once we allocate
+			 * vmemmap pages successfully, then we can free
+			 * a HugeTLB page.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(map_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &map_pages);
+
+	pmd = vmemmap_to_pmd(vmemmap_addr);
+	BUG_ON(!pmd);
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	__remap_huge_page_pmd_vmemmap(pmd, start, end, &map_pages);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 293897b9f1d8..7887095488f4 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,6 +12,7 @@

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);

 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
@@ -23,6 +24,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }

+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
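alloc_vmemmap_pages() retries forever with a 100ms sleep because at this point a failure cannot be unwound: the HugeTLB page cannot be returned to the buddy allocator until its vmemmap is whole again. A user-space sketch of that policy (try_alloc() is an illustrative stand-in that fails twice before succeeding; the delay mirrors the msleep(100)):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for alloc_page(GFP_VMEMMAP_PAGE); fails a few times here. */
static void *try_alloc(void)
{
        static int attempts;
        return ++attempts < 3 ? NULL : malloc(4096);
}

int main(void)
{
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
        void *page;

        /*
         * Retry forever, sleeping 100ms between attempts: the freeing
         * path cannot back out once it has committed to remapping.
         */
        while (!(page = try_alloc())) {
                puts("allocation failed, retrying in 100ms");
                nanosleep(&delay, NULL);
        }
        puts("allocation succeeded");
        free(page);
        return 0;
}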
From patchwork Tue Nov 24 09:52:54 2020
From: Muchun Song
Subject: [PATCH v6 11/16] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
Date: Tue, 24 Nov 2020 17:52:54 +0800
Message-Id: <20201124095259.58755-12-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>
__free_huge_page_pmd_vmemmap() and __remap_huge_page_pmd_vmemmap() are
almost identical, so introduce a remap_huge_page_pmd_vmemmap() helper
that takes a callback to simplify the code.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 87 +++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 52 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index d6a1b06c1322..509ca451e232 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -127,6 +127,10 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
 })

+typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
+					 unsigned long start, unsigned long end,
+					 void *priv);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -162,21 +166,42 @@ static pmd_t *vmemmap_to_pmd(unsigned long page)
 	return pmd_offset(pud, page);
 }

+static void remap_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					unsigned long end,
+					vmemmap_pte_remap_func_t fn, void *priv)
+{
+	unsigned long next, addr = start;
+	struct page *reuse = NULL;
+
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		fn(reuse, ptep, addr, next, priv);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
-					  unsigned long end,
-					  struct list_head *remap_pages)
+					  unsigned long end, void *priv)
 {
 	pgprot_t pgprot = PAGE_KERNEL;
 	void *from = page_to_virt(reuse);
 	unsigned long addr;
+	struct list_head *pages = priv;

 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		void *to;
 		struct page *page;
 		pte_t entry, old = *ptep;

-		page = list_first_entry(remap_pages, struct page, lru);
+		page = list_first_entry(pages, struct page, lru);
 		list_del(&page->lru);
 		to = page_to_virt(page);
 		copy_page(to, from);
@@ -196,28 +221,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }

-static void __remap_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
-					  unsigned long end,
-					  struct list_head *vmemmap_pages)
-{
-	unsigned long next, addr = start;
-	struct page *reuse = NULL;
-
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					      vmemmap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	unsigned int nr = free_vmemmap_pages_per_hpage(h);
@@ -258,7 +261,8 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)

 	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
 	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
-	__remap_huge_page_pmd_vmemmap(pmd, start, end, &map_pages);
+	remap_huge_page_pmd_vmemmap(pmd, start, end,
+				    __remap_huge_page_pte_vmemmap, &map_pages);
 }

 static inline void free_vmemmap_page_list(struct list_head *list)
@@ -273,13 +277,13 @@ static inline void free_vmemmap_page_list(struct list_head *list)

 static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					 unsigned long start,
-					 unsigned long end,
-					 struct list_head *free_pages)
+					 unsigned long end, void *priv)
 {
 	/* Make sure the tail pages are mapped read-only. */
 	pgprot_t pgprot = PAGE_KERNEL_RO;
 	pte_t entry = mk_pte(reuse, pgprot);
 	unsigned long addr;
+	struct list_head *pages = priv;

 	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
 		struct page *page;
@@ -287,34 +291,12 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,

 		VM_WARN_ON(!pte_present(old));
 		page = pte_page(old);
-		list_add(&page->lru, free_pages);
+		list_add(&page->lru, pages);

 		set_pte_at(&init_mm, addr, ptep, entry);
 	}
 }

-static void __free_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
-					 unsigned long end,
-					 struct list_head *vmemmap_pages)
-{
-	unsigned long next, addr = start;
-	struct page *reuse = NULL;
-
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					     vmemmap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	pmd_t *pmd;
@@ -330,7 +312,8 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)

 	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
 	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
-	__free_huge_page_pmd_vmemmap(pmd, start, end, &free_pages);
+	remap_huge_page_pmd_vmemmap(pmd, start, end,
+				    __free_huge_page_pte_vmemmap, &free_pages);
 	free_vmemmap_page_list(&free_pages);
 }
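This is the usual walker-plus-callback refactor: one function owns the traversal and the TLB flush, and the two former loop bodies become callbacks with a void *priv cookie. A minimal user-space rendering of the shape (all names and addresses illustrative):

#include <stdio.h>

typedef void (*range_fn_t)(unsigned long start, unsigned long end,
                           void *priv);

/* One walker replaces two near-identical loops, as in the patch. */
static void walk_ranges(unsigned long start, unsigned long end,
                        unsigned long step, range_fn_t fn, void *priv)
{
        for (unsigned long addr = start; addr < end; addr += step)
                fn(addr, addr + step < end ? addr + step : end, priv);
}

static void remap_cb(unsigned long s, unsigned long e, void *priv)
{
        printf("remap   [%#lx, %#lx) with %s\n", s, e, (char *)priv);
}

static void free_cb(unsigned long s, unsigned long e, void *priv)
{
        printf("collect [%#lx, %#lx) onto %s\n", s, e, (char *)priv);
}

int main(void)
{
        walk_ranges(0x2000, 0x8000, 0x2000, remap_cb, "new pages");
        walk_ranges(0x2000, 0x8000, 0x2000, free_cb, "free list");
        return 0;
}

The design choice worth noting is that the shared walker also centralizes the flush_tlb_kernel_range() call, so neither callback can forget it.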
From patchwork Tue Nov 24 09:52:55 2020
From: Muchun Song
Subject: [PATCH v6 12/16] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Tue, 24 Nov 2020 17:52:55 +0800
Message-Id: <20201124095259.58755-13-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>

Because we reuse the vmemmap page frame of the first tail page and
remap it read-only, we cannot set PageHWPoison on a tail page. Instead,
record the index of the raw error page in head[4].private, and set
PageHWPoison on the raw error page later, once the flag can be moved
off the head page.

Signed-off-by: Muchun Song
---
 mm/hugetlb.c         | 11 +++--------
 mm/hugetlb_vmemmap.h | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3fafa39fcac6..ade20954eb81 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1383,6 +1383,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
	int i;

	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(page);

	for (i = 0; i < pages_per_huge_page(h); i++) {
		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1930,14 +1931,8 @@ int dissolve_free_huge_page(struct page *page)
		int nid = page_to_nid(head);

		if (h->free_huge_pages - h->resv_huge_pages == 0)
			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(head, page);
		list_del(&head->lru);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 7887095488f4..4bb35d87ae10 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,29 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);

+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+	struct page *page = head;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	if (PageHWPoison(head))
+		set_page_private(head + 4, page - head);
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
	return h->nr_free_vmemmap_pages;
@@ -32,6 +55,22 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }

+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (PageHWPoison(head) && page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
	return 0;
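
The helpers above stash the raw error page's offset in the private
field of the fourth tail page, then move the flag once the head page
can safely give it up. A self-contained userspace model of the same
bookkeeping (toy types, not the kernel's struct page):

#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of a compound page: the poison flag lives on the head, and
 * tail page 4's "private" field stores the raw error page's index, as
 * in set_subpage_hwpoison()/subpage_hwpoison_deliver() above.
 */
struct toy_page {
	bool hwpoison;
	unsigned long private;
};

static void toy_set_subpage_hwpoison(struct toy_page *head,
				     struct toy_page *page)
{
	if (head->hwpoison)
		head[4].private = page - head;	/* remember the index */
}

static void toy_subpage_hwpoison_deliver(struct toy_page *head)
{
	struct toy_page *page = head;

	if (head->hwpoison)
		page = head + head[4].private;
	if (page != head) {
		page->hwpoison = true;		/* move the flag ... */
		head->hwpoison = false;		/* ... off the head page */
	}
}

int main(void)
{
	struct toy_page hpage[512] = { 0 };

	hpage[0].hwpoison = true;			/* head carries the flag */
	toy_set_subpage_hwpoison(hpage, &hpage[7]);	/* raw error page is #7 */
	toy_subpage_hwpoison_deliver(hpage);
	assert(!hpage[0].hwpoison && hpage[7].hwpoison);
	return 0;
}
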
From patchwork Tue Nov 24 09:52:56 2020
From: Muchun Song
Subject: [PATCH v6 13/16] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Tue, 24 Nov 2020 17:52:56 +0800
Message-Id: <20201124095259.58755-14-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Freeing a vmemmap-optimized HugeTLB page back to the buddy allocator is
deferred to a workqueue item, so flush that work when dissolving a free
HugeTLB page to make sure the page has really been freed to the buddy
allocator before dissolve_free_huge_page() returns.
Signed-off-by: Muchun Song
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ade20954eb81..15e2c1dd32ea 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1328,6 +1328,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);

+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
	/* No need to allocate vmemmap pages */
@@ -1914,6 +1920,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
	int rc = -EBUSY;
+	struct hstate *h = NULL;

	/* Not to disrupt normal path by vainly holding hugetlb_lock */
	if (!PageHuge(page))
@@ -1927,8 +1934,9 @@ int dissolve_free_huge_page(struct page *page)
	if (!page_count(page)) {
		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
		if (h->free_huge_pages - h->resv_huge_pages == 0)
			goto out;
@@ -1942,6 +1950,14 @@ int dissolve_free_huge_page(struct page *page)
	}
 out:
	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before return to make sure that
+	 * the HugeTLB page is freed to the buddy.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
	return rc;
 }
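
flush_work() blocks until the queued work item has finished, which is
what guarantees the page has reached the buddy allocator. A small
pthread sketch of that defer-then-flush pattern (illustrative userspace
code, not the kernel workqueue API; compile with -lpthread):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Minimal model of queue_work()/flush_work(): a worker thread "frees"
 * a page asynchronously; the flush waits until the deferred free is
 * done, just as dissolve_free_huge_page() flushes hpage_update_work.
 */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool freed_to_buddy;

static void *workfn(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	freed_to_buddy = true;		/* the deferred "free" */
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void flush_work_model(void)
{
	pthread_mutex_lock(&lock);
	while (!freed_to_buddy)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, workfn, NULL);	/* queue_work() */
	flush_work_model();		/* dissolve path: wait for the free */
	printf("page is in the buddy allocator: %d\n", freed_to_buddy);
	pthread_join(worker, NULL);
	return 0;
}
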
From patchwork Tue Nov 24 09:52:57 2020
From: Muchun Song
Subject: [PATCH v6 14/16] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Tue, 24 Nov 2020 17:52:57 +0800
Message-Id: <20201124095259.58755-15-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Add a boot-time kernel parameter, hugetlb_free_vmemmap, to control the
freeing of unused vmemmap pages associated with each HugeTLB page. The
feature is off by default and is enabled by passing
hugetlb_free_vmemmap=on.

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/kernel-parameters.txt | 9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    | 3 +++
 mm/hugetlb_vmemmap.c                            | 19 ++++++++++++++++++-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..d28c3acde965 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
			Documentation/admin-guide/mm/hugetlbpage.rst.
			Format: size[KMG]

+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
	hung_task_panic=
			[KNL] Should the hung task detector generate panics.
			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..6a8b57f6d3b7 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
	will all result in 256 2M huge pages being allocated.  Valid default
	huge page size is architecture dependent.

+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.

 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 509ca451e232..b2222f8d1245 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -131,6 +131,22 @@ typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
					 unsigned long start, unsigned long end,
					 void *priv);

+static bool hugetlb_free_vmemmap_enabled __initdata;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -322,7 +338,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
	unsigned int order = huge_page_order(h);
	unsigned int vmemmap_pages;

-	if (!is_power_of_2(sizeof(struct page))) {
+	if (!is_power_of_2(sizeof(struct page)) ||
+	    !hugetlb_free_vmemmap_enabled) {
		pr_info("disable freeing vmemmap pages for %s\n", h->name);
		return;
	}
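
The early_param handler accepts exactly "on" and "off" and rejects
everything else. The same accept/reject logic as a standalone C program
(userspace stand-ins for the kernel variables and return codes):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool hugetlb_free_vmemmap_enabled;

/*
 * Same logic as the early_param handler above: "on" enables the
 * feature, "off" is accepted but leaves it disabled, and anything else
 * (including a missing value) is an error.
 */
static int parse_param(const char *buf)
{
	if (!buf)
		return -1;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (strcmp(buf, "off"))
		return -1;

	return 0;
}

int main(void)
{
	printf("on   -> rc=%d enabled=%d\n", parse_param("on"),
	       hugetlb_free_vmemmap_enabled);
	hugetlb_free_vmemmap_enabled = false;
	printf("off  -> rc=%d enabled=%d\n", parse_param("off"),
	       hugetlb_free_vmemmap_enabled);
	printf("oops -> rc=%d\n", parse_param("oops"));
	return 0;
}
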
From patchwork Tue Nov 24 09:52:58 2020
From: Muchun Song
Subject: [PATCH v6 15/16] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 24 Nov 2020 17:52:58 +0800
Message-Id: <20201124095259.58755-16-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

A HugeTLB page carries more metadata than fits in its head struct page,
so we have to reuse fields of the tail struct pages to store it. To
avoid conflicts when more tail struct pages are used later, gather
these discrete tail-page indexes into an enum; this also makes it
easier to add a new tail page index.

Signed-off-by: Muchun Song
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eed3dd3bd626..8a615ae2d233 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include

+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
	spinlock_t lock;
	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
 */
#define HUGETLB_CGROUP_MIN_ORDER 2
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
		return NULL;
	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }

 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
		return -1;
	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 15e2c1dd32ea..7700da372716 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1429,20 +1429,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }

 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }

 static void clear_page_huge_active(struct page *page)
 {
	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }

 /*
@@ -1454,17 +1454,17 @@ static inline bool PageHugeTemporary(struct page *page)
	if (!PageHuge(page))
		return false;

-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }

 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }

 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }

 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 4bb35d87ae10..54c2ca0e0dbe 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -20,7 +20,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
	struct page *page = head;

	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);

	/*
	 * Move PageHWPoison flag from head page to the raw error page,
@@ -35,7 +35,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }

 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
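
The conversion above replaces magic tail-page indexes (1, 2, 3, 4) with
named enum values. A compact userspace illustration of why the enum
helps (toy types; the index values mirror the kernel enum):

#include <stdio.h>

/*
 * Named tail-page indexes instead of bare 1/2/3/4, mirroring the enum
 * added to include/linux/hugetlb.h. Note HWPOISON comes out as 4, the
 * same slot the previous patch addressed as head + 4.
 */
enum {
	IDX_ACTIVE = 1,
	IDX_TEMPORARY,
	IDX_CGROUP = IDX_TEMPORARY,
	IDX_CGROUP_RSVD,
	IDX_HWPOISON,
	NR_USED,
};

struct toy_page { unsigned long private; };

int main(void)
{
	struct toy_page hpage[8] = { 0 };

	/* A new user takes the next enum slot instead of picking "5". */
	hpage[IDX_HWPOISON].private = 7;
	printf("hwpoison index lives in tail page %d, value %lu\n",
	       IDX_HWPOISON, hpage[IDX_HWPOISON].private);
	printf("%d tail pages are in use\n", NR_USED - 1);
	return 0;
}
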
From patchwork Tue Nov 24 09:52:59 2020
From: Muchun Song
Subject: [PATCH v6 16/16] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
Date: Tue, 24 Nov 2020 17:52:59 +0800
Message-Id: <20201124095259.58755-17-songmuchun@bytedance.com>
In-Reply-To: <20201124095259.58755-1-songmuchun@bytedance.com>

Only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` tail struct pages can
be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b2222f8d1245..d2c013582110 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -338,6 +338,9 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
	unsigned int order = huge_page_order(h);
	unsigned int vmemmap_pages;

+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
	if (!is_power_of_2(sizeof(struct page)) ||
	    !hugetlb_free_vmemmap_enabled) {
		pr_info("disable freeing vmemmap pages for %s\n", h->name);
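
BUILD_BUG_ON() turns a would-be runtime assumption into a compile-time
failure. A userspace analogue using C11 _Static_assert (the sizes below
are illustrative stand-ins, not the kernel's actual values):

#include <stdio.h>

/*
 * Userspace analogue of the BUILD_BUG_ON above: _Static_assert fails
 * the build, not the boot, if the enum ever outgrows the reserved
 * vmemmap area.
 */
#define TOY_PAGE_SIZE		4096UL
#define RESERVE_VMEMMAP_SIZE	(2 * TOY_PAGE_SIZE)	/* two reserved pages */

struct toy_page { unsigned long words[8]; };	/* 64-byte stand-in */

enum { NR_USED_SUBPAGE = 5 };

_Static_assert(NR_USED_SUBPAGE <
	       RESERVE_VMEMMAP_SIZE / sizeof(struct toy_page),
	       "used tail struct pages exceed the reserved vmemmap area");

int main(void)
{
	printf("%lu struct pages fit in the reserved area\n",
	       (unsigned long)(RESERVE_VMEMMAP_SIZE / sizeof(struct toy_page)));
	return 0;
}
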