From patchwork Fri Nov 13 10:59:32 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902973
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org
Subject: [PATCH v4 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
Date: Fri, 13 Nov 2020 18:59:32 +0800
Message-Id: <20201113105952.11638-2-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into its own file,
bootmem_info.c, so that later patches can use it. This is just code
movement without any functional change.
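[Editor's note: for context, this is how the moved API is consumed. The
sketch below is modeled on x86's register_page_bootmem_info(), which is
visible in patch 03 of this series; the function name here is invented
for illustration, it is not a verbatim call site.]

/*
 * Illustrative caller of the moved API: an architecture registers
 * bootmem info for every online node at boot. When
 * CONFIG_HAVE_BOOTMEM_INFO_NODE is disabled, bootmem_info.h provides
 * a no-op stub, so the caller needs no #ifdef of its own.
 */
#include <linux/bootmem_info.h>
#include <linux/nodemask.h>

static void __init register_all_bootmem_info(void)
{
	int nid;

	for_each_online_node(nid)
		register_page_bootmem_info_node(NODE_DATA(nid));
}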
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++++
 include/linux/memory_hotplug.h | 23 ----------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +-------------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>

 #include
 #include
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 51a877fec8da..19e5d067294c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						\
 })

-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
	/* Offline the memory. */
@@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
			     unsigned long type);
@@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func)
 	return -ENOSYS;
 }

-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..752111587c99 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN. To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index baded53b9ff9..2da4ad071456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include
 #include
 #include
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
 	}
 }

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN. To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
		const char *reason)
 {

From patchwork Fri Nov 13 10:59:33 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902975
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
Date: Fri, 13 Nov 2020 18:59:33 +0800
Message-Id: <20201113105952.11638-3-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

A later patch will use {get,put}_page_bootmem() to initialize pages for
the vmemmap and to free vmemmap pages back to the buddy system, so move
them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement
without any functional change.
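[Editor's note: a minimal sketch of the pairing implied by these two
functions, based on the implementations moved below: get_page_bootmem()
tags a bootmem page with a type and takes a reference;
put_page_bootmem() drops it and, on the last reference, clears the tag
and hands the page to the buddy allocator via free_reserved_page(). The
wrapper names here are hypothetical, for illustration only.]

#include <linux/bootmem_info.h>

/* Hypothetical wrappers showing the intended get/put discipline. */
static void mark_vmemmap_page(struct page *page, unsigned long section_nr)
{
	/* Tag the page and take a reference; one get per registered user. */
	get_page_bootmem(section_nr, page, SECTION_INFO);
}

static void release_vmemmap_page(struct page *page)
{
	/* The final put frees the page back to the buddy system. */
	put_page_bootmem(page);
}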
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }

-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {

 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
 #endif

 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 19e5d067294c..c9f3361fe84b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..fcab5a3f8cc0 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,31 @@
 #include <linux/bootmem_info.h>
 #include <linux/memory_hotplug.h>

+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2da4ad071456..ae57eedc341f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include <linux/bootmem_info.h>
 #include
 #include
 #include
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }

 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index b25ad8e64839..a4138410d890 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>

 #include "internal.h"
 #include

From patchwork Fri Nov 13 10:59:34 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902977
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Fri, 13 Nov 2020 18:59:34 +0800
Message-Id: <20201113105952.11638-4-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>
Introduce HUGETLB_PAGE_FREE_VMEMMAP to configure whether the feature of
freeing unused vmemmap pages associated with HugeTLB pages is enabled.
For now, only x86 is supported.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;

 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;

 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..67e1bc99574f 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,20 @@ config HUGETLBFS

 config HUGETLB_PAGE
	def_bool HUGETLBFS

+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using SPARSEMEM_VMEMMAP, the system can save some memory from
+	  pre-allocated HugeTLB pages when they are not used: 6 pages per
+	  2MB HugeTLB page and 4094 pages per 1GB HugeTLB page.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be allocated again.
+
 config MEMFD_CREATE
	def_bool TMPFS || HUGETLBFS
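[Editor's note: a quick standalone check of the numbers quoted in the
help text, assuming 4K base pages and a 64-byte struct page (both
x86-64 defaults; the sizes are assumptions of this sketch, not taken
from the patch). It prints "8 vmemmap pages, 6 freeable" for 2MB and
"4096 vmemmap pages, 4094 freeable" for 1GB, matching the help text.]

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096, struct_page = 64, reserved = 2;
	unsigned long sizes[] = { 2UL << 20, 1UL << 30 };	/* 2MB, 1GB */

	for (int i = 0; i < 2; i++) {
		unsigned long nr_struct_pages = sizes[i] / page_size;
		/* Pages of vmemmap needed to hold all the struct pages. */
		unsigned long vmemmap_pages =
			nr_struct_pages * struct_page / page_size;

		printf("%luMB hugepage: %lu vmemmap pages, %lu freeable\n",
		       sizes[i] >> 20, vmemmap_pages,
		       vmemmap_pages - reserved);
	}
	return 0;
}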
From patchwork Fri Nov 13 10:59:35 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902979
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Fri, 13 Nov 2020 18:59:35 +0800
Message-Id: <20201113105952.11638-5-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

If the size of a HugeTLB page is 2MB, 512 struct page structures
(8 pages) are associated with it. As far as I know, we only use the
first 4 struct page structures; that use of the first 4 comes from
HUGETLB_CGROUP_MIN_ORDER. For the tail pages, the value of compound_head
is the same, so we can reuse the first page of the tail page structs:
we remap the virtual addresses of the remaining 6 pages of tail page
structs to the first tail page struct and then free those 6 pages.
Therefore, we need to reserve at least 2 pages as vmemmap area. So this
patch introduces a new field, nr_free_vmemmap_pages, in the hstate to
indicate how many vmemmap pages associated with a HugeTLB page can be
freed to the buddy system.
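[Editor's note: the "compound_head is the same" observation is what
makes the tail struct pages interchangeable: every tail page's
compound_head field encodes the same head-page pointer. A hedged
kernel-style sketch of that invariant; the helper name is invented for
illustration and assumes the struct pages are virtually contiguous, as
they are in a vmemmap.]

#include <linux/mm.h>

/* True if pages 1..nr-1 all decode to the same head as expected. */
static bool tail_struct_pages_identical(struct page *head, unsigned int nr)
{
	unsigned int i;

	for (i = 1; i < nr; i++) {
		/* compound_head() decodes page->compound_head & ~1UL. */
		if (compound_head(&head[i]) != head)
			return false;
	}
	return true;
}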
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |   3 ++
 mm/Makefile             |   1 +
 mm/hugetlb.c            |   3 ++
 mm/hugetlb_vmemmap.c    | 108 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  20 +++++++++
 5 files changed, 135 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..eed3dd3bd626 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
	unsigned int nr_huge_pages_node[MAX_NUMNODES];
	unsigned int free_huge_pages_node[MAX_NUMNODES];
	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
	/* cgroup control files */
	struct cftype cgroup_files_dfl[7];
diff --git a/mm/Makefile b/mm/Makefile
index 752111587c99..2a734576bbc0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o
 obj-$(CONFIG_ZSWAP) += zswap.o
 obj-$(CONFIG_HAS_DMA) += dmapool.o
 obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81a41aa080a5..f88032c24667 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include "internal.h"
+#include "hugetlb_vmemmap.h"

 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -3285,6 +3286,8 @@ void __init hugetlb_add_hstate(unsigned int order)
	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
					huge_page_size(h)/1024);

+	hugetlb_vmemmap_init(h);
+
	parsed_hstate = h;
 }

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..a6c9948302e2
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * Nowadays we track the status of physical page frames using struct page
+ * structures arranged in one or more arrays, and there is a one-to-one
+ * mapping between the physical page frame and the corresponding struct
+ * page structure.
+ *
+ * The HugeTLB support is built on top of the multiple page size support
+ * provided by most modern architectures. For example, x86 CPUs normally
+ * support 4K and 2M (1G if architecturally supported) page sizes. Every
+ * HugeTLB has more than one struct page structure: a 2M HugeTLB has 512
+ * struct page structures and a 1G HugeTLB has 4096. But the core of
+ * HugeTLB only uses the first 4 struct page structures (use of the first
+ * 4 comes from HUGETLB_CGROUP_MIN_ORDER) to store metadata associated
+ * with each HugeTLB. For the rest, only the compound_head field is
+ * normally read, and it has the same value for all of them. So if we can
+ * free some of that struct page memory to the buddy system, we can save
+ * a lot of memory.
+ *
+ * When the system boots up, every 2M HugeTLB has 512 struct page
+ * structures whose size is 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
+ *
+ *    HugeTLB                  struct pages(8 pages)      page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+
+ * |           |                     |     0     | ---------> |     0     |
+ * |           |                     |     1     | ---------> |     1     |
+ * |           |                     |     2     | ---------> |     2     |
+ * |           |                     |     3     | ---------> |     3     |
+ * |           |                     |     4     | ---------> |     4     |
+ * |    2M     |                     |     5     | ---------> |     5     |
+ * |           |                     |     6     | ---------> |     6     |
+ * |           |                     |     7     | ---------> |     7     |
+ * |           |                     +-----------+            +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ *
+ * When a HugeTLB is preallocated, we can change the mapping from above to
+ * below.
+ *
+ *    HugeTLB                  struct pages(8 pages)      page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+
+ * |           |                     |     0     | ---------> |     0     |
+ * |           |                     |     1     | ---------> |     1     |
+ * |           |                     |     2     | ---------> +-----------+
+ * |           |                     |     3     | ------------^ ^ ^ ^ ^
+ * |           |                     |     4     | --------------+ | | |
+ * |    2M     |                     |     5     | ----------------+ | |
+ * |           |                     |     6     | ------------------+ |
+ * |           |                     |     7     | --------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * For the tail pages, the value of compound_head is the same, so we can
+ * reuse the first page of the tail page structures. We remap the virtual
+ * addresses of the remaining 6 pages of tail page structures to the first
+ * tail page structure and then free these 6 page frames. Therefore, we
+ * need to reserve at least 2 pages as vmemmap area.
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages
+ * for vmemmap pages and restore the previous mapping relationship.
+ */
+#define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are 512 struct page structures (8 pages) associated with each 2MB
+ * hugetlb page. For the tail pages, the value of compound_head is the
+ * same, so we can reuse the first page of the tail page structures. We
+ * remap the virtual addresses of the remaining 6 pages of tail page
+ * structures to the first tail page struct and then free these 6 pages.
+ * Therefore, we need to reserve at least 2 pages as vmemmap area.
+ */
+#define RESERVE_VMEMMAP_NR	2U
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int order = huge_page_order(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy system; the other pages will map to the first tail page. So
+	 * there are (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages that can be
+	 * freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
+	 * not expected to happen unless the system is corrupted. So on the
+	 * safe side, it is only a safety net.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+	else
+		h->nr_free_vmemmap_pages = 0;
+
+	pr_debug("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		 h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..40c0c7dfb60d
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void __init hugetlb_vmemmap_init(struct hstate *h);
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */

From patchwork Fri Nov 13 10:59:36 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902981
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
Date: Fri, 13 Nov 2020 18:59:36 +0800
Message-Id: <20201113105952.11638-6-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

On x86_64, the vmemmap is always PMD mapped if the machine has hugepage
support and the range is 2MB contiguous and PMD aligned. If we want to
free the unused vmemmap pages, we have to split the huge PMD first, so
we pre-allocate the page tables needed to split a PMD into PTEs.
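[Editor's note: a worked instance of the pre-allocation arithmetic
implemented by pgtable_pages_to_prealloc_per_hpage() below, as
standalone userspace C. Assumptions: 4K PTE-level pages, a 2MB
PMD-mapped vmemmap, and the 8 vmemmap pages per 2MB HugeTLB page
established in patch 04.]

#include <stdio.h>

/* Round x up to a power-of-two boundary a. */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long vmemmap_hpage = 2UL << 20;	/* one PMD mapping */
	/* A 2MB HugeTLB page has 8 vmemmap pages = 32KB (see patch 04). */
	unsigned long vmemmap_size = 8UL * 4096;

	/* One PTE page is needed per PMD that may have to be split. */
	printf("preallocate %lu page table page(s)\n",
	       ALIGN_UP(vmemmap_size, vmemmap_hpage) / vmemmap_hpage);
	return 0;
}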
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h | 12 +++++++++
 2 files changed, 85 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a6c9948302e2..b7dfa97b4ea9 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -71,6 +71,8 @@
  */
 #define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt

+#include
+#include
 #include "hugetlb_vmemmap.h"

 /*
@@ -83,6 +85,77 @@
  */
 #define RESERVE_VMEMMAP_NR	2U

+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE		((1UL) << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
+
+#define page_huge_pte(page)		((page)->pmd_huge_pte)
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
+
+static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+static inline unsigned int pgtable_pages_to_prealloc_per_hpage(struct hstate *h)
+{
+	unsigned long vmemmap_size = vmemmap_pages_size_per_hpage(h);
+
+	/*
+	 * No need to pre-allocate page tables when there are no vmemmap
+	 * pages to free.
+	 */
+	if (!free_vmemmap_pages_per_hpage(h))
+		return 0;
+
+	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
+}
+
+void vmemmap_pgtable_free(struct page *page)
+{
+	struct page *pte_page, *t_page;
+
+	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
+		list_del(&pte_page->lru);
+		pte_free_kernel(&init_mm, page_to_virt(pte_page));
+	}
+}
+
+int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+
+	/* Store preallocated pages on huge page lru list */
+	INIT_LIST_HEAD(&page->lru);
+
+	while (nr--) {
+		pte_t *pte_p;
+
+		pte_p = pte_alloc_one_kernel(&init_mm);
+		if (!pte_p)
+			goto out;
+		list_add(&virt_to_page(pte_p)->lru, &page->lru);
+	}
+
+	return 0;
+out:
+	vmemmap_pgtable_free(page);
+	return -ENOMEM;
+}
+
 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
	unsigned int order = huge_page_order(h);
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 40c0c7dfb60d..2a72d2f62411 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -9,12 +9,24 @@
 #ifndef _LINUX_HUGETLB_VMEMMAP_H
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
+#include

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
+void vmemmap_pgtable_free(struct page *page);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
+
+static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	return 0;
+}
+
+static inline void vmemmap_pgtable_free(struct page *page)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */

From patchwork Fri Nov 13 10:59:37 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902983
From patchwork Fri Nov 13 10:59:37 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11902983
From: Muchun Song
Subject: [PATCH v4 06/21] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
Date: Fri, 13 Nov 2020 18:59:37 +0800
Message-Id: <20201113105952.11638-7-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
In a later patch, we will use free_vmemmap_page() to free an unused vmemmap page, and prepare_vmemmap_page() to initialize a page before it is used as a vmemmap page.

Signed-off-by: Muchun Song
---
 include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..239e3cc8f86c 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H

 #include
+#include

 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+static inline void free_vmemmap_page(struct page *page)
+{
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	/* bootmem page has the reserved flag set in reserve_bootmem_region */
+	if (PageReserved(page)) {
+		unsigned long magic = (unsigned long)page->freelist;
+
+		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+			put_page_bootmem(page);
+		else
+			WARN_ON(1);
+	}
+}
+
+static inline void prepare_vmemmap_page(struct page *page)
+{
+	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
+
+	get_page_bootmem(section_nr, page, SECTION_INFO);
+	mark_page_reserved(page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
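The intended life cycle of these helpers: a vmemmap page that has been unmapped is handed to free_vmemmap_page(), while a freshly allocated replacement goes through prepare_vmemmap_page() before being wired into the page table. A minimal sketch of that pairing, assuming the caller keeps both batches on list_heads (illustrative only, not code from this series):

static void recycle_vmemmap_example(struct list_head *old_pages,
				    struct list_head *new_pages)
{
	struct page *page, *next;

	/* Release pages that were unmapped from the vmemmap. */
	list_for_each_entry_safe(page, next, old_pages, lru) {
		list_del(&page->lru);
		free_vmemmap_page(page);
	}

	/* Make newly allocated pages look like bootmem-backed vmemmap. */
	list_for_each_entry(page, new_pages, lru)
		prepare_vmemmap_page(page);
}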
From patchwork Fri Nov 13 10:59:38 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11902985
From: Muchun Song
Subject: [PATCH v4 07/21] mm/bootmem_info: Combine bootmem info and type into page->freelist
Date: Fri, 13 Nov 2020 18:59:38 +0800
Message-Id: <20201113105952.11638-8-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
page->private shares storage with page->ptl. A later patch will make use of page->ptl, so combine the bootmem info and type into page->freelist so that page->private is no longer needed.

Signed-off-by: Muchun Song
---
 arch/x86/mm/init_64.c        |  2 +-
 include/linux/bootmem_info.h | 18 ++++++++++++++++--
 mm/bootmem_info.c            | 12 ++++++------
 mm/sparse.c                  |  4 ++--
 4 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..9b738c6cb659 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -883,7 +883,7 @@ static void __meminit free_pagetable(struct page *page, int order)

 	if (PageReserved(page)) {
 		__ClearPageReserved(page);
-		magic = (unsigned long)page->freelist;
+		magic = page_bootmem_type(page);
 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) {
 			while (nr_pages--)
 				put_page_bootmem(page++);

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 239e3cc8f86c..95ae80838680 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -6,7 +6,7 @@
 #include

 /*
- * Types for free bootmem stored in page->lru.next. These have to be in
+ * Types for free bootmem stored in page->freelist. These have to be in
  * some random range in unsigned long space for debugging purposes.
  */
 enum {
@@ -17,6 +17,20 @@ enum {
 	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
 };

+#define BOOTMEM_TYPE_BITS	(ilog2(MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE) + 1)
+#define BOOTMEM_TYPE_MAX	((1UL << BOOTMEM_TYPE_BITS) - 1)
+#define BOOTMEM_INFO_MAX	(ULONG_MAX >> BOOTMEM_TYPE_BITS)
+
+static inline unsigned long page_bootmem_type(struct page *page)
+{
+	return (unsigned long)page->freelist & BOOTMEM_TYPE_MAX;
+}
+
+static inline unsigned long page_bootmem_info(struct page *page)
+{
+	return (unsigned long)page->freelist >> BOOTMEM_TYPE_BITS;
+}
+
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
@@ -30,7 +44,7 @@ static inline void free_vmemmap_page(struct page *page)

 	/* bootmem page has the reserved flag set in reserve_bootmem_region */
 	if (PageReserved(page)) {
-		unsigned long magic = (unsigned long)page->freelist;
+		unsigned long magic = page_bootmem_type(page);

 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
 			put_page_bootmem(page);

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index fcab5a3f8cc0..9baf163965fd 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -12,9 +12,9 @@
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type)
 {
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
+	BUG_ON(info > BOOTMEM_INFO_MAX);
+	BUG_ON(type > BOOTMEM_TYPE_MAX);
+	page->freelist = (void *)((info << BOOTMEM_TYPE_BITS) | type);
 	page_ref_inc(page);
 }

@@ -22,14 +22,12 @@ void put_page_bootmem(struct page *page)
 {
 	unsigned long type;

-	type = (unsigned long) page->freelist;
+	type = page_bootmem_type(page);
 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
 	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);

 	if (page_ref_dec_return(page) == 1) {
 		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);
 		free_reserved_page(page);
 	}

@@ -101,6 +99,8 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 	int node = pgdat->node_id;
 	struct page *page;

+	BUILD_BUG_ON(MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE > BOOTMEM_TYPE_MAX);
+
 	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
 	page = virt_to_page(pgdat);

diff --git a/mm/sparse.c b/mm/sparse.c
index a4138410d890..fca5fa38c2bc 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -740,12 +740,12 @@ static void free_map_bootmem(struct page *memmap)
 		>> PAGE_SHIFT;

 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = (unsigned long) page->freelist;
+		magic = page_bootmem_type(page);

 		BUG_ON(magic == NODE_INFO);

 		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = page_private(page);
+		removing_section_nr = page_bootmem_info(page);

 		/*
 		 * When this function is called, the removing section is
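The packing is a plain shift-and-mask scheme: the type lives in the low BOOTMEM_TYPE_BITS bits and the info in the remaining high bits. A tiny sketch of the round trip (illustrative only; the WARN_ONs just assert the inverse operations):

static void bootmem_pack_example(void)
{
	unsigned long info = 42, type = SECTION_INFO;
	unsigned long packed = (info << BOOTMEM_TYPE_BITS) | type;

	WARN_ON((packed & BOOTMEM_TYPE_MAX) != type);	/* page_bootmem_type() */
	WARN_ON((packed >> BOOTMEM_TYPE_BITS) != info);	/* page_bootmem_info() */
}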
header.b="Qs7/EY2O" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 616DE22253 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8CECD6B00B1; Fri, 13 Nov 2020 06:02:29 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 87E216B00B2; Fri, 13 Nov 2020 06:02:29 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 71DE96B00B3; Fri, 13 Nov 2020 06:02:29 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0233.hostedemail.com [216.40.44.233]) by kanga.kvack.org (Postfix) with ESMTP id 445E36B00B1 for ; Fri, 13 Nov 2020 06:02:29 -0500 (EST) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id DD0A38249980 for ; Fri, 13 Nov 2020 11:02:28 +0000 (UTC) X-FDA: 77479106376.20.girl22_09104402730e Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id B5290180C07AB for ; Fri, 13 Nov 2020 11:02:28 +0000 (UTC) X-Spam-Summary: 1,0,0,563aa1653d7763e7,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1535:1542:1711:1730:1747:1777:1792:2194:2196:2199:2200:2393:2559:2562:2693:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3870:3871:3874:4117:4321:4385:5007:6119:6261:6653:6737:6738:7903:8957:9010:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:13894:14096:14110:14181:14394:14721:21080:21222:21444:21451:21627:21990:30029:30054,0,RBL:209.85.210.196:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04y8wrgocrf9rh9ubq4oihj4aq5xfoc9kbp7t314x9bzhrbrhn9depucqfee5ma.sb41zsw6kc6nsz3xtmikr3pf1hx3wowihjscm1kmtitxquhujyxrd1opku3qx9e.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:69,LUA_SUMMARY:none X-HE-Tag: girl22_09104402730e X-Filterd-Recvd-Size: 6168 Received: from mail-pf1-f196.google.com (mail-pf1-f196.google.com [209.85.210.196]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Fri, 13 Nov 2020 11:02:28 +0000 (UTC) Received: by mail-pf1-f196.google.com with SMTP id c66so7348052pfa.4 for ; Fri, 13 Nov 2020 03:02:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Cn1eur0hRRQaGJzuu65rghQYy8+1f9C2xe6gNwoWm68=; b=Qs7/EY2O7pRYN07lxQyNdfBkhlRaUWLkq0EuDrDXsHifqGUSi5pr8HXyqf8Xst8cLQ dSzl5OUZ0F4lAkTKSBFUZ9l+yO63guzHlT+LZZieFQoWkpTyJ9Ss6Dz0f/1grscLe8ai Qp45KvGwqMIQQJ5/l5iDlIkabIi14mdtPEDbVDoIOXOPxcg9QCxeocUYZ36ndhTv5zHO mJrd7MeM2yQawQnn1CrJaLMepWz2dNjx+V1h9paZUAySAdU0JaAeoYI+8fPlwBC1wwnS d4UwcY6yLfjtZrr8DORBjA/dI0KVQpXavlyny76UGOejpABWicFbHCoJyI4pO0pM7zXV bvUQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Cn1eur0hRRQaGJzuu65rghQYy8+1f9C2xe6gNwoWm68=; 
A later patch will use the vmemmap page table lock to guard splitting of the vmemmap PMD, so initialize the lock here.

Signed-off-by: Muchun Song
---
 mm/hugetlb_vmemmap.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b7dfa97b4ea9..332c131c01a8 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -71,6 +71,8 @@
  */
 #define pr_fmt(fmt) "HugeTLB Vmemmap: " fmt

+#include
+#include
 #include
 #include
 #include "hugetlb_vmemmap.h"

@@ -179,3 +181,70 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_debug("can free %d vmemmap pages for %s\n",
 		 h->nr_free_vmemmap_pages, h->name);
 }
+
+static int __init vmemmap_pud_entry(pud_t *pud, unsigned long addr,
+				    unsigned long next, struct mm_walk *walk)
+{
+	struct page *page = pud_page(*pud);
+
+	/*
+	 * The page->private shares storage with page->ptl. So make sure
+	 * that PG_private is not set and initialize page->private to
+	 * zero.
+	 */
+	VM_BUG_ON_PAGE(PagePrivate(page), page);
+	set_page_private(page, 0);
+
+	BUG_ON(!pmd_ptlock_init(page));
+
+	return 0;
+}
+
+static void __init vmemmap_ptlock_init_section(unsigned long start_pfn)
+{
+	unsigned long section_nr;
+	struct mem_section *ms;
+	struct page *memmap, *memmap_end;
+	struct mm_struct *mm = &init_mm;
+
+	const struct mm_walk_ops ops = {
+		.pud_entry	= vmemmap_pud_entry,
+	};
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+	memmap_end = memmap + PAGES_PER_SECTION;
+
+	mmap_read_lock(mm);
+	BUG_ON(walk_page_range_novma(mm, (unsigned long)memmap,
+				     (unsigned long)memmap_end,
+				     &ops, NULL, NULL));
+	mmap_read_unlock(mm);
+}
+
+static void __init vmemmap_ptlock_init_node(int nid)
+{
+	unsigned long pfn, end_pfn;
+	struct pglist_data *pgdat = NODE_DATA(nid);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION)
+		vmemmap_ptlock_init_section(pfn);
+}
+
+static int __init vmemmap_ptlock_init(void)
+{
+	int nid;
+
+	if (!hugepages_supported())
+		return 0;
+
+	for_each_online_node(nid)
+		vmemmap_ptlock_init_node(nid);
+
+	return 0;
+}
+core_initcall(vmemmap_ptlock_init);
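With the ptlock initialized on every vmemmap PMD page, later patches can take it around the PMD split so that concurrent operations on huge pages sharing one vmemmap PMD serialize. A sketch of the pattern this enables (illustrative only; the real split logic arrives in the next patch of the series):

static void guard_vmemmap_pmd_example(pmd_t *pmd)
{
	/* pmd_lock() uses the ptlock set up by pmd_ptlock_init() above. */
	spinlock_t *ptl = pmd_lock(&init_mm, pmd);

	if (pmd_huge(*pmd)) {
		/* ... split the huge PMD into a PTE table here ... */
	}

	spin_unlock(ptl);
}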
From patchwork Fri Nov 13 10:59:40 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11902989
From: Muchun Song
Subject: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
Date: Fri, 13 Nov 2020 18:59:40 +0800
Message-Id: <20201113105952.11638-10-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
When we allocate a hugetlb page from the buddy, we should free the unused vmemmap pages associated with it. We can do that in prep_new_huge_page().

Signed-off-by: Muchun Song
---
 arch/x86/include/asm/hugetlb.h          |   9 ++
 arch/x86/include/asm/pgtable_64_types.h |   8 ++
 mm/hugetlb.c                            |  16 +++
 mm/hugetlb_vmemmap.c                    | 188 ++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                    |   5 +
 5 files changed, 226 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index 1721b1aadeb1..c601fe042832 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -4,6 +4,15 @@

 #include
 #include
+#include
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_large(*pmd);
+}
+#endif

 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..bedbd2e7d06c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */

+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ *                a struct page array.
+ */
+#define VMEMMAP_SIZE	(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
+				 1 + ilog2(sizeof(struct page))))
+#define VMEMMAP_END	(VMEMMAP_START + VMEMMAP_SIZE)
+
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)

 #define MODULES_VADDR	(__START_KERNEL_map + KERNEL_IMAGE_SIZE)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f88032c24667..a0ce6f33a717 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1499,6 +1499,14 @@ void free_huge_page(struct page *page)

 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+	/*
+	 * Because we store preallocated pages on @page->lru,
+	 * vmemmap_pgtable_free() must be called before the
+	 * initialization of @page->lru in INIT_LIST_HEAD().
+	 */
+	vmemmap_pgtable_free(page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);

@@ -1751,6 +1759,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 	if (!page)
 		return NULL;

+	if (vmemmap_pgtable_prealloc(h, page)) {
+		if (hstate_is_gigantic(h))
+			free_gigantic_page(page, huge_page_order(h));
+		else
+			put_page(page);
+		return NULL;
+	}
+
 	if (hstate_is_gigantic(h))
 		prep_compound_gigantic_page(page, huge_page_order(h));
 	prep_new_huge_page(h, page, page_to_nid(page));

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 332c131c01a8..937562a15f1e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "hugetlb_vmemmap.h"

@@ -86,6 +87,8 @@
  * reserve at least 2 pages as vmemmap areas.
  */
 #define RESERVE_VMEMMAP_NR	2U
+#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define TAIL_PAGE_REUSE		-1

 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT	HPAGE_SHIFT
@@ -97,6 +100,21 @@

 #define page_huge_pte(page)	((page)->pmd_huge_pte)

+#define vmemmap_hpage_addr_end(addr, end)				 \
+({									 \
+	unsigned long __boundary;					 \
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
+})
+
+#ifndef vmemmap_pmd_huge
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_huge(*pmd);
+}
+#endif
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;

@@ -158,6 +176,176 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 	return -ENOMEM;
 }

+/*
+ * Walk a vmemmap address to the pmd it maps.
+ */
+static pmd_t *vmemmap_to_pmd(unsigned long page)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	if (page < VMEMMAP_START || page >= VMEMMAP_END)
+		return NULL;
+
+	pgd = pgd_offset_k(page);
+	if (pgd_none(*pgd))
+		return NULL;
+	p4d = p4d_offset(pgd, page);
+	if (p4d_none(*p4d))
+		return NULL;
+	pud = pud_offset(p4d, page);
+
+	if (pud_none(*pud) || pud_bad(*pud))
+		return NULL;
+	pmd = pmd_offset(pud, page);
+
+	return pmd;
+}
+
+static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
+{
+	return pmd_lock(&init_mm, pmd);
+}
+
+static inline int freed_vmemmap_hpage(struct page *page)
+{
+	return atomic_read(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_inc(struct page *page)
+{
+	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_dec(struct page *page)
+{
+	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					 unsigned long start,
+					 unsigned long end,
+					 struct list_head *free_pages)
+{
+	/* Make sure that the tail pages are mapped read-only. */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					 unsigned long addr,
+					 struct list_head *free_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					     free_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
+{
+	int i;
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct mm_struct *mm = &init_mm;
+	struct page *page;
+	pmd_t old_pmd, _pmd;
+
+	old_pmd = READ_ONCE(*pmd);
+	page = pmd_page(old_pmd);
+	pmd_populate_kernel(mm, &_pmd, pte_p);
+
+	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
+		pte_t entry, *pte;
+
+		entry = mk_pte(page + i, pgprot);
+		pte = pte_offset_kernel(&_pmd, addr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, addr, pte, entry);
+	}
+
+	/* make pte visible before pmd */
+	smp_wmb();
+	pmd_populate_kernel(mm, pmd, pte_p);
+}
+
+static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
+{
+	struct page *pte_page, *t_page;
+	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
+	unsigned long addr = start;
+
+	list_for_each_entry_safe(pte_page, t_page, &head->lru, lru) {
+		list_del(&pte_page->lru);
+		VM_BUG_ON(freed_vmemmap_hpage(pte_page));
+		split_vmemmap_pmd(pmd++, page_to_virt(pte_page), addr);
+		addr += VMEMMAP_HPAGE_SIZE;
+	}
+
+	flush_tlb_kernel_range(start, addr);
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(free_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	pmd = vmemmap_to_pmd((unsigned long)head);
+	BUG_ON(!pmd);
+
+	ptl = vmemmap_pmd_lock(pmd);
+	if (vmemmap_pmd_huge(pmd))
+		split_vmemmap_huge_page(head, pmd);
+
+	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	freed_vmemmap_hpage_inc(pmd_page(*pmd));
+	spin_unlock(ptl);
+
+	free_vmemmap_page_list(&free_pages);
+}
+
 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 2a72d2f62411..fb8b77659ed5 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,7 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -28,5 +29,9 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
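To make the sizes concrete: with 4KB base pages and a 64-byte struct page, a 2MB huge page (512 base pages) is described by 32KB of vmemmap, i.e. 8 pages, and with RESERVE_VMEMMAP_NR = 2 kept mapped, 6 of them can be freed. A back-of-the-envelope sketch, assuming those x86_64 defaults:

static void vmemmap_arithmetic_example(void)
{
	unsigned long vmemmap_bytes = 512 * 64;			/* 32768 */
	unsigned long vmemmap_pages = vmemmap_bytes / 4096;	/* 8 */
	unsigned long freeable = vmemmap_pages - RESERVE_VMEMMAP_NR;

	(void)freeable;	/* 6 of the 8 vmemmap pages go back to the buddy */
}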
From patchwork Fri Nov 13 10:59:41 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11902995
From: Muchun Song
Subject: [PATCH v4 10/21] mm/hugetlb: Defer freeing of hugetlb pages
Date: Fri, 13 Nov 2020 18:59:41 +0800
Message-Id: <20201113105952.11638-11-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
In the subsequent patch, we will allocate the vmemmap pages when freeing huge pages. But update_and_free_page() can be called from a non-task context (and with hugetlb_lock held), so defer the actual freeing to a workqueue to avoid having to allocate the vmemmap pages with GFP_ATOMIC.

Signed-off-by: Muchun Song
---
 mm/hugetlb.c         | 98 +++++++++++++++++++++++++++++++++++++++++++++-------
 mm/hugetlb_vmemmap.c |  5 ---
 mm/hugetlb_vmemmap.h | 10 ++++++
 3 files changed, 96 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a0ce6f33a717..4aabf12aca9b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1221,7 +1221,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }

-static void free_gigantic_page(struct page *page, unsigned int order)
+static void __free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,

@@ -1288,20 +1288,100 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void __free_gigantic_page(struct page *page,
+					unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						  unsigned int order) { }
 #endif

-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() can be called from a non-task context (and
+ * with hugetlb_lock held), we defer the actual freeing to a workqueue to
+ * avoid using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	__free_gigantic_page(page, huge_page_order(h));
+}
+#else
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in __free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	__free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
+#endif
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;

 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |

@@ -1313,14 +1393,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}

@@ -1761,7 +1835,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,

 	if (vmemmap_pgtable_prealloc(h, page)) {
 		if (hstate_is_gigantic(h))
-			free_gigantic_page(page, huge_page_order(h));
+			free_gigantic_page(h, page);
 		else
 			put_page(page);
 		return NULL;

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 937562a15f1e..e6fca02b57b2 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -115,11 +115,6 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif

-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return h->nr_free_vmemmap_pages;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index fb8b77659ed5..a23fb1375859 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -16,6 +16,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -33,5 +38,10 @@ static inline void vmemmap_pgtable_free(struct page *page)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
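The deferral trick reuses page->mapping as an llist_node: both are a single pointer-sized field, and mapping is cleared before the page is freed anyway. The same producer/consumer pattern in isolation, as a self-contained sketch (names are illustrative, not from the patch):

static LLIST_HEAD(deferred_list);

static void deferred_workfn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&deferred_list);

	while (node) {
		struct llist_node *next = node->next;
		/* ... free the object that embeds @node ... */
		node = next;
	}
}
static DECLARE_WORK(deferred_work, deferred_workfn);

static void defer_free(struct llist_node *node)
{
	/* llist_add() returns true only if the list was previously empty. */
	if (llist_add(node, &deferred_list))
		schedule_work(&deferred_work);
}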
From patchwork Fri Nov 13 10:59:42 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11902999
From: Muchun Song
Subject: [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
Date: Fri, 13 Nov 2020 18:59:42 +0800
Message-Id: <20201113105952.11638-12-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
From patchwork Fri Nov 13 10:59:42 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
Date: Fri, 13 Nov 2020 18:59:42 +0800
Message-Id: <20201113105952.11638-12-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

When we free a hugetlb page to the buddy allocator, we should allocate the vmemmap pages associated with it again. We can do that in __free_hugepage().
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |   2 ++
 mm/hugetlb_vmemmap.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h |   5 +++
 3 files changed, 107 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4aabf12aca9b..ba927ae7f9bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1382,6 +1382,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index e6fca02b57b2..9918dc63c062 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -89,6 +89,8 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define TAIL_PAGE_REUSE		-1
+#define GFP_VMEMMAP_PAGE	\
+	(GFP_KERNEL | __GFP_NOFAIL | __GFP_MEMALLOC)
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT	HPAGE_SHIFT
@@ -219,6 +221,104 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					  unsigned long start,
+					  unsigned long end,
+					  struct list_head *remap_pages)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+		pte_t entry, old = *ptep;
+
+		page = list_first_entry_or_null(remap_pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data written to @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+
+		entry = mk_pte(page, pgprot);
+		set_pte_at(&init_mm, addr, ptep++, entry);
+
+		VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse);
+	}
+}
+
+static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					  unsigned long addr,
+					  struct list_head *remap_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					      remap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	int i;
+
+	for (i = 0; i < free_vmemmap_pages_per_hpage(h); i++) {
+		struct page *page;
+
+		/* This should not fail */
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(remap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &remap_pages);
+
+	pmd = vmemmap_to_pmd((unsigned long)head);
+	BUG_ON(!pmd);
+
+	ptl = vmemmap_pmd_lock(pmd);
+	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
+				      &remap_pages);
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+		/*
+		 * Todo:
+		 * Merge pte to huge pmd if it has ever been split.
+		 */
+	}
+	spin_unlock(ptl);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a23fb1375859..a5054f310528 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,7 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
@@ -35,6 +36,10 @@ static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
 
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
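[Editor's note] A quick sanity check of how many pages alloc_vmemmap_pages() has to hand back per huge page. With 4 KiB base pages, a 64-byte struct page, and the RESERVE_VMEMMAP_NR of 2 defined above (the first two constants are x86-64 assumptions, not stated in the patch), a 2 MiB huge page has 8 vmemmap pages, 6 of which are re-allocated here, and a 1 GiB huge page has 4096, 4094 of which are re-allocated. A standalone sketch of that arithmetic:

#include <stdio.h>

int main(void)
{
	const unsigned long base_page = 4096;	/* assumed 4 KiB base page */
	const unsigned long struct_page = 64;	/* assumed sizeof(struct page) */
	const unsigned long reserve = 2;	/* RESERVE_VMEMMAP_NR above */
	const unsigned int orders[] = { 9, 18 };	/* 2 MiB and 1 GiB on x86-64 */

	for (int i = 0; i < 2; i++) {
		unsigned long subpages = 1UL << orders[i];
		unsigned long vmemmap = subpages * struct_page / base_page;

		/* pages freed at hugepage creation == pages re-allocated on free */
		printf("order %u: %lu vmemmap pages, %lu re-allocated by alloc_vmemmap_pages()\n",
		       orders[i], vmemmap, vmemmap - reserve);
	}
	return 0;
}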
From patchwork Fri Nov 13 10:59:43 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
Date: Fri, 13 Nov 2020 18:59:43 +0800
Message-Id: <20201113105952.11638-13-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

__free_huge_page_pmd_vmemmap and __remap_huge_page_pmd_vmemmap are almost the same code, so introduce a remap_huge_page_pmd_vmemmap helper to simplify the code.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 108 +++++++++++++++++++++------------------------------
 1 file changed, 45 insertions(+), 63 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 9918dc63c062..ae9dbfb682ab 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -221,6 +221,47 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+typedef void (*remap_pte_fn)(struct page *reuse, pte_t *ptep,
+			     unsigned long start, unsigned long end,
+			     struct list_head *pages);
+
+static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					unsigned long addr,
+					struct list_head *pages,
+					remap_pte_fn remap_fn)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	flush_cache_vunmap(start, end);
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		remap_fn(reuse, ptep, addr, next, pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
 					  unsigned long end,
@@ -255,31 +296,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					  unsigned long addr,
-					  struct list_head *remap_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					      remap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -308,8 +324,8 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	BUG_ON(!pmd);
 
 	ptl = vmemmap_pmd_lock(pmd);
-	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
-				      &remap_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
+				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
 		/*
 		 * Todo:
@@ -319,16 +335,6 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	spin_unlock(ptl);
 }
 
-static inline void free_vmemmap_page_list(struct list_head *list)
-{
-	struct page *page, *next;
-
-	list_for_each_entry_safe(page, next, list, lru) {
-		list_del(&page->lru);
-		free_vmemmap_page(page);
-	}
-}
-
 static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					 unsigned long start,
 					 unsigned long end,
@@ -351,31 +357,6 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					 unsigned long addr,
-					 struct list_head *free_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					     free_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
 {
 	int i;
@@ -434,7 +415,8 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	if (vmemmap_pmd_huge(pmd))
 		split_vmemmap_huge_page(head, pmd);
 
-	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
+				    __free_huge_page_pte_vmemmap);
 
 	freed_vmemmap_hpage_inc(pmd_page(*pmd));
 	spin_unlock(ptl);
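[Editor's note] The point of the refactor is that one walker owns the loop, cache flush, and TLB flush while the per-pte policy is passed in as a callback. A minimal userspace C sketch of the same function-pointer pattern; every name here is invented for illustration, none come from the patch:

#include <stdio.h>

/* Stand-in for remap_pte_fn: one callback per per-entry policy. */
typedef void (*visit_fn)(unsigned long *entry);

/* One shared loop, like remap_huge_page_pmd_vmemmap(). */
static void walker(unsigned long *table, int n, visit_fn visit)
{
	for (int i = 0; i < n; i++)
		visit(&table[i]);
}

static void zap(unsigned long *entry)  { *entry = 0; }		/* "free" policy */
static void fill(unsigned long *entry) { *entry = 0xff; }	/* "remap" policy */

int main(void)
{
	unsigned long table[4] = { 1, 2, 3, 4 };

	walker(table, 4, zap);		/* same walker, different callback */
	walker(table, 4, fill);
	printf("%lx %lx %lx %lx\n", table[0], table[1], table[2], table[3]);
	return 0;
}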
From patchwork Fri Nov 13 10:59:44 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 13/21] mm/hugetlb: Use PG_slab to indicate split pmd
Date: Fri, 13 Nov 2020 18:59:44 +0800
Message-Id: <20201113105952.11638-14-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

When we allocate a hugetlb page from the buddy, we may need to split the huge pmd into ptes. When we free the hugetlb page, we can merge the ptes back into a pmd. So we need a way to record whether the pmd has been split. Page tables are never allocated from slab, so we can reuse PG_slab on the page-table page to indicate that its pmd has been split.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ae9dbfb682ab..58bff13a2301 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -262,6 +262,25 @@ static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline bool pmd_split(pmd_t *pmd)
+{
+	return PageSlab(pmd_page(*pmd));
+}
+
+static inline void set_pmd_split(pmd_t *pmd)
+{
+	/*
+	 * We should not use slab for page table allocation. So we can set
+	 * PG_slab to indicate that the pmd has been split.
+	 */
+	__SetPageSlab(pmd_page(*pmd));
+}
+
+static inline void clear_pmd_split(pmd_t *pmd)
+{
+	__ClearPageSlab(pmd_page(*pmd));
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
 					  unsigned long end,
@@ -326,11 +345,12 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	ptl = vmemmap_pmd_lock(pmd);
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
 				    __remap_huge_page_pte_vmemmap);
-	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
 		 * Todo:
 		 * Merge pte to huge pmd if it has ever been split.
 		 */
+		clear_pmd_split(pmd);
 	}
 	spin_unlock(ptl);
 }
@@ -412,8 +432,10 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	BUG_ON(!pmd);
 
 	ptl = vmemmap_pmd_lock(pmd);
-	if (vmemmap_pmd_huge(pmd))
+	if (vmemmap_pmd_huge(pmd)) {
 		split_vmemmap_huge_page(head, pmd);
+		set_pmd_split(pmd);
+	}
 
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
 				    __free_huge_page_pte_vmemmap);
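[Editor's note] Reusing PG_slab works only because the two users are mutually exclusive: a page-table page can never also be a slab page, so the bit is free to carry private state. A small userspace analogy of parking a marker in an otherwise-unused flag bit; the bit position is arbitrary here and is not the kernel's actual PG_slab value:

#include <assert.h>

#define FLAG_SPLIT (1UL << 9)	/* arbitrary spare bit, stands in for PG_slab */

struct fake_page { unsigned long flags; };

static int  pmd_split(const struct fake_page *p) { return !!(p->flags & FLAG_SPLIT); }
static void set_pmd_split(struct fake_page *p)   { p->flags |= FLAG_SPLIT; }
static void clear_pmd_split(struct fake_page *p) { p->flags &= ~FLAG_SPLIT; }

int main(void)
{
	struct fake_page pt = { .flags = 0 };

	set_pmd_split(&pt);	/* split path: split_vmemmap_huge_page() */
	assert(pmd_split(&pt));
	clear_pmd_split(&pt);	/* merge-back path in alloc_huge_page_vmemmap() */
	assert(!pmd_split(&pt));
	return 0;
}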
From patchwork Fri Nov 13 10:59:45 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page
Date: Fri, 13 Nov 2020 18:59:45 +0800
Message-Id: <20201113105952.11638-15-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

Gigantic pages are allocated from bootmem. If we want to free their unused vmemmap pages, we also need page tables for the remapping, so allocate those page tables from bootmem as well.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  5 +++++
 mm/hugetlb_vmemmap.c    | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 13 ++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eed3dd3bd626..da18fc9ed152 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -506,6 +506,9 @@ struct hstate {
 struct huge_bootmem_page {
 	struct list_head list;
 	struct hstate *hstate;
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	pte_t *vmemmap_pte;
+#endif
 };
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba927ae7f9bd..055604d07046 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2607,6 +2607,7 @@ static void __init gather_bootmem_prealloc(void)
 		WARN_ON(page_count(page) != 1);
 		prep_compound_huge_page(page, h->order);
 		WARN_ON(PageReserved(page));
+		gather_vmemmap_pgtable_init(m, page);
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
 
@@ -2659,6 +2660,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 			break;
 		cond_resched();
 	}
+
+	if (hstate_is_gigantic(h))
+		i -= gather_vmemmap_pgtable_prealloc();
+
 	if (i < h->max_huge_pages) {
 		char buf[32];
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 58bff13a2301..47f81e0b3832 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -75,6 +75,7 @@
 #include
 #include
 #include
+#include <linux/memblock.h>
 #include
 #include "hugetlb_vmemmap.h"
@@ -173,6 +174,60 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 	return -ENOMEM;
 }
 
+unsigned long __init gather_vmemmap_pgtable_prealloc(void)
+{
+	struct huge_bootmem_page *m, *tmp;
+	unsigned long nr_free = 0;
+
+	list_for_each_entry_safe(m, tmp, &huge_boot_pages, list) {
+		struct hstate *h = m->hstate;
+		unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+		unsigned int pgtable_size;
+
+		if (!nr)
+			continue;
+
+		pgtable_size = nr << PAGE_SHIFT;
+		m->vmemmap_pte = memblock_alloc_try_nid(pgtable_size,
+				PAGE_SIZE, 0, MEMBLOCK_ALLOC_ACCESSIBLE,
+				NUMA_NO_NODE);
+		if (!m->vmemmap_pte) {
+			nr_free++;
+			list_del(&m->list);
+			memblock_free_early(__pa(m), huge_page_size(h));
+		}
+	}
+
+	return nr_free;
+}
+
+void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					struct page *page)
+{
+	struct hstate *h = m->hstate;
+	unsigned long pte = (unsigned long)m->vmemmap_pte;
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+	unsigned int i = nr;	/* loop on a copy so @nr survives for the count below */
+
+	/* Store preallocated pages on huge page lru list */
+	INIT_LIST_HEAD(&page->lru);
+
+	while (i--) {
+		struct page *pte_page = virt_to_page(pte);
+
+		__ClearPageReserved(pte_page);
+		list_add(&pte_page->lru, &page->lru);
+		pte += PAGE_SIZE;
+	}
+
+	/*
+	 * If we had gigantic hugepages allocated at boot time, we need
+	 * to restore the 'stolen' pages to totalram_pages in order to
+	 * fix confusing memory reports from free(1) and other
+	 * side-effects, like CommitLimit going negative.
+	 */
+	adjust_managed_page_count(page, nr);
+}
+
 /*
  * Walk a vmemmap address to the pmd it maps.
 */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a5054f310528..79f330bb0714 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,9 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+unsigned long __init gather_vmemmap_pgtable_prealloc(void);
+void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					struct page *page);
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
@@ -36,6 +39,16 @@ static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
 
+static inline unsigned long gather_vmemmap_pgtable_prealloc(void)
+{
+	return 0;
+}
+
+static inline void gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+}
+
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
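[Editor's note] To get a feel for the bootmem cost, assume 4 KiB base pages, a 64-byte struct page, 2 MiB PMDs, and that pgtable_pages_to_prealloc_per_hpage() reserves one PTE page per PMD covering the huge page's vmemmap; these are x86-64 assumptions about the helper's semantics, not values stated in the patch. A 1 GiB gigantic page then has 16 MiB of vmemmap, i.e. 8 PMDs, so roughly 8 PTE pages (32 KiB) come out of bootmem per gigantic page. A standalone sketch of that arithmetic:

#include <stdio.h>

int main(void)
{
	const unsigned long base = 4096, struct_page = 64;	/* assumptions */
	const unsigned long pmd_span = 2UL << 20;		/* 2 MiB per PMD */
	unsigned long subpages = 1UL << 18;			/* 1 GiB / 4 KiB */
	unsigned long vmemmap_bytes = subpages * struct_page;
	unsigned long pte_pages = vmemmap_bytes / pmd_span;

	printf("vmemmap: %lu MiB, PTE pages preallocated: %lu (%lu KiB)\n",
	       vmemmap_bytes >> 20, pte_pages, pte_pages * base >> 10);
	return 0;
}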
From patchwork Fri Nov 13 10:59:46 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Fri, 13 Nov 2020 18:59:46 +0800
Message-Id: <20201113105952.11638-16-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

Because we reuse the first tail page, setting PageHWPoison on a tail page may actually mark a series of reused pages. So use head[4].private to record the index of the real error page, and set PageHWPoison on that raw error page later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 11 +++--------
 mm/hugetlb_vmemmap.h | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 055604d07046..b853aacd5c16 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1383,6 +1383,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1944,14 +1945,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 79f330bb0714..b09fd658ce20 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,6 +21,29 @@ void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+	struct page *page = head;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	if (PageHWPoison(head))
+		set_page_private(head + 4, page - head);
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -57,6 +80,22 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (PageHWPoison(head) && page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
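[Editor's note] The trick is that once tail struct pages are shared, only the head page's flag is trustworthy, so the head must record which subpage was really poisoned. A runnable userspace analogy using a spare field on one element of an array; the struct and field names are invented for illustration:

#include <assert.h>

struct fake_page {
	unsigned long flags;	/* bit 0 stands in for PageHWPoison */
	unsigned long private;	/* head[4].private in the patch */
};

/* Record the raw error page's index on the head, like set_subpage_hwpoison(). */
static void record_poison(struct fake_page *head, struct fake_page *page)
{
	if (head->flags & 1)
		head[4].private = page - head;
}

/* Later, move the poison bit to the recorded subpage, like subpage_hwpoison_deliver(). */
static void deliver_poison(struct fake_page *head)
{
	struct fake_page *page = head;

	if (head->flags & 1)
		page = head + head[4].private;
	if (page != head) {
		page->flags |= 1;
		head->flags &= ~1UL;
	}
}

int main(void)
{
	struct fake_page hpage[512] = { { 0, 0 } };

	hpage[0].flags = 1;			/* poison reported on the head */
	record_poison(hpage, &hpage[7]);	/* the raw error page is index 7 */
	deliver_poison(hpage);
	assert((hpage[7].flags & 1) && !(hpage[0].flags & 1));
	return 0;
}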
From patchwork Fri Nov 13 10:59:47 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 16/21] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Fri, 13 Nov 2020 18:59:47 +0800
Message-Id: <20201113105952.11638-17-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>

We should flush the update work when dissolving a hugetlb page, to make sure the hugetlb page really has been freed to the buddy allocator before we return.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b853aacd5c16..9aad0b63d369 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1328,6 +1328,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1928,6 +1934,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1941,8 +1948,9 @@ int dissolve_free_huge_page(struct page *page)
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1956,6 +1964,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before return to make sure that
+	 * the HugeTLB page is freed to the buddy.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
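[Editor's note] flush_work() returns only after the pending work item has completed, which is what lets dissolve_free_huge_page() assume the page really reached the buddy allocator afterwards. A runnable userspace analogy of that ordering guarantee, using pthreads rather than any kernel API (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pending = 1;	/* guarded by lock, like a queued work item */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;

static void *workfn(void *arg)
{
	sleep(1);		/* the deferred vmemmap allocation + free */
	pthread_mutex_lock(&lock);
	pending = 0;
	pthread_cond_signal(&done);
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void flush_worker(void)	/* stands in for flush_work() */
{
	pthread_mutex_lock(&lock);
	while (pending)
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, workfn, NULL);
	flush_worker();	/* dissolve path: wait until the free has completed */
	puts("the huge page is guaranteed to be back in the buddy here");
	pthread_join(worker, NULL);
	return 0;
}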
Subject: [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Fri, 13 Nov 2020 18:59:48 +0800
Message-Id: <20201113105952.11638-18-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

Add a kernel parameter, hugetlb_free_vmemmap, to allow the feature of
freeing unused vmemmap pages associated with each HugeTLB page to be
disabled at boot time.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 mm/hugetlb_vmemmap.c                            | 22 ++++++++++++++++++++++
 3 files changed, 34 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..ccf07293cb63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on (default) | off }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..7d6129ee97dd 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this disables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
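The mm/hugetlb_vmemmap.c change follows below. To make the documentation
above concrete, disabling the feature from the boot loader might look like
this (a sketch only; the kernel image path, root device, and huge page pool
parameters are placeholders for whatever a given system already uses):

    linux /boot/vmlinuz root=/dev/sda1 ro hugetlb_free_vmemmap=off hugepagesz=2M hugepages=256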
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 47f81e0b3832..1528b156920c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -118,6 +118,22 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+static bool hugetlb_free_vmemmap_disabled __initdata;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "off"))
+		hugetlb_free_vmemmap_disabled = true;
+	else if (strcmp(buf, "on"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -505,6 +521,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
+	if (hugetlb_free_vmemmap_disabled) {
+		h->nr_free_vmemmap_pages = 0;
+		pr_info("disable free vmemmap pages for %s\n", h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page are not to be freed to buddy
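Given the pr_info() in hugetlb_vmemmap_init() above, booting with
hugetlb_free_vmemmap=off would be expected to print one such line per hstate.
The following is only an illustration (the timestamps and the choice of 2 MB
and 1 GB pools are invented; h->name follows hugetlb's hugepages-<size>kB
naming convention):

    [    0.0512] disable free vmemmap pages for hugepages-2048kB
    [    0.0513] disable free vmemmap pages for hugepages-1048576kB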
From patchwork Fri Nov 13 10:59:49 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11903021
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 18/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page
Date: Fri, 13 Nov 2020 18:59:49 +0800
Message-Id: <20201113105952.11638-19-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

Merge the PTEs back into a huge PMD if the PMD has ever been split. For
now, only gigantic pages whose vmemmap size is an integer multiple of
PMD_SIZE are supported; this is the simplest case to handle.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h |   8 +++
 mm/hugetlb_vmemmap.c           | 118 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 124 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index c601fe042832..1de1c519a84a 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -12,6 +12,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 {
 	return pmd_large(*pmd);
 }
+
+#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	pte_t entry = pfn_pte(page_to_pfn(page), PAGE_KERNEL_LARGE);
+
+	return __pmd(pte_val(entry));
+}
 #endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 1528b156920c..5c00826a98b3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -118,6 +118,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+#ifndef vmemmap_pmd_mkhuge
+#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	return pmd_mkhuge(mk_pmd(page, PAGE_KERNEL));
+}
+#endif
+
 static bool hugetlb_free_vmemmap_disabled __initdata;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
@@ -386,6 +394,104 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
+static void __replace_huge_page_pte_vmemmap(pte_t *ptep, unsigned long start,
+					    unsigned int nr, struct page *huge,
+					    struct list_head *free_pages)
+{
+	unsigned long addr;
+	unsigned long end = start + (nr << PAGE_SHIFT);
+	pgprot_t pgprot = PAGE_KERNEL;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+		pte_t entry;
+
+		prepare_vmemmap_page(huge);
+
+		entry = mk_pte(huge++, pgprot);
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void replace_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					  struct page *huge,
+					  struct list_head *free_pages)
+{
+	unsigned long end = start + VMEMMAP_HPAGE_SIZE;
+
+	flush_cache_vunmap(start, end);
+	__replace_huge_page_pte_vmemmap(pte_offset_kernel(pmd, start), start,
+					VMEMMAP_HPAGE_NR, huge, free_pages);
+	flush_tlb_kernel_range(start, end);
+}
+
+static pte_t *merge_vmemmap_pte(pmd_t *pmdp, unsigned long addr)
+{
+	pte_t *pte;
+	struct page *page;
+
+	pte = pte_offset_kernel(pmdp, addr);
+	page = pte_page(*pte);
+	set_pmd(pmdp, vmemmap_pmd_mkhuge(page));
+
+	return pte;
+}
+
+static void merge_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					struct page *huge,
+					struct list_head *free_pages)
+{
+	replace_huge_page_pmd_vmemmap(pmd, start, huge, free_pages);
+	pte_free_kernel(&init_mm, merge_vmemmap_pte(pmd, start));
+	flush_tlb_kernel_range(start, start + VMEMMAP_HPAGE_SIZE);
+}
+
+static inline void dissolve_compound_page(struct page *page, unsigned int order)
+{
+	int i;
+	unsigned int nr_pages = 1 << order;
+
+	for (i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 1);
+}
+
+static void merge_gigantic_page_vmemmap(struct hstate *h, struct page *head,
+					pmd_t *pmd)
+{
+	LIST_HEAD(free_pages);
+	unsigned long addr = (unsigned long)head;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+
+	for (; addr < end; addr += VMEMMAP_HPAGE_SIZE) {
+		void *to;
+		struct page *page;
+
+		page = alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL,
+				   VMEMMAP_HPAGE_ORDER);
+		if (!page)
+			goto out;
+
+		dissolve_compound_page(page, VMEMMAP_HPAGE_ORDER);
+		to = page_to_virt(page);
+		memcpy(to, (void *)addr, VMEMMAP_HPAGE_SIZE);
+
+		/*
+		 * Make sure that any data written to @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, VMEMMAP_HPAGE_SIZE);
+
+		merge_huge_page_pmd_vmemmap(pmd++, addr, page, &free_pages);
+	}
+out:
+	free_vmemmap_page_list(&free_pages);
+}
+
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -418,10 +524,18 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 				  __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
-		 * Todo:
-		 * Merge pte to huge pmd if it has ever been split.
+		 * Merge the PTEs back into a huge PMD if it has ever been
+		 * split. For now, only gigantic pages whose vmemmap size is
+		 * an integer multiple of PMD_SIZE are supported; this is
+		 * the simplest case to handle.
 		 */
 		clear_pmd_split(pmd);
+
+		if (IS_ALIGNED(vmemmap_pages_per_hpage(h), VMEMMAP_HPAGE_NR)) {
+			spin_unlock(ptl);
+			merge_gigantic_page_vmemmap(h, head, pmd);
+			return;
+		}
 	}
 	spin_unlock(ptl);
 }
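To make the IS_ALIGNED() condition above concrete, assume 4 KB base pages, a
64-byte struct page (the common x86_64 layout), and VMEMMAP_HPAGE_NR =
PMD_SIZE / PAGE_SIZE = 512:

    1 GB gigantic page: 262144 struct pages * 64 B = 16 MB of vmemmap
                        = 4096 base pages, and 4096 % 512 == 0  -> mergeable
    2 MB huge page:     512 struct pages * 64 B = 32 KB of vmemmap
                        = 8 base pages, and 8 % 512 != 0        -> not supported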
From patchwork Fri Nov 13 10:59:50 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11903023
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 19/21] mm/hugetlb: Gather discrete indexes of tail page
Date: Fri, 13 Nov 2020 18:59:50 +0800
Message-Id: <20201113105952.11638-20-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

A HugeTLB page needs to store more metadata than fits in its head
struct page, so some tail struct pages are pressed into service to hold
it. To avoid conflicts as more tail struct pages are used later, gather
these discrete tail-page indexes into a single enum; this also makes it
easier to add a new tail-page index in the future.
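Concretely, the enum introduced in the diff below assigns the following slots
(indexes shown assuming CONFIG_CGROUP_HUGETLB is enabled, which matches the
`head + 4` hwpoison offset used earlier in the series):

    page[1] (SUBPAGE_INDEX_ACTIVE)      reuses PG_private for the "active" flag
    page[2] (SUBPAGE_INDEX_TEMPORARY)   reuses ->mapping for the "temporary" marker
    page[2] (SUBPAGE_INDEX_CGROUP)      reuses ->private for the fault usage cgroup
    page[3] (SUBPAGE_INDEX_CGROUP_RSVD) reuses ->private for the reservation usage cgroup
    page[4] (SUBPAGE_INDEX_HWPOISON)    reuses ->private for the poisoned subpage offset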
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index da18fc9ed152..fa9d38a3ac6f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER 2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 
 	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9aad0b63d369..dfa982f4b525 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1429,20 +1429,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1454,17 +1454,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b09fd658ce20..86d80c7f1dc7 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -26,7 +26,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -41,7 +41,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
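With the indexes centralized, claiming another tail struct page later is a
one-line enum change. A condensed, hypothetical sketch (the cgroup and
hwpoison entries are omitted, and SUBPAGE_INDEX_EXAMPLE with its accessor is
a made-up consumer, not part of this series):

enum {
	SUBPAGE_INDEX_ACTIVE = 1,	/* reuses PG_private of page[1] */
	SUBPAGE_INDEX_TEMPORARY,	/* reuses page[2]->mapping */
	SUBPAGE_INDEX_EXAMPLE,		/* hypothetical new consumer */
	NR_USED_SUBPAGE,
};

static inline void set_example_data(struct page *head, unsigned long val)
{
	/* Each consumer stashes its state in its own tail struct page. */
	set_page_private(head + SUBPAGE_INDEX_EXAMPLE, val);
}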
From patchwork Fri Nov 13 10:59:51 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11903025
From: Muchun Song <songmuchun@bytedance.com>
duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [PATCH v4 20/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Date: Fri, 13 Nov 2020 18:59:51 +0800 Message-Id: <20201113105952.11638-21-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com> References: <20201113105952.11638-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: There are only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` struct pages can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so add a BUILD_BUG_ON to catch this invalid usage of tail struct page. Signed-off-by: Muchun Song --- mm/hugetlb_vmemmap.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 5c00826a98b3..f67aec6e3bb1 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -717,6 +717,9 @@ static int __init vmemmap_ptlock_init(void) { int nid; + BUILD_BUG_ON(NR_USED_SUBPAGE >= + RESERVE_VMEMMAP_SIZE / sizeof(struct page)); + if (!hugepages_supported()) return 0; From patchwork Fri Nov 13 10:59:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11903027 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0BD0C138B for ; Fri, 13 Nov 2020 11:05:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BEA6B207DE for ; Fri, 13 Nov 2020 11:04:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="AS+sztqa" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BEA6B207DE Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E96686B00C5; Fri, 13 Nov 2020 06:04:58 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id E6BB66B00D2; Fri, 13 Nov 2020 06:04:58 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CBE8C6B00D3; Fri, 13 Nov 2020 06:04:58 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0146.hostedemail.com [216.40.44.146]) by kanga.kvack.org (Postfix) with ESMTP id 9D61F6B00C5 for ; Fri, 13 Nov 2020 06:04:58 -0500 (EST) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 4877C180AD801 for ; Fri, 13 Nov 2020 11:04:58 +0000 (UTC) X-FDA: 77479112676.14.snake69_3415aa92730e Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin14.hostedemail.com (Postfix) with ESMTP id 1F3ED1822987B for ; Fri, 13 Nov 2020 11:04:58 +0000 (UTC) X-Spam-Summary: 
From patchwork Fri Nov 13 10:59:52 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11903027
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
Date: Fri, 13 Nov 2020 18:59:52 +0800
Message-Id: <20201113105952.11638-22-songmuchun@bytedance.com>
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

We can only free the unused vmemmap pages to the buddy system when the
size of struct page is a power of two.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f67aec6e3bb1..a0a5df9dba6b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -635,7 +635,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
-	if (hugetlb_free_vmemmap_disabled) {
+	if (hugetlb_free_vmemmap_disabled ||
+	    !is_power_of_2(sizeof(struct page))) {
 		h->nr_free_vmemmap_pages = 0;
 		pr_info("disable free vmemmap pages for %s\n", h->name);
 		return;
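Why the power-of-two requirement matters, in rough terms: the feature frees
whole base pages worth of struct pages, which only works if struct pages pack
evenly into those pages. A compile-time sketch of the same check (illustrative
only, assuming 4 KB base pages; is_power_of_2() comes from <linux/log2.h>):

#include <linux/log2.h>

/*
 * sizeof(struct page) is 64 on typical x86_64 configs, so 4096 / 64
 * struct pages fill a base page exactly and tail struct pages can be
 * freed in whole-page units. With, say, a 56-byte struct page, entries
 * would straddle page boundaries and no whole page could be returned.
 */
static inline bool vmemmap_can_free_struct_pages(void)
{
	return is_power_of_2(sizeof(struct page));
}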