From patchwork Sun Nov 8 14:10:53 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11889587
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
Date: Sun, 8 Nov 2020 22:10:53 +0800
Message-Id: <20201108141113.65450-2-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into a separate file,
bootmem_info.c, for use by later patches. This is just code movement
without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++++
 include/linux/memory_hotplug.h | 23 ----------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +------------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+        MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+        SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+        MIX_SECTION_INFO,
+        NODE_INFO,
+        MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 51a877fec8da..19e5d067294c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
         ___page;        \
 })

-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-        MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-        SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-        MIX_SECTION_INFO,
-        NODE_INFO,
-        MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
         /* Offline the memory. */
@@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
                              unsigned long type);
@@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func)
         return -ENOSYS;
 }

-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
         return 0;
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..752111587c99 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include
+#include
+#include
+#include
+#include
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+        unsigned long mapsize, section_nr, i;
+        struct mem_section *ms;
+        struct page *page, *memmap;
+        struct mem_section_usage *usage;
+
+        section_nr = pfn_to_section_nr(start_pfn);
+        ms = __nr_to_section(section_nr);
+
+        /* Get section's memmap address */
+        memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+        /*
+         * Get page for the memmap's phys address
+         * XXX: need more consideration for sparse_vmemmap...
+         */
+        page = virt_to_page(memmap);
+        mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+        mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+        /* remember memmap's page */
+        for (i = 0; i < mapsize; i++, page++)
+                get_page_bootmem(section_nr, page, SECTION_INFO);
+
+        usage = ms->usage;
+        page = virt_to_page(usage);
+
+        mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+        for (i = 0; i < mapsize; i++, page++)
+                get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+        unsigned long mapsize, section_nr, i;
+        struct mem_section *ms;
+        struct page *page, *memmap;
+        struct mem_section_usage *usage;
+
+        section_nr = pfn_to_section_nr(start_pfn);
+        ms = __nr_to_section(section_nr);
+
+        memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+        register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+        usage = ms->usage;
+        page = virt_to_page(usage);
+
+        mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+        for (i = 0; i < mapsize; i++, page++)
+                get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+        unsigned long i, pfn, end_pfn, nr_pages;
+        int node = pgdat->node_id;
+        struct page *page;
+
+        nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+        page = virt_to_page(pgdat);
+
+        for (i = 0; i < nr_pages; i++, page++)
+                get_page_bootmem(node, page, NODE_INFO);
+
+        pfn = pgdat->node_start_pfn;
+        end_pfn = pgdat_end_pfn(pgdat);
+
+        /* register section info */
+        for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+                /*
+                 * Some platforms can assign the same pfn to multiple nodes - on
+                 * node0 as well as nodeN. To avoid registering a pfn against
+                 * multiple nodes we check that this pfn does not already
+                 * reside in some other nodes.
+                 */
+                if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+                        register_page_bootmem_info_section(pfn);
+        }
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index baded53b9ff9..2da4ad071456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
         }
 }

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-        unsigned long mapsize, section_nr, i;
-        struct mem_section *ms;
-        struct page *page, *memmap;
-        struct mem_section_usage *usage;
-
-        section_nr = pfn_to_section_nr(start_pfn);
-        ms = __nr_to_section(section_nr);
-
-        /* Get section's memmap address */
-        memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-        /*
-         * Get page for the memmap's phys address
-         * XXX: need more consideration for sparse_vmemmap...
-         */
-        page = virt_to_page(memmap);
-        mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-        mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-        /* remember memmap's page */
-        for (i = 0; i < mapsize; i++, page++)
-                get_page_bootmem(section_nr, page, SECTION_INFO);
-
-        usage = ms->usage;
-        page = virt_to_page(usage);
-
-        mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-        for (i = 0; i < mapsize; i++, page++)
-                get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-        unsigned long mapsize, section_nr, i;
-        struct mem_section *ms;
-        struct page *page, *memmap;
-        struct mem_section_usage *usage;
-
-        section_nr = pfn_to_section_nr(start_pfn);
-        ms = __nr_to_section(section_nr);
-
-        memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-        register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-        usage = ms->usage;
-        page = virt_to_page(usage);
-
-        mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-        for (i = 0; i < mapsize; i++, page++)
-                get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-        unsigned long i, pfn, end_pfn, nr_pages;
-        int node = pgdat->node_id;
-        struct page *page;
-
-        nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-        page = virt_to_page(pgdat);
-
-        for (i = 0; i < nr_pages; i++, page++)
-                get_page_bootmem(node, page, NODE_INFO);
-
-        pfn = pgdat->node_start_pfn;
-        end_pfn = pgdat_end_pfn(pgdat);
-
-        /* register section info */
-        for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-                /*
-                 * Some platforms can assign the same pfn to multiple nodes - on
-                 * node0 as well as nodeN. To avoid registering a pfn against
-                 * multiple nodes we check that this pfn does not already
-                 * reside in some other nodes.
-                 */
-                if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-                        register_page_bootmem_info_section(pfn);
-        }
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
                           const char *reason)
 {

From patchwork Sun Nov 8 14:10:54 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11889591
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
Date: Sun, 8 Nov 2020 22:10:54 +0800
Message-Id: <20201108141113.65450-3-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

A later patch will use {get,put}_page_bootmem() to initialize the pages
backing the vmemmap, or to free vmemmap pages back to the buddy
allocator, so move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is
just code movement without any functional change.
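The reference-counting contract of {get,put}_page_bootmem() being moved here can be sketched with a self-contained userspace mock. Everything below is a hypothetical stand-in: `struct toy_page` mirrors only the `struct page` fields the helpers touch, and the sketch frees when the count drops to zero, whereas the real put_page_bootmem() frees when page_ref_dec_return() reaches 1 because boot-reserved pages carry a base reference.

```c
#include <assert.h>

/* Same "random" bootmem type range the series keeps in bootmem_info.h. */
enum {
        MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
        SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
        MIX_SECTION_INFO,
        NODE_INFO,
        MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
};

/* Hypothetical stand-in for struct page, with only the fields used here. */
struct toy_page {
        unsigned long type;   /* the kernel stores this in page->freelist */
        unsigned long info;   /* the kernel stores this via set_page_private() */
        int refcount;
        int freed;            /* set when the page would go back to the buddy system */
};

/* Mirrors get_page_bootmem(): tag the page and take a reference. */
static void toy_get_page_bootmem(unsigned long info, struct toy_page *page,
                                 unsigned long type)
{
        page->type = type;
        page->info = info;
        page->refcount++;
}

/* Mirrors put_page_bootmem(): drop a reference; free on the last put. */
static void toy_put_page_bootmem(struct toy_page *page)
{
        /* The kernel's BUG_ON() range check becomes a plain assert here. */
        assert(page->type >= MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE &&
               page->type <= MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);

        if (--page->refcount == 0) {
                page->type = 0;
                page->info = 0;
                page->freed = 1;   /* free_reserved_page() in the kernel */
        }
}
```

The type range check is the reason the enum starts at the "random" value 12: a put on a page that was never tagged almost certainly carries a value outside [12, NODE_INFO] and trips the check.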
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 26 ++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 41 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
         return err;
 }

-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
                                   struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {

 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+                      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+                                    unsigned long type)
+{
+}
 #endif

 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 19e5d067294c..c9f3361fe84b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-                             unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..d276e96e487f 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,32 @@
 #include
 #include

+void get_page_bootmem(unsigned long info, struct page *page,
+                      unsigned long type)
+{
+        page->freelist = (void *)type;
+        SetPagePrivate(page);
+        set_page_private(page, info);
+        page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+        unsigned long type;
+
+        type = (unsigned long) page->freelist;
+        BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+               type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+        if (page_ref_dec_return(page) == 1) {
+                page->freelist = NULL;
+                ClearPagePrivate(page);
+                set_page_private(page, 0);
+                INIT_LIST_HEAD(&page->lru);
+                free_reserved_page(page);
+        }
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2da4ad071456..ae57eedc341f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }

 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-                      unsigned long type)
-{
-        page->freelist = (void *)type;
-        SetPagePrivate(page);
-        set_page_private(page, info);
-        page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-        unsigned long type;
-
-        type = (unsigned long) page->freelist;
-        BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-               type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-        if (page_ref_dec_return(page) == 1) {
-                page->freelist = NULL;
-                ClearPagePrivate(page);
-                set_page_private(page, 0);
-                INIT_LIST_HEAD(&page->lru);
-                free_reserved_page(page);
-        }
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
                           const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index b25ad8e64839..a4138410d890 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 #include "internal.h"
 #include

From patchwork Sun Nov 8 14:10:55 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11889595
09:12:15 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id AD3A3362A for ; Sun, 8 Nov 2020 14:12:14 +0000 (UTC) X-FDA: 77461440588.05.wren11_5a14a85272e4 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin05.hostedemail.com (Postfix) with ESMTP id 804D318016125 for ; Sun, 8 Nov 2020 14:12:14 +0000 (UTC) X-Spam-Summary: 10,1,0,88ef75af582c5fb9,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:404:541:800:960:965:966:973:981:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1542:1711:1730:1747:1777:1792:2194:2196:2199:2200:2393:2559:2562:3138:3139:3140:3141:3142:3353:3865:3866:3867:3870:3871:3872:3874:4250:4321:4385:4390:4395:4605:5007:6261:6653:6737:6738:7904:8957:10004:11026:11473:11658:11914:12043:12048:12114:12296:12297:12438:12517:12519:12555:12895:12986:13161:13229:13894:13972:14093:14096:14181:14394:14721:21080:21094:21323:21444:21451:21627:30054:30075,0,RBL:209.85.214.194:@bytedance.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100;04y8oyns4u54tfume3fehbccw51q5yp6trgpyhbg7q5bee7czi8mdzcqp63dmkq.oni8u141yxu5sas66fpyffw6zwto3pi4n3q3e849oy5793sboh36z6p4fwrnyhi.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:68,LUA_SUMMARY:none X-HE-Tag: wren11_5a14a85272e4 X-Filterd-Recvd-Size: 5825 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) by imf10.hostedemail.com (Postfix) with ESMTP for ; Sun, 8 Nov 2020 14:12:14 +0000 (UTC) Received: by mail-pl1-f194.google.com with SMTP id t22so3254435plr.9 for ; Sun, 08 Nov 2020 06:12:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
bh=7V5l6iVptpyNChOEqgaC9UYBunmQshAPJJMGH6oVOO8=; b=o0HEIRih4qIk8FKiaCrU/8jPPZ5TYT6PQWg9ap++vRxQ08ObrhpaevX4214Ob2VCY2 bHwFHgK7NR9A1or3FsJjDVthim8iB3WkhekAJ3nEByhI9pBN6luYwVbGAUs/2G+uay5s gooJjeKFBKWzlkf2GamyV+vc4R6VeS43+OqoTKNV73tC1vCFejqm9W4GiKyrG5+yHeM3 xd7pwh2QL62pQA13zHEZhVG27MtZ23E0H1cdnwbg4gAxNqun3pYYuoLtce5HF2bN6yCH HE4blj+ArUxxMH33jBUNT0HzpGNcmth0I2XO7RFjfCMZqNitjz16k+F4CXTTflMWN6uF mbGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=7V5l6iVptpyNChOEqgaC9UYBunmQshAPJJMGH6oVOO8=; b=Vqk7GhwXzgXMmRb3kwzOna6b5MODM31GZFSFyhVX/fczeFUgQZbHjxmNuSeGjFyX1k M874JZYEYTyvQ8j07cZbTRGyuenMcYoBQ4pB3Er4kryj545Q1o4dH2Lf75XlXvoh/iGi WJELl7syAToaCLxfhtVexbCH1gNd2hZ2+Yyxq2fx04bOlzBNZWNh4ZSSDi4UopkBTMZV vUaTJ33XlMr8awlnTZ+CzNJQacFrf2wiyZqY4kHUes58M0rvNErqTuT/amOPppIp+Wtr 9Z1rvR7WROfqUPhKIh8GxIF6yrYgXSr+U3wBOe4goHqHJ7JRHA8pWptckGQl2ewEzkyH 9h1A== X-Gm-Message-State: AOAM532MJdkVkBnsi9G9/USK0yr1bEB8iklhh59Z1cE48mOnWY1BHXdm FTha20vddhZgCdGC4jDYvfPVow== X-Google-Smtp-Source: ABdhPJwKM/CCRKYOI1UvT8sA6mZlrIw7idUpNoiTVHntW3fTsKokmQ5P/+hpT6dCP1V9Wu+5WLqwkg== X-Received: by 2002:a17:902:7049:b029:d7:e413:8aba with SMTP id h9-20020a1709027049b02900d7e4138abamr284690plt.30.1604844731474; Sun, 08 Nov 2020 06:12:11 -0800 (PST) Received: from localhost.localdomain ([103.136.220.94]) by smtp.gmail.com with ESMTPSA id z11sm8754047pfk.52.2020.11.08.06.12.01 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 08 Nov 2020 06:12:10 -0800 (PST) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, 
oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Sun, 8 Nov 2020 22:10:55 +0800
Message-Id: <20201108141113.65450-4-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Introduce the config option HUGETLB_PAGE_FREE_VMEMMAP, which controls whether the feature of freeing unused vmemmap pages associated with HugeTLB pages is enabled. For now, only x86 is supported.
Signed-off-by: Muchun Song --- arch/x86/mm/init_64.c | 2 +- fs/Kconfig | 16 ++++++++++++++++ mm/bootmem_info.c | 3 +-- 3 files changed, 18 insertions(+), 3 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0a45f062826e..0435bee2e172 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall; static void __init register_page_bootmem_info(void) { -#ifdef CONFIG_NUMA +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) int i; for_each_online_node(i) diff --git a/fs/Kconfig b/fs/Kconfig index 976e8b9033c4..21b8d39a9715 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -245,6 +245,22 @@ config HUGETLBFS config HUGETLB_PAGE def_bool HUGETLBFS +config HUGETLB_PAGE_FREE_VMEMMAP + bool "Free unused vmemmap associated with HugeTLB pages" + default y + depends on X86 + depends on HUGETLB_PAGE + depends on SPARSEMEM_VMEMMAP + depends on HAVE_BOOTMEM_INFO_NODE + help + Each HugeTLB page is associated with many struct page structures, + but only a few of them are actually used; the rest waste memory. + Freeing the unused struct page structures back to the buddy system + saves that memory. For architectures that support it, say Y here. + + If unsure, say N. 
+ config MEMFD_CREATE def_bool TMPFS || HUGETLBFS diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c index d276e96e487f..fcab5a3f8cc0 100644 --- a/mm/bootmem_info.c +++ b/mm/bootmem_info.c @@ -10,8 +10,7 @@ #include #include -void get_page_bootmem(unsigned long info, struct page *page, - unsigned long type) +void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { page->freelist = (void *)type; SetPagePrivate(page);
From patchwork Sun Nov 8 14:10:56 2020
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Sun, 8 Nov 2020 22:10:56 +0800
Message-Id: <20201108141113.65450-5-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

If the size of a HugeTLB page is 2MB, 512 struct page structures (8 pages) are associated with it, but only the first 4 struct page structures are actually used; this lower bound comes from HUGETLB_CGROUP_MIN_ORDER. For all tail pages, the value of compound_head is identical, so the first page of tail page structs can be reused: the virtual addresses of the remaining 6 pages of tail page structs are remapped to the first tail page struct, and those 6 pages are then freed. At least 2 pages must therefore be reserved as vmemmap areas. Introduce a new field, nr_free_vmemmap_pages, in the hstate to record how many vmemmap pages associated with a HugeTLB page can be freed to the buddy system.
Signed-off-by: Muchun Song Acked-by: Mike Kravetz --- include/linux/hugetlb.h | 3 +++ mm/hugetlb.c | 38 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index d5cc5f802dd4..eed3dd3bd626 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -492,6 +492,9 @@ struct hstate { unsigned int nr_huge_pages_node[MAX_NUMNODES]; unsigned int free_huge_pages_node[MAX_NUMNODES]; unsigned int surplus_huge_pages_node[MAX_NUMNODES]; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + unsigned int nr_free_vmemmap_pages; +#endif #ifdef CONFIG_CGROUP_HUGETLB /* cgroup control files */ struct cftype cgroup_files_dfl[7]; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 81a41aa080a5..a0007902fafb 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1292,6 +1292,42 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +/* + * There are 512 struct page structs (8 pages) associated with each 2MB + * hugetlb page. For tail pages, the value of compound_head is the same. + * So we can reuse the first page of tail page structs. We map the virtual + * addresses of the remaining 6 pages of tail page structs to the first + * tail page struct, and then free these 6 pages. Therefore, we need to + * reserve at least 2 pages as vmemmap areas. + */ +#define RESERVE_VMEMMAP_NR 2U + +static void __init hugetlb_vmemmap_init(struct hstate *h) +{ + unsigned int order = huge_page_order(h); + unsigned int vmemmap_pages; + + vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT; + /* + * The head page and the first tail page are not freed to the buddy + * system; the other pages are remapped to the first tail page. So + * (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed. 
+ */ + if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR)) + h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR; + else + h->nr_free_vmemmap_pages = 0; + + pr_debug("HugeTLB: can free %u vmemmap pages for %s\n", + h->nr_free_vmemmap_pages, h->name); +} +#else +static inline void hugetlb_vmemmap_init(struct hstate *h) +{ +} +#endif + static void update_and_free_page(struct hstate *h, struct page *page) { int i; @@ -3285,6 +3321,8 @@ void __init hugetlb_add_hstate(unsigned int order) snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/1024); + hugetlb_vmemmap_init(h); + parsed_hstate = h; }
From patchwork Sun Nov 8 14:10:57 2020
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
Date: Sun, 8 Nov 2020 22:10:57 +0800
Message-Id: <20201108141113.65450-6-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

On x86_64, the vmemmap is always PMD-mapped when the machine supports hugepages and the backing pages are 2MB contiguous and PMD-aligned. To free the unused vmemmap pages, the huge PMD must first be split, so page tables are pre-allocated for splitting the PMD into PTEs.
Signed-off-by: Muchun Song --- include/linux/hugetlb.h | 10 +++++ mm/hugetlb.c | 111 ++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 121 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index eed3dd3bd626..d81c262418db 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -593,6 +593,16 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h) #include +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +#ifndef VMEMMAP_HPAGE_SHIFT +#define VMEMMAP_HPAGE_SHIFT HPAGE_SHIFT +#endif +#define VMEMMAP_HPAGE_ORDER (VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT) +#define VMEMMAP_HPAGE_NR (1 << VMEMMAP_HPAGE_ORDER) +#define VMEMMAP_HPAGE_SIZE ((1UL) << VMEMMAP_HPAGE_SHIFT) +#define VMEMMAP_HPAGE_MASK (~(VMEMMAP_HPAGE_SIZE - 1)) +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ + #ifndef is_hugepage_only_range static inline int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr, unsigned long len) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a0007902fafb..5c7be2ee7e15 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1303,6 +1303,108 @@ static inline void destroy_compound_gigantic_page(struct page *page, */ #define RESERVE_VMEMMAP_NR 2U +#define page_huge_pte(page) ((page)->pmd_huge_pte) + +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return h->nr_free_vmemmap_pages; +} + +static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h) +{ + return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR; +} + +static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h) +{ + return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT; +} + +static inline unsigned int pgtable_pages_to_prealloc_per_hpage(struct hstate *h) +{ + unsigned long vmemmap_size = vmemmap_pages_size_per_hpage(h); + + /* + * No need to pre-allocate page tables when there are no vmemmap pages + * to free. 
+ */ + if (!free_vmemmap_pages_per_hpage(h)) + return 0; + + return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT; +} + +static inline void vmemmap_pgtable_init(struct page *page) +{ + page_huge_pte(page) = NULL; +} + +static void vmemmap_pgtable_deposit(struct page *page, pgtable_t pgtable) +{ + /* FIFO */ + if (!page_huge_pte(page)) + INIT_LIST_HEAD(&pgtable->lru); + else + list_add(&pgtable->lru, &page_huge_pte(page)->lru); + page_huge_pte(page) = pgtable; +} + +static pgtable_t vmemmap_pgtable_withdraw(struct page *page) +{ + pgtable_t pgtable; + + /* FIFO */ + pgtable = page_huge_pte(page); + page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru, + struct page, lru); + if (page_huge_pte(page)) + list_del(&pgtable->lru); + return pgtable; +} + +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page) +{ + int i; + pgtable_t pgtable; + unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h); + + if (!nr) + return 0; + + vmemmap_pgtable_init(page); + + for (i = 0; i < nr; i++) { + pte_t *pte_p; + + pte_p = pte_alloc_one_kernel(&init_mm); + if (!pte_p) + goto out; + vmemmap_pgtable_deposit(page, virt_to_page(pte_p)); + } + + return 0; +out: + while (i-- && (pgtable = vmemmap_pgtable_withdraw(page))) + pte_free_kernel(&init_mm, page_to_virt(pgtable)); + return -ENOMEM; +} + +static void vmemmap_pgtable_free(struct hstate *h, struct page *page) +{ + pgtable_t pgtable; + unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h); + + if (!nr) + return; + + pgtable = page_huge_pte(page); + if (!pgtable) + return; + + while (nr-- && (pgtable = vmemmap_pgtable_withdraw(page))) + pte_free_kernel(&init_mm, page_to_virt(pgtable)); +} + static void __init hugetlb_vmemmap_init(struct hstate *h) { unsigned int order = huge_page_order(h); @@ -1326,6 +1428,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h) static inline void hugetlb_vmemmap_init(struct hstate *h) { } + +static inline int vmemmap_pgtable_prealloc(struct hstate 
*h, struct page *page) +{ + return 0; +} + +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) +{ +} #endif static void update_and_free_page(struct hstate *h, struct page *page)
From patchwork Sun Nov 8 14:10:58 2020
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 06/21] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
Date: Sun, 8 Nov 2020 22:10:58 +0800
Message-Id: <20201108141113.65450-7-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

A later patch will use free_vmemmap_page() to free the unused vmemmap pages, and prepare_vmemmap_page() to initialize a page so it can serve as a vmemmap page. Signed-off-by: Muchun Song --- include/linux/bootmem_info.h | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..ce9d8c97369d 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -3,6 +3,7 @@ #define __LINUX_BOOTMEM_INFO_H #include +#include /* * Types for free bootmem stored in page->lru.next. 
These have to be in @@ -22,6 +23,30 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +static inline void free_vmemmap_page(struct page *page) +{ + VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2); + + /* bootmem page has reserved flag in the reserve_bootmem_region */ + if (PageReserved(page)) { + unsigned long magic = (unsigned long)page->freelist; + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + WARN_ON(1); + } +} + +static inline void prepare_vmemmap_page(struct page *page) +{ + unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page)); + + get_page_bootmem(section_nr, page, SECTION_INFO); + __SetPageReserved(page); + adjust_managed_page_count(page, -1); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) {
From: Muchun Song
Subject: [PATCH v3 07/21] mm/bootmem_info: Combine bootmem info and type into page->freelist
Date: Sun, 8 Nov 2020 22:10:59 +0800
Message-Id: <20201108141113.65450-8-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

page->private shares storage with page->ptl, and a later patch will start
using page->ptl. So combine the bootmem info and the type into
page->freelist so that page->private is no longer needed.
Signed-off-by: Muchun Song
---
 include/linux/bootmem_info.h | 18 ++++++++++++++++--
 mm/bootmem_info.c            | 12 ++++++------
 mm/sparse.c                  |  4 ++--
 3 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index ce9d8c97369d..b5786a8b412e 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -6,7 +6,7 @@
 #include
 
 /*
- * Types for free bootmem stored in page->lru.next. These have to be in
+ * Types for free bootmem stored in page->freelist. These have to be in
  * some random range in unsigned long space for debugging purposes.
  */
 enum {
@@ -17,6 +17,20 @@ enum {
 	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
 };
 
+#define BOOTMEM_TYPE_BITS	4
+#define BOOTMEM_TYPE_MAX	((1UL << BOOTMEM_TYPE_BITS) - 1)
+#define BOOTMEM_INFO_MAX	(ULONG_MAX >> BOOTMEM_TYPE_BITS)
+
+static inline unsigned long page_bootmem_type(struct page *page)
+{
+	return (unsigned long)page->freelist & BOOTMEM_TYPE_MAX;
+}
+
+static inline unsigned long page_bootmem_info(struct page *page)
+{
+	return (unsigned long)page->freelist >> BOOTMEM_TYPE_BITS;
+}
+
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 
@@ -30,7 +44,7 @@ static inline void free_vmemmap_page(struct page *page)
 
 	/* bootmem page has reserved flag in the reserve_bootmem_region */
 	if (PageReserved(page)) {
-		unsigned long magic = (unsigned long)page->freelist;
+		unsigned long magic = page_bootmem_type(page);
 
 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
 			put_page_bootmem(page);
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index fcab5a3f8cc0..9baf163965fd 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -12,9 +12,9 @@
 void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
 {
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
+	BUG_ON(info > BOOTMEM_INFO_MAX);
+	BUG_ON(type > BOOTMEM_TYPE_MAX);
+	page->freelist = (void *)((info << BOOTMEM_TYPE_BITS) | type);
 	page_ref_inc(page);
 }
 
@@ -22,14 +22,12 @@ void put_page_bootmem(struct page *page)
 {
 	unsigned long type;
 
-	type = (unsigned long) page->freelist;
+	type = page_bootmem_type(page);
 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
 	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
 
 	if (page_ref_dec_return(page) == 1) {
 		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);
 		free_reserved_page(page);
 	}
@@ -101,6 +99,8 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 	int node = pgdat->node_id;
 	struct page *page;
 
+	BUILD_BUG_ON(MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE > BOOTMEM_TYPE_MAX);
+
 	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
 	page = virt_to_page(pgdat);
 
diff --git a/mm/sparse.c b/mm/sparse.c
index a4138410d890..fca5fa38c2bc 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -740,12 +740,12 @@ static void free_map_bootmem(struct page *memmap)
 		>> PAGE_SHIFT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = (unsigned long) page->freelist;
+		magic = page_bootmem_type(page);
 
 		BUG_ON(magic == NODE_INFO);
 
 		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = page_private(page);
+		removing_section_nr = page_bootmem_info(page);
 
 		/*
 		 * When this function is called, the removing section is

From patchwork Sun Nov 8 14:11:00 2020
failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="NVXNzHd6" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AA02A208B6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E082B6B005D; Sun, 8 Nov 2020 09:13:03 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id DB57C6B0072; Sun, 8 Nov 2020 09:13:03 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C56586B0073; Sun, 8 Nov 2020 09:13:03 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0059.hostedemail.com [216.40.44.59]) by kanga.kvack.org (Postfix) with ESMTP id 981796B005D for ; Sun, 8 Nov 2020 09:13:03 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 45A2F180AD802 for ; Sun, 8 Nov 2020 14:13:03 +0000 (UTC) X-FDA: 77461442646.22.money74_1f155dc272e4 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 1E84518038E67 for ; Sun, 8 Nov 2020 14:13:03 +0000 (UTC) X-Spam-Summary: 
1,0,0,cfff28adcdfa6a7e,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:2693:3138:3139:3140:3141:3142:3355:3865:3866:3867:3868:3870:3871:3872:4118:4321:4385:5007:6119:6261:6653:6737:6738:7903:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:13161:13229:13894:13972:14096:14181:14394:14721:21080:21222:21444:21451:21611:21627:21990:30012:30054,0,RBL:209.85.215.194:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04yf8f81tgshtt1o3qeuwh8beix6gyc7xshcgd6fpgxssno7c8zupa3yiu1j1qs.ieapzjjpoopb9d1sbkywxmtc644oogfqkz9ra9mdbmm437n83sxbtmmr4t5rtyc.s-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:68,LUA_SUMMARY:none X-HE-Tag: money74_1f155dc272e4 X-Filterd-Recvd-Size: 7246 Received: from mail-pg1-f194.google.com (mail-pg1-f194.google.com [209.85.215.194]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Sun, 8 Nov 2020 14:13:02 +0000 (UTC) Received: by mail-pg1-f194.google.com with SMTP id f27so1219978pgl.1 for ; Sun, 08 Nov 2020 06:13:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gJvwt6IYpSrNpCnp/nDtM/yb/962d0wM2I3fDXrMctE=; b=NVXNzHd63ZTvB8FSYDnHadOs+KBue+oFdqo7vfMUXTqNE8PQnK5L9eUWqwLW/e55sJ ZcnzJpzUGn1xd5HuBD2MyemUrRCo+xFzWgTVcv+sK05Bg/9w6yYMOQyt0SjgkAiRq9dh GpUmasIs9Us+iBPjlDX4YUMBylwVI1EYVgEdYE4NrrTjD0ay5aAmYrAv1tIq81NMSYqC bhJgDF8y5kWDpTmbOY/V2QwIGAwVS0PHHEWV1dTByf0tpvqn/k9NfkoEH2ZnAA6Bbk2y t9LGrhY4z1x2ghjn3oNu+bo/+k5lyhy7Did+wyTeR71fnpEUthynykNZnFNCNtW70Iuo FRuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
From: Muchun Song
Subject: [PATCH v3 08/21] mm/vmemmap: Initialize page table lock for vmemmap
Date: Sun, 8 Nov 2020 22:11:00 +0800
Message-Id: <20201108141113.65450-9-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

In register_page_bootmem_memmap(), the slab allocator is not ready yet,
so when ALLOC_SPLIT_PTLOCKS is true we use init_mm.page_table_lock;
otherwise we use the per-page-table lock (page->ptl). A later patch will
use the vmemmap page table lock to guard the splitting of the vmemmap
huge PMD.

Signed-off-by: Muchun Song
---
 arch/x86/mm/init_64.c |  2 ++
 include/linux/mm.h    | 45 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..a010101bbe24 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1610,6 +1610,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
+		vmemmap_ptlock_init(pud_page(*pud));
+
 		if (!boot_cpu_has(X86_FEATURE_PSE)) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a12354e63e49..ce429614d1ab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2169,6 +2169,26 @@ static inline bool ptlock_init(struct page *page)
 	return true;
 }
 
+#if ALLOC_SPLIT_PTLOCKS
+static inline void vmemmap_ptlock_init(struct page *page)
+{
+}
+#else
+static inline void vmemmap_ptlock_init(struct page *page)
+{
+	/*
+	 * prep_new_page() initialize page->private (and therefore page->ptl)
+	 * with 0. Make sure nobody took it in use in between.
+	 *
+	 * It can happen if arch try to use slab for page table allocation:
+	 * slab code uses page->{s_mem, counters}, which share storage with
+	 * page->ptl.
+	 */
+	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
+	spin_lock_init(ptlock_ptr(page));
+}
+#endif
+
 #else	/* !USE_SPLIT_PTE_PTLOCKS */
 /*
  * We use mm->page_table_lock to guard all pagetable pages of the mm.
@@ -2180,6 +2200,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 static inline void ptlock_cache_init(void) {}
 static inline bool ptlock_init(struct page *page) { return true; }
 static inline void ptlock_free(struct page *page) {}
+static inline void vmemmap_ptlock_init(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline void pgtable_init(void)
@@ -2244,6 +2265,18 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_to_page(pmd));
 }
 
+#if ALLOC_SPLIT_PTLOCKS
+static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
+{
+	return &init_mm.page_table_lock;
+}
+#else
+static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
+{
+	return ptlock_ptr(pmd_to_page(pmd));
+}
+#endif
+
 static inline bool pmd_ptlock_init(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -2269,6 +2302,11 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
+static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
+{
+	return &init_mm.page_table_lock;
+}
+
 static inline bool pmd_ptlock_init(struct page *page) { return true; }
 static inline void pmd_ptlock_free(struct page *page) {}
 
@@ -2283,6 +2321,13 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
+static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
+{
+	spinlock_t *ptl = vmemmap_pmd_lockptr(pmd);
+	spin_lock(ptl);
+	return ptl;
+}
+
 static inline bool pgtable_pmd_page_ctor(struct page *page)
 {
 	if (!pmd_ptlock_init(page))

From patchwork Sun Nov 8 14:11:01 2020
From: Muchun Song
Subject: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
Date: Sun, 8 Nov 2020 22:11:01 +0800
Message-Id: <20201108141113.65450-10-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

When we allocate a hugetlb page from the buddy allocator, we should free
the unused vmemmap pages associated with it. We can do that in
prep_new_huge_page().

Signed-off-by: Muchun Song
---
 arch/x86/include/asm/hugetlb.h          |   9 ++
 arch/x86/include/asm/pgtable_64_types.h |   8 ++
 include/linux/hugetlb.h                 |   8 ++
 include/linux/mm.h                      |   4 +
 mm/hugetlb.c                            | 166 ++++++++++++++++++++++++++++++++
 mm/sparse-vmemmap.c                     |  31 ++++++
 6 files changed, 226 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index 1721b1aadeb1..c601fe042832 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -4,6 +4,15 @@
 #include
 #include
+#include
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_large(*pmd);
+}
+#endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
 
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..bedbd2e7d06c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
 
+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ * a struct page array.
+ */
+#define VMEMMAP_SIZE	(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
+				 1 + ilog2(sizeof(struct page))))
+#define VMEMMAP_END	(VMEMMAP_START + VMEMMAP_SIZE)
+
 #define VMALLOC_END	(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
 
 #define MODULES_VADDR	(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d81c262418db..afb9b18771c4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -594,6 +594,14 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
 #include
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#ifndef vmemmap_pmd_huge
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_huge(*pmd);
+}
+#endif
+
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT	HPAGE_SHIFT
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ce429614d1ab..480faca94c23 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3025,6 +3025,10 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+pmd_t *vmemmap_to_pmd(const void *page);
+#endif
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5c7be2ee7e15..27f0269aab70 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1293,6 +1293,8 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 #endif
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#include
+
 /*
  * There are 512 struct page structs(8 pages) associated with each 2MB
  * hugetlb page. For tail pages, the value of compound_dtor is the same.
@@ -1305,6 +1307,13 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 
 #define page_huge_pte(page)	((page)->pmd_huge_pte)
 
+#define vmemmap_hpage_addr_end(addr, end)				\
+({									\
+	unsigned long __boundary;					\
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;\
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
+})
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -1424,6 +1433,147 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_debug("HugeTLB: can free %d vmemmap pages for %s\n",
 		 h->nr_free_vmemmap_pages, h->name);
 }
+
+static inline int freed_vmemmap_hpage(struct page *page)
+{
+	return atomic_read(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_inc(struct page *page)
+{
+	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_dec(struct page *page)
+{
+	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					 unsigned long start,
+					 unsigned int nr_free,
+					 struct list_head *free_pages)
+{
+	/* Make the tail pages are mapped read-only. */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+	unsigned long end = start + (nr_free << PAGE_SHIFT);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					 unsigned long addr,
+					 struct list_head *free_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		unsigned int nr_pages;
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[-1]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		nr_pages = (next - addr) >> PAGE_SHIFT;
+		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
+					     free_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
+{
+	int i;
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct mm_struct *mm = &init_mm;
+	struct page *page;
+	pmd_t old_pmd, _pmd;
+
+	old_pmd = READ_ONCE(*pmd);
+	page = pmd_page(old_pmd);
+	pmd_populate_kernel(mm, &_pmd, pte_p);
+
+	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
+		pte_t entry, *pte;
+
+		entry = mk_pte(page + i, pgprot);
+		pte = pte_offset_kernel(&_pmd, addr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, addr, pte, entry);
+	}
+
+	/* make pte visible before pmd */
+	smp_wmb();
+	pmd_populate_kernel(mm, pmd, pte_p);
+}
+
+static void split_vmemmap_huge_page(struct hstate *h, struct page *head,
+				    pmd_t *pmd)
+{
+	pgtable_t pgtable;
+	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
+	unsigned long addr = start;
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+
+	while (nr-- && (pgtable = vmemmap_pgtable_withdraw(head))) {
+		VM_BUG_ON(freed_vmemmap_hpage(pgtable));
+		split_vmemmap_pmd(pmd++, page_to_virt(pgtable), addr);
+		addr += VMEMMAP_HPAGE_SIZE;
+	}
+
+	flush_tlb_kernel_range(start, addr);
+}
+
+static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(free_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	pmd = vmemmap_to_pmd(head);
+	ptl = vmemmap_pmd_lock(pmd);
+	if (vmemmap_pmd_huge(pmd)) {
+		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));
+		split_vmemmap_huge_page(h, head, pmd);
+	}
+
+	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	freed_vmemmap_hpage_inc(pmd_page(*pmd));
+	spin_unlock(ptl);
+
+	free_vmemmap_page_list(&free_pages);
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -1437,6 +1587,10 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
@@ -1645,6 +1799,10 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+	/* Must be called before the initialization of @page->lru */
+	vmemmap_pgtable_free(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
@@ -1897,6 +2055,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 	if (!page)
 		return NULL;
 
+	if (vmemmap_pgtable_prealloc(h, page)) {
+		if (hstate_is_gigantic(h))
+			free_gigantic_page(page, huge_page_order(h));
+		else
+			put_page(page);
+		return NULL;
+	}
+
 	if (hstate_is_gigantic(h))
 		prep_compound_gigantic_page(page, huge_page_order(h));
prep_new_huge_page(h, page, page_to_nid(page)); diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 16183d85a7d5..4b35d1655a2f 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -263,3 +263,34 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn, return pfn_to_page(pfn); } + +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +/* + * Walk a vmemmap address to the pmd it maps. + */ +pmd_t *vmemmap_to_pmd(const void *page) +{ + unsigned long addr = (unsigned long)page; + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + if (addr < VMEMMAP_START || addr >= VMEMMAP_END) + return NULL; + + pgd = pgd_offset_k(addr); + if (pgd_none(*pgd)) + return NULL; + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + return NULL; + pud = pud_offset(p4d, addr); + + if (pud_none(*pud) || pud_bad(*pud)) + return NULL; + pmd = pmd_offset(pud, addr); + + return pmd; +} +#endif

From patchwork Sun Nov 8 14:11:02 2020
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 10/21] mm/hugetlb: Defer freeing of hugetlb pages
Date: Sun, 8 Nov 2020 22:11:02 +0800
Message-Id: <20201108141113.65450-11-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

In a subsequent patch, we will allocate the vmemmap pages when freeing huge pages. But update_and_free_page() can be called from a non-task context (while holding hugetlb_lock), so we defer the actual freeing to a workqueue to avoid allocating the vmemmap pages with GFP_ATOMIC.
Signed-off-by: Muchun Song --- mm/hugetlb.c | 101 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 89 insertions(+), 12 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 27f0269aab70..ded7f0fbde35 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1220,7 +1220,7 @@ static void destroy_compound_gigantic_page(struct page *page, __ClearPageHead(page); } -static void free_gigantic_page(struct page *page, unsigned int order) +static void __free_gigantic_page(struct page *page, unsigned int order) { /* * If the page isn't allocated using the cma allocator, @@ -1287,11 +1287,14 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, { return NULL; } -static inline void free_gigantic_page(struct page *page, unsigned int order) { } +static inline void __free_gigantic_page(struct page *page, + unsigned int order) { } static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif +static void __free_hugepage(struct hstate *h, struct page *page); + #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP #include @@ -1574,6 +1577,64 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) free_vmemmap_page_list(&free_pages); } + +/* + * As update_and_free_page() can be called from a non-task context (while + * holding hugetlb_lock), we defer the actual freeing to a workqueue to avoid + * using GFP_ATOMIC to allocate a lot of vmemmap pages. + * + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of + * pages to be freed and frees them one-by-one. As the page->mapping pointer + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is + * reused as the llist_node structure of a lockless linked list of huge + * pages to be freed.
+ */ +static LLIST_HEAD(hpage_update_freelist); + +static void update_hpage_vmemmap_workfn(struct work_struct *work) +{ + struct llist_node *node; + struct page *page; + + node = llist_del_all(&hpage_update_freelist); + + while (node) { + page = container_of((struct address_space **)node, + struct page, mapping); + node = node->next; + page->mapping = NULL; + __free_hugepage(page_hstate(page), page); + + cond_resched(); + } +} +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn); + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + /* No need to allocate vmemmap pages */ + if (!free_vmemmap_pages_per_hpage(h)) { + __free_hugepage(h, page); + return; + } + + /* + * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap + * pages. + * + * Only call schedule_work() if hpage_update_freelist is previously + * empty. Otherwise, schedule_work() had been called but the workfn + * hasn't retrieved the list yet. + */ + if (llist_add((struct llist_node *)&page->mapping, + &hpage_update_freelist)) + schedule_work(&hpage_update_work); +} + +static inline void free_gigantic_page(struct hstate *h, struct page *page) +{ + __free_gigantic_page(page, huge_page_order(h)); +} #else static inline void hugetlb_vmemmap_init(struct hstate *h) { @@ -1591,17 +1652,39 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + __free_hugepage(h, page); +} + +static inline void free_gigantic_page(struct hstate *h, struct page *page) +{ + /* + * Temporarily drop the hugetlb_lock, because + * we might block in __free_gigantic_page(). 
+ */ + spin_unlock(&hugetlb_lock); + __free_gigantic_page(page, huge_page_order(h)); + spin_lock(&hugetlb_lock); +} #endif static void update_and_free_page(struct hstate *h, struct page *page) { - int i; - if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) return; h->nr_huge_pages--; h->nr_huge_pages_node[page_to_nid(page)]--; + + __update_and_free_page(h, page); +} + +static void __free_hugepage(struct hstate *h, struct page *page) +{ + int i; + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | @@ -1613,14 +1696,8 @@ static void update_and_free_page(struct hstate *h, struct page *page) set_compound_page_dtor(page, NULL_COMPOUND_DTOR); set_page_refcounted(page); if (hstate_is_gigantic(h)) { - /* - * Temporarily drop the hugetlb_lock, because - * we might block in free_gigantic_page(). - */ - spin_unlock(&hugetlb_lock); destroy_compound_gigantic_page(page, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); - spin_lock(&hugetlb_lock); + free_gigantic_page(h, page); } else { __free_pages(page, huge_page_order(h)); } @@ -2057,7 +2134,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, if (vmemmap_pgtable_prealloc(h, page)) { if (hstate_is_gigantic(h)) - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_page(h, page); else put_page(page); return NULL;

From patchwork Sun Nov 8 14:11:03 2020
From: Muchun Song
Subject: [PATCH v3 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
Date: Sun, 8 Nov 2020 22:11:03 +0800
Message-Id:
<20201108141113.65450-12-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

When we free a hugetlb page back to the buddy allocator, we should allocate the vmemmap pages associated with it again. We can do that in __free_hugepage().

Signed-off-by: Muchun Song --- mm/hugetlb.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 109 insertions(+), 1 deletion(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index ded7f0fbde35..8295911fe76e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1307,6 +1307,8 @@ static void __free_hugepage(struct hstate *h, struct page *page); * reserve at least 2 pages as vmemmap areas. */ #define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) +#define GFP_VMEMMAP_PAGE (GFP_KERNEL | __GFP_NOFAIL | __GFP_MEMALLOC) #define page_huge_pte(page) ((page)->pmd_huge_pte) @@ -1490,7 +1492,7 @@ static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, struct list_head *free_pages) { unsigned long next; - unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; + unsigned long start = addr + RESERVE_VMEMMAP_SIZE; unsigned long end = addr + vmemmap_pages_size_per_hpage(h); struct page *reuse = NULL; @@ -1578,6 +1580,106 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) free_vmemmap_page_list(&free_pages); } +static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, + unsigned long start, + unsigned int nr_remap, + struct list_head *remap_pages) +{ + pgprot_t pgprot = PAGE_KERNEL; + void *from = (void *)page_private(reuse); + unsigned long addr, end = start + (nr_remap << PAGE_SHIFT); + + for (addr = start; addr < end; addr +=
PAGE_SIZE) { + void *to; + struct page *page; + pte_t entry, old = *ptep; + + page = list_first_entry_or_null(remap_pages, struct page, lru); + list_del(&page->lru); + to = page_to_virt(page); + copy_page(to, from); + + /* + * Make sure that any data that writes to the @to is made + * visible to the physical page. + */ + flush_kernel_vmap_range(to, PAGE_SIZE); + + prepare_vmemmap_page(page); + + entry = mk_pte(page, pgprot); + set_pte_at(&init_mm, addr, ptep++, entry); + + VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse); + } +} + +static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, + unsigned long addr, + struct list_head *remap_pages) +{ + unsigned long next; + unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; + unsigned long end = addr + vmemmap_pages_size_per_hpage(h); + struct page *reuse = NULL; + + addr = start; + do { + unsigned int nr_pages; + pte_t *ptep; + + ptep = pte_offset_kernel(pmd, addr); + if (!reuse) { + reuse = pte_page(ptep[-1]); + set_page_private(reuse, addr - PAGE_SIZE); + } + + next = vmemmap_hpage_addr_end(addr, end); + nr_pages = (next - addr) >> PAGE_SHIFT; + __remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages, + remap_pages); + } while (pmd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + +static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) +{ + int i; + + for (i = 0; i < free_vmemmap_pages_per_hpage(h); i++) { + struct page *page; + + /* This should not fail */ + page = alloc_page(GFP_VMEMMAP_PAGE); + list_add_tail(&page->lru, list); + } +} + +static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + pmd_t *pmd; + spinlock_t *ptl; + LIST_HEAD(remap_pages); + + if (!free_vmemmap_pages_per_hpage(h)) + return; + + alloc_vmemmap_pages(h, &remap_pages); + + pmd = vmemmap_to_pmd(head); + ptl = vmemmap_pmd_lock(pmd); + __remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, + &remap_pages); + if 
(!freed_vmemmap_hpage_dec(pmd_page(*pmd))) { + /* + * Todo: + * Merge pte to huge pmd if it has ever been split. + */ + } + spin_unlock(ptl); +} + /* * As update_and_free_page() can be called from a non-task context (while holding @@ -1653,6 +1755,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } +static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} + static inline void __update_and_free_page(struct hstate *h, struct page *page) { __free_hugepage(h, page); @@ -1685,6 +1791,8 @@ static void __free_hugepage(struct hstate *h, struct page *page) { int i; + alloc_huge_page_vmemmap(h, page); + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty |

From patchwork Sun Nov 8 14:11:04 2020
From: Muchun Song
Subject: [PATCH v3 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
Date: Sun, 8 Nov 2020 22:11:04 +0800
Message-Id: <20201108141113.65450-13-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

__free_huge_page_pmd_vmemmap() and __remap_huge_page_pmd_vmemmap() are almost the same code, so introduce a remap_huge_page_pmd_vmemmap() helper that takes the per-PTE operation as a callback to simplify the code.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 98 ++++++++++++++++++++++++------------------------------------
 1 file changed, 39 insertions(+), 59 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8295911fe76e..5d3806476212 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1454,6 +1454,41 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+typedef void (*remap_pte_fn)(struct page *reuse, pte_t *ptep,
+			     unsigned long start, unsigned int nr_pages,
+			     struct list_head *pages);
+
+static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					unsigned long addr,
+					struct list_head *pages,
+					remap_pte_fn remap_fn)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	flush_cache_vunmap(start, end);
+
+	addr = start;
+	do {
+		unsigned int nr_pages;
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse) {
+			reuse = pte_page(ptep[-1]);
+			set_page_private(reuse, addr - PAGE_SIZE);
+		}
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		nr_pages = (next - addr) >> PAGE_SHIFT;
+		remap_fn(reuse, ptep, addr, nr_pages, pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
@@ -1487,33 +1522,6 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					 unsigned long addr,
-					 struct list_head *free_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		unsigned int nr_pages;
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[-1]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		nr_pages = (next - addr) >> PAGE_SHIFT;
-		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
-					     free_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
 {
 	int i;
@@ -1573,7 +1581,8 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 		split_vmemmap_huge_page(h, head, pmd);
 	}
 
-	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
+				    __free_huge_page_pte_vmemmap);
 	freed_vmemmap_hpage_inc(pmd_page(*pmd));
 	spin_unlock(ptl);
@@ -1614,35 +1623,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					  unsigned long addr,
-					  struct list_head *remap_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		unsigned int nr_pages;
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse) {
-			reuse = pte_page(ptep[-1]);
-			set_page_private(reuse, addr - PAGE_SIZE);
-		}
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		nr_pages = (next - addr) >> PAGE_SHIFT;
-		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
-					      remap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -1669,8 +1649,8 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	pmd = vmemmap_to_pmd(head);
 	ptl = vmemmap_pmd_lock(pmd);
-	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
-				      &remap_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
+				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
 		/*
 		 * Todo:

From patchwork Sun Nov 8 14:11:05 2020
X-Patchwork-Id: 11889633
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 13/21] mm/hugetlb: Use PG_slab to indicate split pmd
Date: Sun, 8 Nov 2020 22:11:05 +0800
Message-Id: <20201108141113.65450-14-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

When we allocate a hugetlb page from the buddy allocator, we may need to split the huge pmd into ptes. When we free the hugetlb page, we can merge the ptes back into a huge pmd. So we need to record whether the pmd has ever been split. Page tables are never allocated from the slab allocator, so we can reuse PG_slab on the pmd's page to indicate that the pmd has been split.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5d3806476212..9b1ac52d9fdd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1565,6 +1565,25 @@ static void split_vmemmap_huge_page(struct hstate *h, struct page *head,
 	flush_tlb_kernel_range(start, addr);
 }
 
+static inline bool pmd_split(pmd_t *pmd)
+{
+	return PageSlab(pmd_page(*pmd));
+}
+
+static inline void set_pmd_split(pmd_t *pmd)
+{
+	/*
+	 * We should not use slab for page table allocation. So we can set
+	 * PG_slab to indicate that the pmd has been split.
+	 */
+	__SetPageSlab(pmd_page(*pmd));
+}
+
+static inline void clear_pmd_split(pmd_t *pmd)
+{
+	__ClearPageSlab(pmd_page(*pmd));
+}
+
 static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	pmd_t *pmd;
@@ -1579,6 +1598,7 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	if (vmemmap_pmd_huge(pmd)) {
 		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));
 		split_vmemmap_huge_page(h, head, pmd);
+		set_pmd_split(pmd);
 	}
 
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
@@ -1651,11 +1671,12 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	ptl = vmemmap_pmd_lock(pmd);
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
 				    __remap_huge_page_pte_vmemmap);
-	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
 		 * Todo:
 		 * Merge pte to huge pmd if it has ever been split.
 		 */
+		clear_pmd_split(pmd);
 	}
 	spin_unlock(ptl);
 }

From patchwork Sun Nov 8 14:11:06 2020
X-Patchwork-Id: 11889637
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page
Date: Sun, 8 Nov 2020 22:11:06 +0800
Message-Id: <20201108141113.65450-15-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Gigantic pages are allocated from bootmem. If we want to free their unused vmemmap pages, we also need page tables for the remapping, so allocate those page tables from bootmem as well.
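The boot-time reservation scheme described here can be modeled in miniature. This is a hypothetical sketch: the pool and the function names are invented, standing in for memblock and for the patch's gather_vmemmap_pgtable_prealloc():

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model: page tables must come from the same early allocator as
 * the gigantic pages themselves, so a fixed pool is reserved up front
 * and any huge page whose tables cannot be reserved is given back.
 */
#define PGTABLE_POOL 4          /* stand-in for what the early pool can supply */

static size_t pool_left = PGTABLE_POOL;

/* Try to reserve 'need' page tables for one gigantic page. */
static int prealloc_pgtables(size_t need)
{
        if (need > pool_left)
                return 0;       /* fail: caller must drop this huge page */
        pool_left -= need;
        return 1;
}

/*
 * Walk 'pages' gigantic pages, each needing 'need' page tables;
 * return how many pages had to be dropped because the pool ran out
 * (the real patch similarly returns the number of freed boot pages).
 */
static size_t gather_prealloc(size_t pages, size_t need)
{
        size_t dropped = 0;

        for (size_t i = 0; i < pages; i++)
                if (!prealloc_pgtables(need))
                        dropped++;
        return dropped;
}
```

The key property mirrored from the patch is that a reservation failure does not degrade gracefully at runtime: the corresponding huge page is surrendered immediately, while the reservation window is still open.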
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 71 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index afb9b18771c4..f8ca4d251aa8 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -506,6 +506,9 @@ struct hstate {
 struct huge_bootmem_page {
 	struct list_head list;
 	struct hstate *hstate;
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	pte_t *vmemmap_pte;
+#endif
 };
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9b1ac52d9fdd..ec0d33d2c426 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1419,6 +1419,62 @@ static void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 	pte_free_kernel(&init_mm, page_to_virt(pgtable));
 }
 
+static unsigned long __init gather_vmemmap_pgtable_prealloc(void)
+{
+	struct huge_bootmem_page *m, *tmp;
+	unsigned long nr_free = 0;
+
+	list_for_each_entry_safe(m, tmp, &huge_boot_pages, list) {
+		struct hstate *h = m->hstate;
+		unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+		unsigned int pgtable_size;
+
+		if (!nr)
+			continue;
+
+		pgtable_size = nr << PAGE_SHIFT;
+		m->vmemmap_pte = memblock_alloc_try_nid(pgtable_size,
+				PAGE_SIZE, 0, MEMBLOCK_ALLOC_ACCESSIBLE,
+				NUMA_NO_NODE);
+		if (!m->vmemmap_pte) {
+			nr_free++;
+			list_del(&m->list);
+			memblock_free_early(__pa(m), huge_page_size(h));
+		}
+	}
+
+	return nr_free;
+}
+
+static void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+	int i;
+	struct hstate *h = m->hstate;
+	unsigned long pte = (unsigned long)m->vmemmap_pte;
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+
+	if (!nr)
+		return;
+
+	vmemmap_pgtable_init(page);
+
+	for (i = 0; i < nr; i++, pte += PAGE_SIZE) {
+		pgtable_t pgtable = virt_to_page(pte);
+
+		__ClearPageReserved(pgtable);
+		vmemmap_pgtable_deposit(page, pgtable);
+	}
+
+	/*
+	 * If we had gigantic hugepages allocated at boot time, we need
+	 * to restore the 'stolen' pages to totalram_pages in order to
+	 * fix confusing memory reports from free(1) and another
+	 * side-effects, like CommitLimit going negative.
+	 */
+	adjust_managed_page_count(page, nr);
+}
+
 static void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);
@@ -1752,6 +1808,16 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 {
 }
 
+static inline unsigned long gather_vmemmap_pgtable_prealloc(void)
+{
+	return 0;
+}
+
+static inline void gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
@@ -3013,6 +3079,7 @@ static void __init gather_bootmem_prealloc(void)
 		WARN_ON(page_count(page) != 1);
 		prep_compound_huge_page(page, h->order);
 		WARN_ON(PageReserved(page));
+		gather_vmemmap_pgtable_init(m, page);
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
 
@@ -3065,6 +3132,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 			break;
 		cond_resched();
 	}
+
+	if (hstate_is_gigantic(h))
+		i -= gather_vmemmap_pgtable_prealloc();
+
 	if (i < h->max_huge_pages) {
 		char buf[32];

From patchwork Sun Nov 8 14:11:07 2020
X-Patchwork-Id: 11889643
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 15/21] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two
Date: Sun, 8 Nov 2020 22:11:07 +0800
Message-Id: <20201108141113.65450-16-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

We can only free the unused vmemmap pages to the buddy system when the size of struct page is a power of two. Add a BUILD_BUG_ON to catch the illegal case at build time.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ec0d33d2c426..5aaa274b0684 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3764,6 +3764,10 @@ static int __init hugetlb_init(void)
 {
 	int i;
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page));
+#endif
+
 	if (!hugepages_supported()) {
 		if (hugetlb_max_hstate || default_hstate_max_huge_pages)
 			pr_warn("HugeTLB: huge pages not supported, ignoring associated command-line parameters\n");

From patchwork Sun Nov 8 14:11:08 2020
X-Patchwork-Id: 11889647
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 16/21] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Sun, 8 Nov 2020 22:11:08 +0800
Message-Id: <20201108141113.65450-17-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Because we reuse the first tail page, setting PageHWPoison on a tail page would effectively mark a whole series of pages as poisoned. So use the head[4] private field to record the index of the real error page, and set PageHWPoison on that raw error page later.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 50 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5aaa274b0684..00a6e97629aa 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1794,6 +1794,29 @@ static inline void free_gigantic_page(struct hstate *h, struct page *page)
 {
 	__free_gigantic_page(page, huge_page_order(h));
 }
+
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+	struct page *page = head;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	if (PageHWPoison(head))
+		set_page_private(head + 4, page - head);
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -1841,6 +1864,22 @@ static inline void free_gigantic_page(struct hstate *h, struct page *page)
 	__free_gigantic_page(page, huge_page_order(h));
 	spin_lock(&hugetlb_lock);
 }
+
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (PageHWPoison(head) && page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
 #endif

 static void update_and_free_page(struct hstate *h, struct page *page)
@@ -1859,6 +1898,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;

 	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(page);

 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -2416,14 +2456,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;

From patchwork Sun Nov 8 14:11:09 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11889651
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 17/21] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Sun, 8 Nov 2020 22:11:09 +0800
Message-Id: <20201108141113.65450-18-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Flush the free work when dissolving a hugetlb page, to make sure that the hugetlb page has really been freed back to the buddy allocator.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 00a6e97629aa..4cd2f4a6366a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1795,6 +1795,11 @@ static inline void free_gigantic_page(struct hstate *h, struct page *page)
 	__free_gigantic_page(page, huge_page_order(h));
 }
+
+static inline void flush_free_huge_page_work(void)
+{
+	flush_work(&hpage_update_work);
+}

 static inline void subpage_hwpoison_deliver(struct page *head)
 {
 	struct page *page = head;
@@ -1865,6 +1870,10 @@ static inline void free_gigantic_page(struct hstate *h, struct page *page)
 	spin_lock(&hugetlb_lock);
 }
+
+static inline void flush_free_huge_page_work(void)
+{
+}

 static inline void subpage_hwpoison_deliver(struct page *head)
 {
 }
@@ -2439,6 +2448,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	bool need_flush = false;

 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -2463,10 +2473,19 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		need_flush = true;
 		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before return to make sure that
+	 * the hugetlb page is freed to the buddy.
+	 */
+	if (need_flush)
+		flush_free_huge_page_work();
+
 	return rc;
 }

From patchwork Sun Nov 8 14:11:10 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11889653
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 18/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Sun, 8 Nov 2020 22:11:10 +0800
Message-Id: <20201108141113.65450-19-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Add a kernel parameter, hugetlb_free_vmemmap, that allows the freeing of unused vmemmap pages associated with each HugeTLB page to be disabled at boot time.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    | 3 +++
 mm/hugetlb.c                                    | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..ccf07293cb63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]

+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on (default) | off }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..7d6129ee97dd 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this disables freeing
+	unused vmemmap pages associated each HugeTLB page.

 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4cd2f4a6366a..7c97a1d30fd9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1319,6 +1319,8 @@ static void __free_hugepage(struct hstate *h, struct page *page);
 	 (__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
 })

+static bool hugetlb_free_vmemmap_disabled __initdata;
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -1480,6 +1482,13 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;

+	if (hugetlb_free_vmemmap_disabled) {
+		h->nr_free_vmemmap_pages = 0;
+		pr_info("HugeTLB: disable free vmemmap pages for %s\n",
+			h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page not free to buddy system,
@@ -1822,6 +1831,20 @@ static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 	if (PageHWPoison(head))
 		set_page_private(head + 4, page - head);
 }
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "off"))
+		hugetlb_free_vmemmap_disabled = true;
+	else if (strcmp(buf, "on"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {

From patchwork Sun Nov 8 14:11:11 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11889657
6mc39Xe3BDPLG2Be3Eb5zfqaiCLQ8LZfFCla06FaKFLwL6h1W5k/ST6HTL+EJMbbkNp3 02mZEk5b9MWZOGeVLzXguPrRucoI2bCdH95wM+g5OXZpea7quuiDiY8/PcgUjsZfi0yD 3kLpsNdZJyCDbyGh2GucV042je/YtVdjQdUFeSsv525r5f3FdaSaYfdvC2wH0my3vdh5 fHIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=qt8rSyxnY6ivqdXKrIuDFH2FNO7xD/1XUBAOOo28q3E=; b=q0bO7R6ngkPFzr2jm+O8Yc2VM57UGOmpkCzd3s4Z4bTw3nh7UAuTtve115LRzUyUtp 1BtwaC6f1kiX9Jh4Tvz2U67I9ybqEMJfUDHJ8xBtGwL3cidntWwbTCqUbmhNiFN4aS8c vmK5c9+KIlp3ZejMEcLhY4Sq61B5ZV239AK+eAdd7Yf9q6oHBlO3iuCIDmDhmXmQCNXo OJwG4ov/9wTfJ/y+D5NDEobjlF/Gt3lr/mXLxNYHejq/NNNGNXXHVQkOwAjDJyTSQiia q8SujAfe0PlGP+0kwOLw9qkDGCLWUytqlG+sQVH1EyaVFziGe2RifwhEYvZ0i9HyQ+wp BbNA== X-Gm-Message-State: AOAM531l+QZ2k/xgQKourHjcEHTWTkcDbAOuvlcLEKZcSUjal9OuO7sA K2dUnESrK8iYN8ImvahyGMMZSA== X-Google-Smtp-Source: ABdhPJziW7gkH63G6HMPp/eINNu8bOyx1G0bL2TJha3usgQPV3fodBmg/PyvAkln/Z6pfQFbDewzgw== X-Received: by 2002:a17:902:59cd:b029:d6:7656:af1 with SMTP id d13-20020a17090259cdb02900d676560af1mr9077723plj.43.1604844896289; Sun, 08 Nov 2020 06:14:56 -0800 (PST) Received: from localhost.localdomain ([103.136.220.94]) by smtp.gmail.com with ESMTPSA id z11sm8754047pfk.52.2020.11.08.06.14.46 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 08 Nov 2020 06:14:55 -0800 (PST) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com Cc: 
duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [PATCH v3 19/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page Date: Sun, 8 Nov 2020 22:11:11 +0800 Message-Id: <20201108141113.65450-20-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com> References: <20201108141113.65450-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Merge pte to huge pmd if it has ever been split. Now only support gigantic page which's vmemmap pages size is an integer multiple of PMD_SIZE. This is the simplest case to handle. Signed-off-by: Muchun Song --- arch/x86/include/asm/hugetlb.h | 8 +++ include/linux/hugetlb.h | 8 +++ mm/hugetlb.c | 108 ++++++++++++++++++++++++++++++++++++++++- 3 files changed, 122 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h index c601fe042832..1de1c519a84a 100644 --- a/arch/x86/include/asm/hugetlb.h +++ b/arch/x86/include/asm/hugetlb.h @@ -12,6 +12,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd) { return pmd_large(*pmd); } + +#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge +static inline pmd_t vmemmap_pmd_mkhuge(struct page *page) +{ + pte_t entry = pfn_pte(page_to_pfn(page), PAGE_KERNEL_LARGE); + + return __pmd(pte_val(entry)); +} #endif #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index f8ca4d251aa8..32abfb420731 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -605,6 +605,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd) } #endif +#ifndef vmemmap_pmd_mkhuge +#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge +static inline pmd_t 
vmemmap_pmd_mkhuge(struct page *page) +{ + return pmd_mkhuge(mk_pmd(page, PAGE_KERNEL)); +} +#endif + #ifndef VMEMMAP_HPAGE_SHIFT #define VMEMMAP_HPAGE_SHIFT HPAGE_SHIFT #endif diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7c97a1d30fd9..52e56c3a9b72 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1708,6 +1708,63 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, } } +static void __replace_huge_page_pte_vmemmap(pte_t *ptep, unsigned long start, + unsigned int nr, struct page *huge, + struct list_head *free_pages) +{ + unsigned long addr; + unsigned long end = start + (nr << PAGE_SHIFT); + pgprot_t pgprot = PAGE_KERNEL; + + for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) { + struct page *page; + pte_t old = *ptep; + pte_t entry; + + prepare_vmemmap_page(huge); + + entry = mk_pte(huge++, pgprot); + VM_WARN_ON(!pte_present(old)); + page = pte_page(old); + list_add(&page->lru, free_pages); + + set_pte_at(&init_mm, addr, ptep, entry); + } +} + +static void replace_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start, + struct page *huge, + struct list_head *free_pages) +{ + unsigned long end = start + VMEMMAP_HPAGE_SIZE; + + flush_cache_vunmap(start, end); + __replace_huge_page_pte_vmemmap(pte_offset_kernel(pmd, start), start, + VMEMMAP_HPAGE_NR, huge, free_pages); + flush_tlb_kernel_range(start, end); +} + +static pte_t *merge_vmemmap_pte(pmd_t *pmdp, unsigned long addr) +{ + pte_t *pte; + struct page *page; + + pte = pte_offset_kernel(pmdp, addr); + page = pte_page(*pte); + set_pmd(pmdp, vmemmap_pmd_mkhuge(page)); + + return pte; +} + +static void merge_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start, + struct page *huge, + struct list_head *free_pages) +{ + replace_huge_page_pmd_vmemmap(pmd, start, huge, free_pages); + pte_free_kernel(&init_mm, merge_vmemmap_pte(pmd, start)); + flush_tlb_kernel_range(start, start + VMEMMAP_HPAGE_SIZE); +} + static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) { 
int i; @@ -1721,6 +1778,15 @@ static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) } } +static inline void dissolve_compound_page(struct page *page, unsigned int order) +{ + int i; + unsigned int nr_pages = 1 << order; + + for (i = 1; i < nr_pages; i++) + set_page_refcounted(page + i); +} + static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) { pmd_t *pmd; @@ -1738,10 +1804,48 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) __remap_huge_page_pte_vmemmap); if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) { /* - * Todo: - * Merge pte to huge pmd if it has ever been split. + * Merge pte to huge pmd if it has ever been split. Now only + * support gigantic page which's vmemmap pages size is an + * integer multiple of PMD_SIZE. This is the simplest case + * to handle. */ clear_pmd_split(pmd); + + if (IS_ALIGNED(vmemmap_pages_per_hpage(h), VMEMMAP_HPAGE_NR)) { + unsigned long addr = (unsigned long)head; + unsigned long end = addr + + vmemmap_pages_size_per_hpage(h); + + spin_unlock(ptl); + + for (; addr < end; addr += VMEMMAP_HPAGE_SIZE) { + void *to; + struct page *page; + + page = alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL, + VMEMMAP_HPAGE_ORDER); + if (!page) + goto out; + + dissolve_compound_page(page, + VMEMMAP_HPAGE_ORDER); + to = page_to_virt(page); + memcpy(to, (void *)addr, VMEMMAP_HPAGE_SIZE); + + /* + * Make sure that any data that writes to the + * @to is made visible to the physical page. 
+				 */
+				flush_kernel_vmap_range(to, VMEMMAP_HPAGE_SIZE);
+
+				merge_huge_page_pmd_vmemmap(pmd++, addr, page,
+							    &remap_pages);
+			}
+
+out:
+			free_vmemmap_page_list(&remap_pages);
+			return;
+		}
 	}
 	spin_unlock(ptl);
 }

From patchwork Sun Nov 8 14:11:12 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11889661
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v3 20/21] mm/hugetlb: Gather discrete indexes of tail page
Date: Sun, 8 Nov 2020 22:11:12 +0800
Message-Id: <20201108141113.65450-21-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save than the head struct
page can hold, so we have to abuse fields of the tail struct pages to
store it. To avoid conflicts caused by subsequent users of more tail
struct pages, gather these discrete tail page indexes in one place, which
also makes it easier to add a new tail page index later.
Signed-off-by: Muchun Song
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 16 ++++++++--------
 3 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 32abfb420731..cb604b9dd649 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 52e56c3a9b72..1dd1a9cec008 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1918,7 +1918,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1933,7 +1933,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
@@ -2074,20 +2074,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -2099,17 +2099,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)

From patchwork Sun Nov 8 14:11:13 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11889665
From: Muchun Song
Subject: [PATCH v3 21/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
Date: Sun, 8 Nov 2020 22:11:13 +0800
Message-Id: <20201108141113.65450-22-songmuchun@bytedance.com>
In-Reply-To: <20201108141113.65450-1-songmuchun@bytedance.com>
References: <20201108141113.65450-1-songmuchun@bytedance.com>

Only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` tail struct pages can
be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.
Signed-off-by: Muchun Song
---
 mm/hugetlb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1dd1a9cec008..66b96705597a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3946,6 +3946,8 @@ static int __init hugetlb_init(void)
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 	BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page));
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 #endif
 
 	if (!hugepages_supported()) {