From patchwork Tue Sep 15 12:59:24 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776485
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 01/24] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
Date: Tue, 15 Sep 2020 20:59:24 +0800
Message-Id: <20200915125947.26204-2-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into its own file, bootmem_info.c, for use by later patches in this series.
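
For reference, a typical caller of the moved registration API looks roughly like this (editor's sketch modeled on the x86 code touched later in this series, not part of this patch):

	#include <linux/bootmem_info.h>

	static void __init register_page_bootmem_info(void)
	{
		int nid;

		for_each_online_node(nid)
			register_page_bootmem_info_node(NODE_DATA(nid));
	}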
Signed-off-by: Muchun Song Acked-by: Mike Kravetz --- arch/x86/mm/init_64.c | 1 + include/linux/bootmem_info.h | 27 ++++++++++ include/linux/memory_hotplug.h | 23 -------- mm/Makefile | 1 + mm/bootmem_info.c | 99 ++++++++++++++++++++++++++++++++++ mm/memory_hotplug.c | 91 +------------------------------ 6 files changed, 129 insertions(+), 113 deletions(-) create mode 100644 include/linux/bootmem_info.h create mode 100644 mm/bootmem_info.c diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index b5a3fa4033d3..c7f7ad55b625 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h new file mode 100644 index 000000000000..65bb9b23140f --- /dev/null +++ b/include/linux/bootmem_info.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_BOOTMEM_INFO_H +#define __LINUX_BOOTMEM_INFO_H + +#include + +/* + * Types for free bootmem stored in page->lru.next. These have to be in + * some random range in unsigned long space for debugging purposes. + */ +enum { + MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, + SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, + MIX_SECTION_INFO, + NODE_INFO, + MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, +}; + +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE +void __init register_page_bootmem_info_node(struct pglist_data *pgdat); +#else +static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) +{ +} +#endif + +#endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 51a877fec8da..19e5d067294c 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -33,18 +33,6 @@ struct vmem_altmap; ___page; \ }) -/* - * Types for free bootmem stored in page->lru.next. These have to be in - * some random range in unsigned long space for debugging purposes. - */ -enum { - MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, - SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, - MIX_SECTION_INFO, - NODE_INFO, - MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, -}; - /* Types for control the zone type of onlined and offlined memory */ enum { /* Offline the memory. 
*/ @@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat) #endif /* CONFIG_NUMA */ #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */ -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat); -#else -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} -#endif extern void put_page_bootmem(struct page *page); extern void get_page_bootmem(unsigned long ingo, struct page *page, unsigned long type); @@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func) return -ENOSYS; } -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} - static inline int try_online_node(int nid) { return 0; diff --git a/mm/Makefile b/mm/Makefile index d5649f1c12c0..752111587c99 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_FAILSLAB) += failslab.o +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c new file mode 100644 index 000000000000..39fa8fc120bc --- /dev/null +++ b/mm/bootmem_info.c @@ -0,0 +1,99 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/mm/bootmem_info.c + * + * Copyright (C) + */ +#include +#include +#include +#include +#include + +#ifndef CONFIG_SPARSEMEM_VMEMMAP +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + /* Get section's memmap address */ + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + /* + * Get page for the memmap's phys address + * XXX: need more consideration for sparse_vmemmap... 
+ */ + page = virt_to_page(memmap); + mapsize = sizeof(struct page) * PAGES_PER_SECTION; + mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; + + /* remember memmap's page */ + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, SECTION_INFO); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); + +} +#else /* CONFIG_SPARSEMEM_VMEMMAP */ +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); +} +#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ + +void __init register_page_bootmem_info_node(struct pglist_data *pgdat) +{ + unsigned long i, pfn, end_pfn, nr_pages; + int node = pgdat->node_id; + struct page *page; + + nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; + page = virt_to_page(pgdat); + + for (i = 0; i < nr_pages; i++, page++) + get_page_bootmem(node, page, NODE_INFO); + + pfn = pgdat->node_start_pfn; + end_pfn = pgdat_end_pfn(pgdat); + + /* register section info */ + for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { + /* + * Some platforms can assign the same pfn to multiple nodes - on + * node0 as well as nodeN. To avoid registering a pfn against + * multiple nodes we check that this pfn does not already + * reside in some other nodes. + */ + if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) + register_page_bootmem_info_section(pfn); + } +} diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index baded53b9ff9..2da4ad071456 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page) } } -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -#ifndef CONFIG_SPARSEMEM_VMEMMAP -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - /* Get section's memmap address */ - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - /* - * Get page for the memmap's phys address - * XXX: need more consideration for sparse_vmemmap... 
- */ - page = virt_to_page(memmap); - mapsize = sizeof(struct page) * PAGES_PER_SECTION; - mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; - - /* remember memmap's page */ - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, SECTION_INFO); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); - -} -#else /* CONFIG_SPARSEMEM_VMEMMAP */ -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); -} -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ - -void __init register_page_bootmem_info_node(struct pglist_data *pgdat) -{ - unsigned long i, pfn, end_pfn, nr_pages; - int node = pgdat->node_id; - struct page *page; - - nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; - page = virt_to_page(pgdat); - - for (i = 0; i < nr_pages; i++, page++) - get_page_bootmem(node, page, NODE_INFO); - - pfn = pgdat->node_start_pfn; - end_pfn = pgdat_end_pfn(pgdat); - - /* register section info */ - for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { - /* - * Some platforms can assign the same pfn to multiple nodes - on - * node0 as well as nodeN. To avoid registering a pfn against - * multiple nodes we check that this pfn does not already - * reside in some other nodes. 
- */ - if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) - register_page_bootmem_info_section(pfn); - } -} -#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ - static int check_pfn_span(unsigned long pfn, unsigned long nr_pages, const char *reason) {

From patchwork Tue Sep 15 12:59:25 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776487
From: Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 02/24] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
Date: Tue, 15 Sep 2020 20:59:25 +0800
Message-Id: <20200915125947.26204-3-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

A later patch will use {get,put}_page_bootmem() to initialize the vmemmap pages of a HugeTLB page and to free unused vmemmap pages back to the buddy allocator, so move these helpers out from under CONFIG_MEMORY_HOTPLUG_SPARSE.
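
A minimal usage sketch of the helpers being moved (editor's illustration, not part of the patch; the loop bounds and variable names are placeholders):

	/* Tag each boot-allocated memmap page so it can be recognized later. */
	for (i = 0; i < nr_memmap_pages; i++, page++)
		get_page_bootmem(section_nr, page, SECTION_INFO);

	/*
	 * ...much later, drop the last reference; the page is then handed
	 * back to the buddy allocator via free_reserved_page().
	 */
	put_page_bootmem(page);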
Signed-off-by: Muchun Song Acked-by: Mike Kravetz --- arch/x86/mm/init_64.c | 2 +- include/linux/bootmem_info.h | 13 +++++++++++++ include/linux/memory_hotplug.h | 4 ---- mm/bootmem_info.c | 26 ++++++++++++++++++++++++++ mm/memory_hotplug.c | 27 --------------------------- mm/sparse.c | 1 + 6 files changed, 41 insertions(+), 32 deletions(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index c7f7ad55b625..0a45f062826e 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, return err; } -#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE) +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void register_page_bootmem_memmap(unsigned long section_nr, struct page *start_page, unsigned long nr_pages) { diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 65bb9b23140f..4ed6dee1adc9 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -18,10 +18,23 @@ enum { #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void __init register_page_bootmem_info_node(struct pglist_data *pgdat); + +void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type); +void put_page_bootmem(struct page *page); #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { } + +static inline void put_page_bootmem(struct page *page) +{ +} + +static inline void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type) +{ +} #endif #endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 19e5d067294c..c9f3361fe84b 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat) #endif /* CONFIG_NUMA */ #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */ -extern void put_page_bootmem(struct page *page); -extern void get_page_bootmem(unsigned long ingo, struct page *page, - unsigned long type); - void get_online_mems(void); void put_online_mems(void); diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c index 39fa8fc120bc..d276e96e487f 100644 --- a/mm/bootmem_info.c +++ b/mm/bootmem_info.c @@ -10,6 +10,32 @@ #include #include +void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type) +{ + page->freelist = (void *)type; + SetPagePrivate(page); + set_page_private(page, info); + page_ref_inc(page); +} + +void put_page_bootmem(struct page *page) +{ + unsigned long type; + + type = (unsigned long) page->freelist; + BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || + type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); + + if (page_ref_dec_return(page) == 1) { + page->freelist = NULL; + ClearPagePrivate(page); + set_page_private(page, 0); + INIT_LIST_HEAD(&page->lru); + free_reserved_page(page); + } +} + #ifndef CONFIG_SPARSEMEM_VMEMMAP static void register_page_bootmem_info_section(unsigned long start_pfn) { diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 2da4ad071456..ae57eedc341f 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -21,7 +21,6 @@ #include #include #include -#include #include #include #include @@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res) } #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE -void get_page_bootmem(unsigned long info, struct page *page, - unsigned long type) -{ - page->freelist = (void *)type; - SetPagePrivate(page); - set_page_private(page, info); 
- page_ref_inc(page); -} - -void put_page_bootmem(struct page *page) -{ - unsigned long type; - - type = (unsigned long) page->freelist; - BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || - type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); - - if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; - ClearPagePrivate(page); - set_page_private(page, 0); - INIT_LIST_HEAD(&page->lru); - free_reserved_page(page); - } -} - static int check_pfn_span(unsigned long pfn, unsigned long nr_pages, const char *reason) { diff --git a/mm/sparse.c b/mm/sparse.c index b25ad8e64839..a4138410d890 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "internal.h" #include

From patchwork Tue Sep 15 12:59:26 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776491
From: Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 03/24] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Tue, 15 Sep 2020 20:59:26 +0800
Message-Id: <20200915125947.26204-4-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to let users configure whether the feature of freeing unused vmemmap pages associated with HugeTLB pages is built in.

Signed-off-by: Muchun Song --- fs/Kconfig | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/fs/Kconfig b/fs/Kconfig index 976e8b9033c4..61e9c08096ca 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -245,6 +245,21 @@ config HUGETLBFS config HUGETLB_PAGE def_bool HUGETLBFS +config HUGETLB_PAGE_FREE_VMEMMAP + bool "Free unused vmemmap associated with HugeTLB pages" + default n + depends on HUGETLB_PAGE + depends on SPARSEMEM_VMEMMAP + depends on HAVE_BOOTMEM_INFO_NODE + help + There are many struct page structure associated with each HugeTLB + page. But we only use a few struct page structure. In this case, + it waste some memory. It is better to free the unused struct page + structures to buddy system which can save some memory. For + architectures that support it, say Y here. + + If unsure, say N. + config MEMFD_CREATE def_bool TMPFS || HUGETLBFS
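
As a concrete illustration (editor's note, not part of the patch), a configuration that can actually enable the new option must satisfy all of the dependencies above, for example:

	CONFIG_SPARSEMEM_VMEMMAP=y
	CONFIG_HAVE_BOOTMEM_INFO_NODE=y
	CONFIG_HUGETLBFS=y
	CONFIG_HUGETLB_PAGE=y
	CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y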
From patchwork Tue Sep 15 12:59:27 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776493
From: Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 04/24] mm/hugetlb: Register bootmem info when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
Date: Tue, 15 Sep 2020 20:59:27 +0800
Message-Id: <20200915125947.26204-5-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

We use put_page_bootmem() to free the unused vmemmap pages associated with each HugeTLB page, so the bootmem info has to be registered in advance, even when !CONFIG_NUMA.

Signed-off-by: Muchun Song --- arch/x86/mm/init_64.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0a45f062826e..0435bee2e172 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall; static void __init register_page_bootmem_info(void) { -#ifdef CONFIG_NUMA +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) int i; for_each_online_node(i)
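
Editor's sketch of the resulting boot-time flow (assuming x86-64 with CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y and CONFIG_NUMA disabled; the call chain is pieced together from the diffs in this series rather than quoted from them):

	/*
	 * mem_init()
	 *   register_page_bootmem_info()          // now built even without NUMA
	 *     register_page_bootmem_info_node()   // once per online node
	 *       get_page_bootmem(..., SECTION_INFO / MIX_SECTION_INFO / NODE_INFO)
	 *
	 * ...later, freeing a HugeTLB page's unused vmemmap ends in:
	 *       put_page_bootmem(page)             // returns the page to buddy
	 */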
From patchwork Tue Sep 15 12:59:28 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776563
From: Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 05/24] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Tue, 15 Sep 2020 20:59:28 +0800
Message-Id: <20200915125947.26204-6-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

A 2 MB HugeTLB page has 512 struct page structures (8 pages of vmemmap) associated with it. As far as I know, we only use the first 3 struct page structures and only read the compound_dtor member of the remaining ones. Every tail page stores the same compound_dtor value, so the first tail page can be reused: map the virtual addresses of the remaining 6 tail vmemmap pages onto the first tail page and then free those 6 pages. This means at least 2 pages must stay reserved as vmemmap. Introduce a new nr_free_vmemmap_pages field in struct hstate to record how many vmemmap pages associated with a HugeTLB page can be freed to the buddy system.

Signed-off-by: Muchun Song --- include/linux/hugetlb.h | 3 +++ mm/hugetlb.c | 35 +++++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index d5cc5f802dd4..eed3dd3bd626 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -492,6 +492,9 @@ struct hstate { unsigned int nr_huge_pages_node[MAX_NUMNODES]; unsigned int free_huge_pages_node[MAX_NUMNODES]; unsigned int surplus_huge_pages_node[MAX_NUMNODES]; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + unsigned int nr_free_vmemmap_pages; +#endif #ifdef CONFIG_CGROUP_HUGETLB /* cgroup control files */ struct cftype cgroup_files_dfl[7]; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 81a41aa080a5..f1b2b733b49b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1292,6 +1292,39 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +#define RESERVE_VMEMMAP_NR 2U + +static inline unsigned int nr_free_vmemmap(struct hstate *h) +{ + return h->nr_free_vmemmap_pages; +} + +static void __init hugetlb_vmemmap_init(struct hstate *h) +{ + unsigned int order = huge_page_order(h); + unsigned int vmemmap_pages; + + vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT; + /* + * The head page and the first tail page not free to buddy system, + * the others page will map to the first tail page. So there are + * (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
+ */ + if (vmemmap_pages > RESERVE_VMEMMAP_NR) + h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR; + else + h->nr_free_vmemmap_pages = 0; + + pr_info("HugeTLB: can free %d vmemmap pages for %s\n", + h->nr_free_vmemmap_pages, h->name); +} +#else +static inline void hugetlb_vmemmap_init(struct hstate *h) +{ +} +#endif + static void update_and_free_page(struct hstate *h, struct page *page) { int i; @@ -3285,6 +3318,8 @@ void __init hugetlb_add_hstate(unsigned int order) snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/1024); + hugetlb_vmemmap_init(h); + parsed_hstate = h; }
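
To make the arithmetic in the patch description above concrete, here is an editor's sketch (not part of the patch; it assumes 4 KB base pages and a 64-byte struct page, and the macro names are invented for illustration):

	#define HPAGE_2M_SIZE	(2UL << 20)		/* 2 MB HugeTLB page */
	#define NR_STRUCT_PAGES	(HPAGE_2M_SIZE / 4096)	/* 512 struct pages */
	#define VMEMMAP_BYTES	(NR_STRUCT_PAGES * 64)	/* 32 KB of vmemmap */
	#define VMEMMAP_PAGES	(VMEMMAP_BYTES / 4096)	/* 8 vmemmap pages */
	/*
	 * The head page and the first tail page stay mapped
	 * (RESERVE_VMEMMAP_NR == 2), so nr_free_vmemmap_pages = 8 - 2 = 6
	 * pages can be returned to the buddy system per 2 MB HugeTLB page.
	 */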
From patchwork Tue Sep 15 12:59:29 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776495
From: Muchun Song <songmuchun@bytedance.com>
Subject: [RFC PATCH 06/24] mm/hugetlb: Introduce pgtable allocation/freeing helpers
Date: Tue, 15 Sep 2020 20:59:29 +0800
Message-Id: <20200915125947.26204-7-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On some architectures, the vmemmap areas use huge page mapping. If we want to free the unused vmemmap pages, we have to split the huge pmd firstly. So we should pre-allocate pgtable to split huge pmd. Signed-off-by: Muchun Song --- include/linux/hugetlb.h | 17 ++++++ mm/hugetlb.c | 117 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 134 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index eed3dd3bd626..ace304a6196c 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -593,6 +593,23 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h) #include +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +#ifndef arch_vmemmap_support_huge_mapping +static inline bool arch_vmemmap_support_huge_mapping(void) +{ + return false; +} +#endif + +#ifndef VMEMMAP_HPAGE_SHIFT +#define VMEMMAP_HPAGE_SHIFT PMD_SHIFT +#endif +#define VMEMMAP_HPAGE_ORDER (VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT) +#define VMEMMAP_HPAGE_NR (1 << VMEMMAP_HPAGE_ORDER) +#define VMEMMAP_HPAGE_SIZE ((1UL) << VMEMMAP_HPAGE_SHIFT) +#define VMEMMAP_HPAGE_MASK (~(VMEMMAP_HPAGE_SIZE - 1)) +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ + #ifndef is_hugepage_only_range static inline int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr, unsigned long len) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index f1b2b733b49b..d6ae9b6876be 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1295,11 +1295,108 @@ static inline void destroy_compound_gigantic_page(struct page *page, #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP #define RESERVE_VMEMMAP_NR 2U +#define page_huge_pte(page) ((page)->pmd_huge_pte) + static inline unsigned int nr_free_vmemmap(struct hstate *h) { return h->nr_free_vmemmap_pages; } +static inline unsigned int nr_vmemmap(struct hstate *h) +{ + return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR; +} + +static inline unsigned long nr_vmemmap_size(struct hstate *h) +{ + return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT; +} + +static inline unsigned int nr_pgtable(struct hstate *h) +{ + unsigned long vmemmap_size = nr_vmemmap_size(h); + + if (!arch_vmemmap_support_huge_mapping()) + return 0; + + /* + * No need pre-allocate page tabels when there is no vmemmap pages + * to free. 
+ */ + if (!nr_free_vmemmap(h)) + return 0; + + return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT; +} + +static inline void vmemmap_pgtable_init(struct page *page) +{ + page_huge_pte(page) = NULL; +} + +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p) +{ + pgtable_t pgtable = virt_to_page(pte_p); + + /* FIFO */ + if (!page_huge_pte(page)) + INIT_LIST_HEAD(&pgtable->lru); + else + list_add(&pgtable->lru, &page_huge_pte(page)->lru); + page_huge_pte(page) = pgtable; +} + +static pte_t *vmemmap_pgtable_withdraw(struct page *page) +{ + pgtable_t pgtable; + + /* FIFO */ + pgtable = page_huge_pte(page); + if (unlikely(!pgtable)) + return NULL; + page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru, + struct page, lru); + if (page_huge_pte(page)) + list_del(&pgtable->lru); + return page_to_virt(pgtable); +} + +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page) +{ + int i; + pte_t *pte_p; + unsigned int nr = nr_pgtable(h); + + if (!nr) + return 0; + + vmemmap_pgtable_init(page); + + for (i = 0; i < nr; i++) { + pte_p = pte_alloc_one_kernel(&init_mm); + if (!pte_p) + goto out; + vmemmap_pgtable_deposit(page, pte_p); + } + + return 0; +out: + while (i-- && (pte_p = vmemmap_pgtable_withdraw(page))) + pte_free_kernel(&init_mm, pte_p); + return -ENOMEM; +} + +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) +{ + pte_t *pte_p; + + if (!nr_pgtable(h)) + return; + + while ((pte_p = vmemmap_pgtable_withdraw(page))) + pte_free_kernel(&init_mm, pte_p); +} + static void __init hugetlb_vmemmap_init(struct hstate *h) { unsigned int order = huge_page_order(h); @@ -1323,6 +1420,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h) static inline void hugetlb_vmemmap_init(struct hstate *h) { } + +static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page) +{ + return 0; +} + +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) +{ +} #endif static void update_and_free_page(struct hstate *h, struct page *page) @@ -1531,6 +1637,9 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + /* Must be called before the initialization of @page->lru */ + vmemmap_pgtable_free(h, page); + INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); set_hugetlb_cgroup(page, NULL); @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, if (!page) return NULL; + if (vmemmap_pgtable_prealloc(h, page)) { + if (hstate_is_gigantic(h)) + free_gigantic_page(page, huge_page_order(h)); + else + put_page(page); + return NULL; + } + if (hstate_is_gigantic(h)) prep_compound_gigantic_page(page, huge_page_order(h)); prep_new_huge_page(h, page, page_to_nid(page)); From patchwork Tue Sep 15 12:59:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776497 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8735D14B7 for ; Tue, 15 Sep 2020 13:01:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 407B020829 for ; Tue, 15 Sep 2020 13:01:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com 
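The deposit/withdraw helpers in the patch above stash the pre-allocated PTE pages on a list whose head lives in page->pmd_huge_pte, and the split path later pulls them back, one per huge PMD that has to be broken up. Below is a much-simplified userspace analogue of that pairing; struct pgtable, deposit() and withdraw() are stand-ins invented for illustration, not the kernel types, which chain the pgtable pages through page->lru instead.

/*
 * Userspace sketch of the pre-allocate / deposit / withdraw pairing.
 * Not kernel code: a plain singly linked stash plays the role of the
 * list headed at page_huge_pte(page).
 */
#include <stdio.h>
#include <stdlib.h>

struct pgtable {
	struct pgtable *next;
	int id;
};

static struct pgtable *stash;	/* plays the role of page_huge_pte(page) */

static void deposit(struct pgtable *p)
{
	p->next = stash;
	stash = p;
}

static struct pgtable *withdraw(void)
{
	struct pgtable *p = stash;

	if (p)
		stash = p->next;
	return p;
}

int main(void)
{
	int i;
	struct pgtable *p;

	for (i = 0; i < 3; i++) {	/* prealloc path: one per huge PMD */
		p = malloc(sizeof(*p));
		p->id = i;
		deposit(p);
	}

	while ((p = withdraw())) {	/* split path: consume them again */
		printf("using preallocated pgtable %d\n", p->id);
		free(p);
	}
	return 0;
}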
header.i=@bytedance-com.20150623.gappssmtp.com header.b="rKQ26Puo" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 407B020829 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0449990003C; Tue, 15 Sep 2020 09:01:40 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id F107590003A; Tue, 15 Sep 2020 09:01:39 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D8BAC90003C; Tue, 15 Sep 2020 09:01:39 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0173.hostedemail.com [216.40.44.173]) by kanga.kvack.org (Postfix) with ESMTP id B82C790003A for ; Tue, 15 Sep 2020 09:01:39 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 3352645CD for ; Tue, 15 Sep 2020 13:01:38 +0000 (UTC) X-FDA: 77265307476.20.steel66_2106ae727111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id 662BD180C1487 for ; Tue, 15 Sep 2020 13:01:30 +0000 (UTC) X-Spam-Summary: 1,0,0,b4cce527e70bde09,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:965:966:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1534:1540:1568:1711:1714:1730:1747:1777:1792:1981:2194:2196:2199:2200:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:4385:4390:4395:5007:6120:6261:6653:6737:6738:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:13069:13311:13357:13894:14096:14181:14384:14721:21080:21444:21451:21627:21990:30054,0,RBL:209.85.210.194:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04yrmm83k7iwzodgy9jou8ekfeb6uocgz4jbk73yyhhiashbfummdfh7xwwutw4.xn63xgm4jpo39j4ny4kwo54usmu699r6zc1jteuuzk7gh81fz9mt9rh7gs9xxwj.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:28,LUA_SUMMARY:none X-HE-Tag: steel66_2106ae727111 X-Filterd-Recvd-Size: 4397 Received: from mail-pf1-f194.google.com (mail-pf1-f194.google.com [209.85.210.194]) by imf09.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:01:18 +0000 (UTC) Received: by mail-pf1-f194.google.com with SMTP id l126so1893499pfd.5 for ; Tue, 15 Sep 2020 06:01:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5ZrW0owKm5Gu/md3j3csp/5vC7YMmkDgEblXZgaxGvk=; b=rKQ26Puop/3DC0nrZnzSaunkjQvxjCcWAFcRgsiFlS5oDPySzDrGYBOFscTRKZ4YlM HaZkWdGc00lfINkpTlQMky/65NALAje0ARqIxEgH8YHLW7y7qMzJoRrKyRVuYSbP/nf+ XH+MsNGqG3VKJTq7YO2bdVXuP1ZxSlHHxR7GFhsEItGdJemBjAS1la5HsrJ/eZvvGPT+ uMOO4JsqFuKv6FtCD0c5rWItP6VMCkdTcNiPG/pDcH2K7Gwv8fleIR1REqxGU8d0hvRN zVLvYD5ThV6VB1WeSJIJx6TUdr4YJgoVoF3sNLFaVYR08jBnTX5fXC9FmrD2eOTInsFw GDrg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=5ZrW0owKm5Gu/md3j3csp/5vC7YMmkDgEblXZgaxGvk=; 
b=rKFox2p0vlpRiqB4IzTFwDu2CMZ3InQxKAGKPl//hiGEicNN8BXpXG1sh3v4oZSktL /awSxjFzsm3bmlEOU5ExswWFQ4RgDpe+Af+cAaapEnD5HYk2D86yY/Kv40NPfSdI3lj/ xhiVSNGn1oHN/DvDEqtfPij++kP8WDJjiAmyVTItKSsUaJEHlKxOEGgPnBqkWlmNQG5H gHHm2J8cYdUEjXPimZXHT+F6AvDzLztxvG/6Z89UaNihxBXF9BaaMEEs0Ium83l2Qh/Y RknoJ6tyuZYvKebSqDPWhuBH0VF3TT4eUQLZ8xiLHdUJbNBB7jkRQPHa/sM8ZScRAV88 Zkhw== X-Gm-Message-State: AOAM533OF19Fj3FmP1T4W8JRI/N02HbpvgYn+cchDUACs5YfA4MpMiZg yWyCTJc10euYXYFZsaMqZdyzgA== X-Google-Smtp-Source: ABdhPJw36/tEPEk+vlzhPHIn06m+FQnqNgd2oPnWDmzY+7j8re+39DQktUux3tDDjqBaTAXQu8D/0A== X-Received: by 2002:a63:1226:: with SMTP id h38mr14256662pgl.196.1600174876644; Tue, 15 Sep 2020 06:01:16 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.07 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:01:15 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 07/24] mm/hugetlb: Add freeing unused vmemmap pages support for x86 Date: Tue, 15 Sep 2020 20:59:30 +0800 Message-Id: <20200915125947.26204-8-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 662BD180C1487 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On x86_64 architecture, we use hupe page mapping vmemmap area. We should define VMEMMAP_HPAGE_SHIFT to the correct value to support freeing unused vmemmap pages. 
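To put rough numbers on the pre-allocation this enables: with 4 KB base pages, VMEMMAP_HPAGE_SHIFT == PMD_SHIFT == 21 as defined here, an assumed 64-byte struct page, and the RESERVE_VMEMMAP_NR of 2 from the previous patch, the sketch below works out how many vmemmap pages describe one hugepage, how many of them become freeable, and how many PMD-level page tables nr_pgtable() pre-allocates for the split. It is a userspace sketch of the arithmetic only; show() and the hard-coded constants are illustrative assumptions, not kernel code.

/*
 * Rough arithmetic behind nr_free_vmemmap_pages and nr_pgtable(),
 * assuming typical x86_64 values (4 KB pages, PMD_SHIFT == 21,
 * sizeof(struct page) == 64, RESERVE_VMEMMAP_NR == 2).
 */
#include <stdio.h>

#define PAGE_SHIFT          12UL
#define VMEMMAP_HPAGE_SHIFT 21UL	/* PMD_SHIFT on x86_64 */
#define VMEMMAP_HPAGE_SIZE  (1UL << VMEMMAP_HPAGE_SHIFT)
#define STRUCT_PAGE_SIZE    64UL	/* assumed sizeof(struct page) */
#define RESERVE_VMEMMAP_NR  2UL
#define ALIGN_UP(x, a)      (((x) + (a) - 1) & ~((a) - 1))

static void show(const char *name, unsigned long hpage_order)
{
	unsigned long nr_base_pages = 1UL << hpage_order;
	unsigned long vmemmap_pages = (nr_base_pages * STRUCT_PAGE_SIZE) >> PAGE_SHIFT;
	unsigned long free_pages    = vmemmap_pages - RESERVE_VMEMMAP_NR;
	unsigned long vmemmap_size  = vmemmap_pages << PAGE_SHIFT;
	unsigned long nr_pgtable    = ALIGN_UP(vmemmap_size, VMEMMAP_HPAGE_SIZE)
				      >> VMEMMAP_HPAGE_SHIFT;

	printf("%s: %lu vmemmap pages, %lu freeable, %lu pgtable(s) preallocated\n",
	       name, vmemmap_pages, free_pages, nr_pgtable);
}

int main(void)
{
	show("2MB hugepage", 9);	/* order 9:  512 base pages    */
	show("1GB hugepage", 18);	/* order 18: 262144 base pages */
	return 0;
}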
Signed-off-by: Muchun Song --- arch/x86/include/asm/hugetlb.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h index 1721b1aadeb1..f5e882f999cd 100644 --- a/arch/x86/include/asm/hugetlb.h +++ b/arch/x86/include/asm/hugetlb.h @@ -5,6 +5,11 @@ #include #include +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +#define VMEMMAP_HPAGE_SHIFT PMD_SHIFT +#define arch_vmemmap_support_huge_mapping() boot_cpu_has(X86_FEATURE_PSE) +#endif + #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE) #endif /* _ASM_X86_HUGETLB_H */ From patchwork Tue Sep 15 12:59:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776571 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CC45292C for ; Tue, 15 Sep 2020 13:13:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 68E9720B1F for ; Tue, 15 Sep 2020 13:13:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="OUdkxbex" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 68E9720B1F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4B7FE900052; Tue, 15 Sep 2020 09:13:32 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 442CD90004C; Tue, 15 Sep 2020 09:13:32 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2E1C2900052; Tue, 15 Sep 2020 09:13:32 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0196.hostedemail.com [216.40.44.196]) by kanga.kvack.org (Postfix) with ESMTP id 1395490004C for ; Tue, 15 Sep 2020 09:13:32 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B83C812FC for ; Tue, 15 Sep 2020 13:13:31 +0000 (UTC) X-FDA: 77265337422.12.trade02_150ef5e27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id 73D8F18010A59 for ; Tue, 15 Sep 2020 13:13:31 +0000 (UTC) X-Spam-Summary: 1,0,0,f6aadf6644b6f8ea,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:965:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1541:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3352:3865:3867:3868:3870:4385:4390:4395:4605:5007:6120:6261:6653:6737:6738:8957:10004:11026:11473:11658:11914:12043:12048:12114:12291:12296:12297:12438:12517:12519:12555:12895:13069:13311:13357:13894:13972:14096:14181:14384:14721:21080:21444:21451:21627:30054:30075,0,RBL:209.85.215.193:@bytedance.com:.lbl8.mailshell.net-66.100.201.201 
62.2.0.100;04yrc5ot45nnkauy5fdy3jobw47icyp1iq1boq96uzau4cswmuxyst7dkq4x93h.j66g7tremc7uj7uy8fxhxdwi1fmzjm317bosox78o6ttbzaq81o5bqfxo9fgxwz.n-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: trade02_150ef5e27111 X-Filterd-Recvd-Size: 5305 Received: from mail-pg1-f193.google.com (mail-pg1-f193.google.com [209.85.215.193]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:13:30 +0000 (UTC) Received: by mail-pg1-f193.google.com with SMTP id g29so1989452pgl.2 for ; Tue, 15 Sep 2020 06:13:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Xbcz6YL/V2+vBJkP0RuPn1EYocXGKAfrd+QWEMaxAUM=; b=OUdkxbexs/MVUGZt2ahUW/InQH4DpqpR5JJ1oBb+J1X5eZKkizEqnmv60+K7HFZrLr OVKqtQHYFVYnqX0kj1o45dnJG6jPv26zLnwRxJNA16fWPRitcW3BHga2oxsq2rGVELhn kUHn3tN214hB5Z2AeqD1vODmkiownOfNFsbYicarsbxcKNvEG6wzTJPQ+l3d47+LMXQe 8pkzMxqwgTNfDyZGx2S3lCJND+F8hegnCnJXesFON/ZNiUGuVFHYXQ9K9zFfvp90WKi2 HOkUQ5hJ81UyQqS1JBwNLnp46AhnNdB6TWKDGFkYdQUpabRyhyZ5eqkZBbRrgZ9JuHl5 +ZCA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Xbcz6YL/V2+vBJkP0RuPn1EYocXGKAfrd+QWEMaxAUM=; b=DA1o5gNggGOpUNqXpNq3RxNSMKnx7rx4RyGGkeoi01HFEQybUo/PbrgS6KdR58O9mf NUYggw48UwpU2FWjgPvAnsQk/i9j+p+13XPwMpHvkjpfFzKU6l/FjU25a0+Q3oWFv5di Y94jHEvZXlNNYvChkqWq561tAnjSxF+xs59NK3ylL2EvPhBGIOddxTtsRNyJyCc6DFFB u7F+l+RbWZumZWvDRvFspfbk/6P/9y5rHwypyHtwd2SnSQ2Kze5mKZt5PcvrL4siZ6Z3 FWDNAIy6rdTn+xLUfVZ2iG9KfWHVGWejIJo1sxtZTzngNPWMSDsd/5yR/s4r2e38nPah j6xA== X-Gm-Message-State: AOAM531VclFCj5WYnR8MhHXZCZAlOdV6Ur+Ic/nkCvaLMt3YeLMfVI9m QQzNS+8X8sX7O0uStHj9TXA2eTHWw7gYJhqmdgA= X-Google-Smtp-Source: ABdhPJzenFF0hJojbFeX+YMQrctmN2BJy+iieQr74UR3tyIddb8Jks43HUjUyWDUpivgKWRty3UcRg== X-Received: by 2002:a62:cfc5:0:b029:13e:d13d:a083 with SMTP id b188-20020a62cfc50000b029013ed13da083mr18107687pfg.26.1600174886630; Tue, 15 Sep 2020 06:01:26 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.17 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:01:25 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 08/24] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Date: Tue, 15 Sep 2020 20:59:31 +0800 Message-Id: <20200915125947.26204-9-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 73D8F18010A59 X-Spamd-Result: default: 
False [0.00 / 100.00] X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the later patch, we can use the free_vmemmap_page() to free the unused vmemmap pages and initialize a page for vmemmap page using via prepare_vmemmap_page(). Signed-off-by: Muchun Song --- include/linux/bootmem_info.h | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..ce9d8c97369d 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -3,6 +3,7 @@ #define __LINUX_BOOTMEM_INFO_H #include +#include /* * Types for free bootmem stored in page->lru.next. These have to be in @@ -22,6 +23,30 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +static inline void free_vmemmap_page(struct page *page) +{ + VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2); + + /* bootmem page has reserved flag in the reserve_bootmem_region */ + if (PageReserved(page)) { + unsigned long magic = (unsigned long)page->freelist; + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + WARN_ON(1); + } +} + +static inline void prepare_vmemmap_page(struct page *page) +{ + unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page)); + + get_page_bootmem(section_nr, page, SECTION_INFO); + __SetPageReserved(page); + adjust_managed_page_count(page, -1); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { From patchwork Tue Sep 15 12:59:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776503 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D59AC14B7 for ; Tue, 15 Sep 2020 13:02:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8598A20829 for ; Tue, 15 Sep 2020 13:02:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="CI847Zj/" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8598A20829 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4FDED90003D; Tue, 15 Sep 2020 09:01:59 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4AD2590003A; Tue, 15 Sep 2020 09:01:59 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 34E4590003D; Tue, 15 Sep 2020 09:01:59 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0121.hostedemail.com [216.40.44.121]) by kanga.kvack.org (Postfix) with ESMTP id 1B6C390003A for ; Tue, 15 Sep 2020 09:01:59 -0400 (EDT) Received: from smtpin17.hostedemail.com 
(10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id C1ABF12F8 for ; Tue, 15 Sep 2020 13:01:58 +0000 (UTC) X-FDA: 77265308316.17.glass32_2f0787b27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 1C49E180D01B8 for ; Tue, 15 Sep 2020 13:01:40 +0000 (UTC) X-Spam-Summary: 1,0,0,15db51d78a9bcfdf,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1541:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2693:2731:3138:3139:3140:3141:3142:3352:3865:3866:3867:3870:3872:4605:5007:6120:6261:6653:6737:6738:7875:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:13069:13311:13357:13894:14181:14384:14721:21080:21444:21451:21627:30054,0,RBL:209.85.215.196:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04ygcpphnmoyqr9iffy8ybuuht5pdyp8wfienafme74n1tmbgw3h5ptdhk7dqcm.oporkur7pxx97dj7ozt3bhbun4dar3hhachxm3ne8dr6edrcqjpkhwe5eyh1tcz.r-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:35,LUA_SUMMARY:none X-HE-Tag: glass32_2f0787b27111 X-Filterd-Recvd-Size: 4724 Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:01:37 +0000 (UTC) Received: by mail-pg1-f196.google.com with SMTP id y1so1944753pgk.8 for ; Tue, 15 Sep 2020 06:01:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HzCS8EXafsw22jW584WXehOSckVkzDoTlfnh4khNpEo=; b=CI847Zj/5nu95tSosrklznikRuaOObypKdPqj4HUZgUNRmt1XbVJmWuAzag/0OYlb6 MIGYKd7dTEHGbBXlsWNf/ZCrc78x/gqum2NxuSvIwKBO715W2IU/8DRtrBTNGrbWvdXd QrygLlTpB14MP06uq2TL7JQ7Yrh01EPoneHp1zmT3h9j+d8WvGM+t3Cwm0rsX2GAdRab eh+43UF1Kan1r/1cT840y1nng8NrB7JVRUvt2zXfY0s+zTg+P+91O7u7DhWIbn7Q43E7 i8jshMwDwUQp1gvaOWrDFLXCmupOF/M7AAEDAQPyKhFtBzsH6aD/dbzHSNep9XripEWb LnoQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HzCS8EXafsw22jW584WXehOSckVkzDoTlfnh4khNpEo=; b=cCAvQDb2aGrlo5QpuA9ANigCBoRJIPeOCm9MSBFjM2YUZCnEq9dcgtLlxTiUort/jj ygjMTEmYVg/vJlG9W/ZD4YrDTce7sMXoX6fQkPCS20M4UYvRo7GxL56V33o1P/wZyiVI jqi6AfczK9ftquhEIM7Sjy9X1Os5DESxY0nFdGoef21a+QsqiZZvkWUD00ZN8Nl+4gVH KNWSNqLPYKNayLyRKNJMXapO0Sb7WWg2SXnDrRwEcR/AelEm5pMFXZa93FfqRNUsK2lu YDbswYsVQhsAOEUT9TqF3AOYBPGG1fVLrnWApJtW4wKLEGMxqzNzsJ3aJRazGyHHTedl QQVw== X-Gm-Message-State: AOAM533QUL/F3irq1R72SqpddYOqjgcA2ySH8d+UgOUoPC3LTFL5iA6i YU4xthipiBk6BpJPkA0f7nLCVQ== X-Google-Smtp-Source: ABdhPJwWLmX2bwK4KzR5iePJzhj4/eXQJhfo6bmr7reawoCL346+hlc0xAAcCfx3B+T4ZfR4bCJVUg== X-Received: by 2002:a63:d14b:: with SMTP id c11mr14914316pgj.64.1600174896481; Tue, 15 Sep 2020 06:01:36 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.27 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:01:36 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, 
dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 09/24] x86/mm: Introduce VMEMMAP_SIZE/VMEMMAP_END macro Date: Tue, 15 Sep 2020 20:59:32 +0800 Message-Id: <20200915125947.26204-10-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 1C49E180D01B8 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the later patch, we will walk the page table for vmemmap area. So we want to know the range of vmemmap area addresses in order to distinguish whether it comes from vememmap areas. If not, just we can do not walk the page table. Signed-off-by: Muchun Song --- arch/x86/include/asm/pgtable_64_types.h | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 52e5f5f2240d..bedbd2e7d06c 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d; # define VMEMMAP_START __VMEMMAP_BASE_L4 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ +/* + * VMEMMAP_SIZE - allows the whole linear region to be covered by + * a struct page array. 
+ */ +#define VMEMMAP_SIZE (1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \ + 1 + ilog2(sizeof(struct page)))) +#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE) + #define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) From patchwork Tue Sep 15 12:59:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776511 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 57AF6746 for ; Tue, 15 Sep 2020 13:02:34 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F19FA20872 for ; Tue, 15 Sep 2020 13:02:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="CIREYWgK" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F19FA20872 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 34F32900040; Tue, 15 Sep 2020 09:02:31 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 2FF4C90003A; Tue, 15 Sep 2020 09:02:31 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1C8B3900040; Tue, 15 Sep 2020 09:02:31 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0240.hostedemail.com [216.40.44.240]) by kanga.kvack.org (Postfix) with ESMTP id 0670E90003A for ; Tue, 15 Sep 2020 09:02:31 -0400 (EDT) Received: from smtpin13.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id A6D24441E for ; Tue, 15 Sep 2020 13:02:30 +0000 (UTC) X-FDA: 77265309660.13.level73_080fcb827111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin13.hostedemail.com (Postfix) with ESMTP id 25D371813F570 for ; Tue, 15 Sep 2020 13:02:25 +0000 (UTC) X-Spam-Summary: 1,0,0,f949a2da53ef71f4,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:1:2:41:355:379:421:541:800:901:960:965:966:967:973:988:989:1260:1263:1311:1314:1345:1359:1437:1515:1605:1730:1747:1777:1792:1801:2194:2196:2198:2199:2200:2201:2393:2525:2553:2559:2563:2682:2685:2731:2859:2901:2904:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4052:4250:4321:4385:4390:4395:4605:5007:6119:6120:6261:6653:6737:6738:7875:7903:8599:8603:8957:9010:9025:9036:9121:9388:10004:10030:11026:11257:11320:11473:11658:11914:12043:12048:12291:12295:12296:12297:12438:12517:12519:12555:12683:12895:12986:13255:13894:14096:14107:14110:21063:21080:21094:21323:21444:21451:21627:21966:21990:30003:30029:30054:30055:30064:30083:30090,0,RBL:209.85.210.193:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04yr6insrycd4k4fc7yz9wiac4beryp51a1863ya14ggcydfpqpcgyirbqxstjm.p36473 w3ifnikm X-HE-Tag: level73_080fcb827111 X-Filterd-Recvd-Size: 13400 Received: from mail-pf1-f193.google.com 
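Plugging the usual x86_64 values into the formula above (PAGE_SHIFT == 12, an assumed 64-byte struct page so ilog2(sizeof(struct page)) == 6, and __VIRTUAL_MASK_SHIFT of 47 for 4-level or 56 for 5-level paging) gives a vmemmap window of 1 TiB and 512 TiB respectively, matching the documented x86_64 memory map. The small sketch below only recomputes that; vmemmap_size() is a hypothetical helper, not the kernel macro.

/*
 * Sanity check of the VMEMMAP_SIZE formula under assumed x86_64
 * constants.  Userspace sketch only.
 */
#include <stdio.h>

#define PAGE_SHIFT		12
#define ILOG2_STRUCT_PAGE	6	/* ilog2(64) */

static unsigned long long vmemmap_size(int virtual_mask_shift)
{
	return 1ULL << (virtual_mask_shift - PAGE_SHIFT - 1 + ILOG2_STRUCT_PAGE);
}

int main(void)
{
	printf("4-level paging: %llu TiB of vmemmap\n", vmemmap_size(47) >> 40);
	printf("5-level paging: %llu TiB of vmemmap\n", vmemmap_size(56) >> 40);
	return 0;
}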
(mail-pf1-f193.google.com [209.85.210.193]) by imf14.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:01:47 +0000 (UTC) Received: by mail-pf1-f193.google.com with SMTP id x123so1888137pfc.7 for ; Tue, 15 Sep 2020 06:01:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=MsxmZcqzO6oU2083Ec8NK/i5aqIGoDCXPmpK1JpMhHU=; b=CIREYWgKG0y5bTAmfbfktZFdPjKSEU5cI0my0xWoAHXu+phapP2kzdoqZzP6oYfOp3 cb4tRG/CCuRHFZ+6hp7B2cKqVKnxHSahJWMv8b0/e5uL2bgioa+c9ywrNOFjOnuF/npb kX9duhMqixMx+wAx3eKjaFg2lHcB5/E4G8bk22aQt9Fj+yhzl8+V7cHeRpa7f4tEMwDj bluG5qNuHjNaL/6aL6rzp08uMymBXb3yObzuvpM2y/vs4+sbI+kiqkiA8b6R/BumcJGW xVf1CpmkgPa8/mBrF+zK/uzyRXEmqVRi1adqci4i1LHQ6Wq/LVLdGw4A7wkgzlMYjVxX C2Ug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=MsxmZcqzO6oU2083Ec8NK/i5aqIGoDCXPmpK1JpMhHU=; b=DzjniHhJq/tXmqadL0xF0EF8PtkxFGlfYlcDee2GVBaKWfgBUPf3usLaU2aYkjxfrD r8Tyl+oZfDGCebAGRCi4PxVdrtwMAFE1HVj5wHoIBW3Np51A07qm5MsA87Ux5J+C1d5Q 2zpxNqNh0YSjCXp2BJOuEHvibrLlH3Tse4GfYv7ORSN1aaQ+z9WQSeB1DaDv0NdF54DK FsIpQObKkB90XABmvLUOP1BgwHIUbxfekxVcv+sgYPQ4azNvjXwmxWL3Kbm3HDp0IwhG I+FVBYc2StBazP7FydwOyEFjoYRRI/4uRLV7tnTfHynkAjG259IQR1IcTT/JgTu2i/tS sGMA== X-Gm-Message-State: AOAM530hQbsk4wd+vrU4r7amjlShXYAnF64M/LDiIt4gr7cA5GPkVEcC 2KQPI1qu2XHoqSBOa68g3Tz/vw== X-Google-Smtp-Source: ABdhPJzjBmfywPW7QQgadAqZhmFowNmnURIr9fhphcku0u1eDJDwXVAKp3ksTUFTm+nEemFa1C0ZpQ== X-Received: by 2002:a63:2c44:: with SMTP id s65mr8444889pgs.210.1600174905697; Tue, 15 Sep 2020 06:01:45 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.36 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:01:45 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 10/24] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Date: Tue, 15 Sep 2020 20:59:33 +0800 Message-Id: <20200915125947.26204-11-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 25D371813F570 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When we allocate a hugetlb page from the buddy, we should free the unused vmemmap pages associated with it. We can do that in the prep_new_huge_page(). 
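For a sense of what that returns to the system per hugepage, assuming 4 KB base pages, a 64-byte struct page, and the two pages kept mapped by RESERVE_VMEMMAP_NR: roughly 6 of the 8 vmemmap pages (24 KB) for each 2 MB hugepage, and 4094 of 4096 (about 16 MB) for each 1 GB hugepage. The userspace sketch below only redoes that arithmetic; savings() and its constants are illustrative assumptions, not kernel code.

/*
 * Back-of-the-envelope for the vmemmap pages freed per hugepage,
 * assuming 4 KB base pages, 64-byte struct page, RESERVE_VMEMMAP_NR == 2.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define STRUCT_PAGE	64UL
#define RESERVED	2UL

static void savings(const char *name, unsigned long hpage_size)
{
	unsigned long vmemmap_pages = hpage_size / PAGE_SIZE * STRUCT_PAGE / PAGE_SIZE;
	unsigned long freed = vmemmap_pages - RESERVED;

	printf("%-3s hugepage: %lu of %lu vmemmap pages freed (%lu KB)\n",
	       name, freed, vmemmap_pages, freed * PAGE_SIZE / 1024);
}

int main(void)
{
	savings("2MB", 2UL << 20);	/* 6 of 8 pages,      24 KB per hugepage */
	savings("1GB", 1UL << 30);	/* 4094 of 4096,  ~16 MB per hugepage   */
	return 0;
}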
Signed-off-by: Muchun Song --- include/linux/hugetlb.h | 21 ++++ mm/hugetlb.c | 231 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 252 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index ace304a6196c..2561af2ad901 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -601,6 +601,13 @@ static inline bool arch_vmemmap_support_huge_mapping(void) } #endif +#ifndef vmemmap_pmd_huge +static inline bool vmemmap_pmd_huge(pmd_t *pmd) +{ + return pmd_huge(*pmd); +} +#endif + #ifndef VMEMMAP_HPAGE_SHIFT #define VMEMMAP_HPAGE_SHIFT PMD_SHIFT #endif @@ -790,6 +797,15 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, } #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +int handle_vmemmap_fault(unsigned long page); +#else +static inline int handle_vmemmap_fault(unsigned long page) +{ + return -EFAULT; +} +#endif + #else /* CONFIG_HUGETLB_PAGE */ struct hstate {}; @@ -943,6 +959,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr pte_t *ptep, pte_t pte, unsigned long sz) { } + +static inline int handle_vmemmap_fault(unsigned long page) +{ + return -EFAULT; +} #endif /* CONFIG_HUGETLB_PAGE */ static inline spinlock_t *huge_pte_lock(struct hstate *h, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index d6ae9b6876be..a628588a075a 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1293,10 +1293,20 @@ static inline void destroy_compound_gigantic_page(struct page *page, #endif #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +#include + #define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) #define page_huge_pte(page) ((page)->pmd_huge_pte) +#define vmemmap_hpage_addr_end(addr, end) \ +({ \ + unsigned long __boundary; \ + __boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;\ + (__boundary - 1 < (end) - 1) ? __boundary : (end); \ +}) + static inline unsigned int nr_free_vmemmap(struct hstate *h) { return h->nr_free_vmemmap_pages; @@ -1416,6 +1426,222 @@ static void __init hugetlb_vmemmap_init(struct hstate *h) pr_info("HugeTLB: can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages, h->name); } + +static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd) +{ + static DEFINE_SPINLOCK(pgtable_lock); + + return &pgtable_lock; +} + +/* + * Walk a vmemmap address to the pmd it maps. 
+ */ +static pmd_t *vmemmap_to_pmd(const void *page) +{ + unsigned long addr = (unsigned long)page; + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + if (addr < VMEMMAP_START || addr >= VMEMMAP_END) + return NULL; + + pgd = pgd_offset_k(addr); + if (pgd_none(*pgd)) + return NULL; + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + return NULL; + pud = pud_offset(p4d, addr); + + WARN_ON_ONCE(pud_bad(*pud)); + if (pud_none(*pud) || pud_bad(*pud)) + return NULL; + pmd = pmd_offset(pud, addr); + + return pmd; +} + +static inline int freed_vmemmap_hpage(struct page *page) +{ + return atomic_read(&page->_mapcount) + 1; +} + +static inline int freed_vmemmap_hpage_inc(struct page *page) +{ + return atomic_inc_return_relaxed(&page->_mapcount) + 1; +} + +static inline int freed_vmemmap_hpage_dec(struct page *page) +{ + return atomic_dec_return_relaxed(&page->_mapcount) + 1; +} + +static inline void free_vmemmap_page_list(struct list_head *list) +{ + struct page *page, *next; + + list_for_each_entry_safe(page, next, list, lru) { + list_del(&page->lru); + free_vmemmap_page(page); + } +} + +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, + unsigned long start, + unsigned int nr_free, + struct list_head *free_pages) +{ + pte_t entry = mk_pte(reuse, PAGE_KERNEL); + unsigned long addr; + unsigned long end = start + (nr_free << PAGE_SHIFT); + + for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) { + struct page *page; + pte_t old = *ptep; + + VM_WARN_ON(!pte_present(old)); + page = pte_page(old); + list_add(&page->lru, free_pages); + + set_pte_at(&init_mm, addr, ptep, entry); + } +} + +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, + unsigned long addr, + struct list_head *free_pages) +{ + unsigned long next; + unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; + unsigned long end = addr + nr_vmemmap_size(h); + struct page *reuse = NULL; + + addr = start; + do { + unsigned int nr_pages; + pte_t *ptep; + + ptep = pte_offset_kernel(pmd, addr); + if (!reuse) + reuse = pte_page(ptep[-1]); + + next = vmemmap_hpage_addr_end(addr, end); + nr_pages = (next - addr) >> PAGE_SHIFT; + __free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages, + free_pages); + } while (pmd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr) +{ + struct mm_struct *mm = &init_mm; + struct page *page; + pmd_t old_pmd, _pmd; + int i; + + /* + * Up to this point the pmd is present and huge and userland has the + * whole access to the hugepage during the split (which happens in + * place). If we overwrite the pmd with the not-huge version pointing + * to the pte here (which of course we could if all CPUs were bug + * free), userland could trigger a small page size TLB miss on the + * small sized TLB while the hugepage TLB entry is still established in + * the huge TLB. Some CPU doesn't like that. + * + * See http://support.amd.com/us/Processor_TechDocs/41322.pdf, Erratum + * 383 on page 93. Intel should be safe but is also warns that it's + * only safe if the permission and cache attributes of the two entries + * loaded in the two TLB is identical (which should be the case here). + * + * So it is generally safer to never allow small and huge TLB entries + * for the same virtual address to be loaded simultaneously. But here + * we should not set pmd non-present first and flush TLB. 
Because if + * we do that(maybe trriger IPI to other CPUs to flush TLB), we may be + * deadlocked. So we have to break the above rules. Be careful, Let us + * suppose all CPUs are bug free, otherwise, we should not enable the + * feature of freeing unused vmemmap pages on the bug CPU. + * + * Why we should not set pmd non-present first? Here we already hold + * the vmemmap pgtable spinlock on CPU1 and set pmd non-present. If + * CPU0 access the struct page with irqs disabled and the vmemmap + * pgtable lock is held by CPU1. In this case, the CPU0 can not handle + * the IPI interrupt to flush TLB because of the disabling of irqs. + * Then we can deadlock. In order to avoid this issue, we do not set + * pmd non-present. + * + * The deadlock scene is shown below. + * + * CPU0: CPU1: + * disable irqs hold the vmemmap pgtable lock + * set pmd non-present + * read/write `struct page`(page fault) + * jump to handle_vmemmap_fault + * spin for vmemmap pgtable lock + * flush_tlb(send IPI to CPU0) + * set new pmd(small page) + */ + old_pmd = READ_ONCE(*pmd); + page = pmd_page(old_pmd); + pmd_populate_kernel(mm, &_pmd, pte_p); + + for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) { + pte_t entry, *pte; + + entry = mk_pte(page + i, PAGE_KERNEL); + pte = pte_offset_kernel(&_pmd, addr); + VM_BUG_ON(!pte_none(*pte)); + set_pte_at(mm, addr, pte, entry); + } + + /* make pte visible before pmd */ + smp_wmb(); + pmd_populate_kernel(mm, pmd, pte_p); +} + +static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd) +{ + pte_t *pte_p; + unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK; + unsigned long addr = start; + + while ((pte_p = vmemmap_pgtable_withdraw(head))) { + VM_BUG_ON(freed_vmemmap_hpage(virt_to_page(pte_p))); + split_vmemmap_pmd(pmd++, pte_p, addr); + addr += VMEMMAP_HPAGE_SIZE; + } + + flush_tlb_kernel_range(start, addr); +} + +static void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + pmd_t *pmd; + spinlock_t *ptl; + LIST_HEAD(free_pages); + + if (!nr_free_vmemmap(h)) + return; + + pmd = vmemmap_to_pmd(head); + ptl = vmemmap_pmd_lockptr(pmd); + + spin_lock(ptl); + if (vmemmap_pmd_huge(pmd)) { + VM_BUG_ON(!nr_pgtable(h)); + split_vmemmap_huge_page(head, pmd); + } + + __free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages); + freed_vmemmap_hpage_inc(pmd_page(*pmd)); + spin_unlock(ptl); + + free_vmemmap_page_list(&free_pages); +} #else static inline void hugetlb_vmemmap_init(struct hstate *h) { @@ -1429,6 +1655,10 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page) static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) { } + +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} #endif static void update_and_free_page(struct hstate *h, struct page *page) @@ -1637,6 +1867,7 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + free_huge_page_vmemmap(h, page); /* Must be called before the initialization of @page->lru */ vmemmap_pgtable_free(h, page); From patchwork Tue Sep 15 12:59:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776507 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52B65746 for ; Tue, 15 Sep 2020 13:02:12 +0000 (UTC) Received: from kanga.kvack.org 
(kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D732F20684 for ; Tue, 15 Sep 2020 13:02:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="oRGDCwZR" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D732F20684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 12E2F90003E; Tue, 15 Sep 2020 09:02:09 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 0B5D890003A; Tue, 15 Sep 2020 09:02:09 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E72FE90003E; Tue, 15 Sep 2020 09:02:08 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0192.hostedemail.com [216.40.44.192]) by kanga.kvack.org (Postfix) with ESMTP id CEA5790003A for ; Tue, 15 Sep 2020 09:02:08 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 703244DBC for ; Tue, 15 Sep 2020 13:02:08 +0000 (UTC) X-FDA: 77265308736.29.coat23_39008cb27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id 6D27118085D08 for ; Tue, 15 Sep 2020 13:02:05 +0000 (UTC) X-Spam-Summary: 1,0,0,5ce87d1028e855c6,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1534:1540:1568:1711:1714:1730:1747:1777:1792:1981:2194:2196:2199:2200:2393:2559:2562:3138:3139:3140:3141:3142:3865:3866:4321:4385:5007:6120:6261:6653:6737:6738:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:12986:13069:13311:13357:13894:14096:14181:14384:14721:21080:21444:21451:21627:21990:30054,0,RBL:209.85.214.193:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04y81qtzqzjo3gug1ix5b6izx3568ypnn6nr7kxq8fh86xudnub4n7sotry7p1t.pi7s8n7uxhzurn4x9bahjgtopteskxzqh4ooe91ojx6dsx7mukgwd9argi6ggr4.4-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:26,LUA_SUMMARY:none X-HE-Tag: coat23_39008cb27111 X-Filterd-Recvd-Size: 4483 Received: from mail-pl1-f193.google.com (mail-pl1-f193.google.com [209.85.214.193]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:01:56 +0000 (UTC) Received: by mail-pl1-f193.google.com with SMTP id y6so1304874plt.9 for ; Tue, 15 Sep 2020 06:01:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1xA5Zo5Cz4t/QRWNMyKZ9/SLQQOd42+iCFRWcfd6Pao=; b=oRGDCwZRQ6h95N0FMYtWzeohLlxHf+tvqEMCR9b+bfmN66PI1duhmvDTZ8TlGCuBHs /MUW77EE7IDPPQ/0AJNYfbVfTyC4PmWmB4Iu5c//emNCMvApNE9v6mOvYVb67WYI6WbT ahYPJ8GfNIz/CzaXlUSZMFy2zV0hKJWnc2HGmocO952ivhDn594Toa/vhUHzcLL62/1q cWq59xzcwSZP1FjMei3f89Z8couHpG2Jb9pw6CjMiEmP55LSQ6Zj6vQzJWa6tvYRvoEs 
jJIQ8TWzLaWDg/v2TkZGhmpJtexiIzjwW7tJI1av5VPI6OyjBO8dKZcUVr/G9FRgLlCk wqBA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1xA5Zo5Cz4t/QRWNMyKZ9/SLQQOd42+iCFRWcfd6Pao=; b=a6gOd1szxZfPnho2+FJ9ud99O67yM6I1g7qWqchRTRWBo9/GGNWPjT7Y1wI04iXbU7 C9hqyczt8hJASn4tULxNCT+rowaFZaWIHMWnK1xkY/X4zp+3haB+Gvgnm17z9EmOefo2 mjBlZ09gxS2GeML4F/C5SF4FKWoXjRfThWbZs2xj9yOMQgoxlWe59sPxmHTr99yZn9h4 Zsnx2rVZiyU8eum5t1bCF5QTJWDM5yr1z85yGH2Gf6CAMthgmkzD8g6jjKyhGy2hTPx+ rL1OoG8NTcF07hEQO/grJo7lVt0Yp6KOpwCUjJo6pfb9YipFCqdvS6ToquAJ77N0LRhD WtCg== X-Gm-Message-State: AOAM533lZrGj85l58cHWmvaY4NFo0AllUsI22MQQL5hDHw61uTGx+flh kLreK5BuGU2EMtypbJR/Ui9+HA== X-Google-Smtp-Source: ABdhPJylXgRgPVmLfSJ8EZxe2tlUYRohlc+NN9//Q+cjPIxAl32Wi0QDL88uhLkUFkuCqwNGmR/FBA== X-Received: by 2002:a17:902:ba83:b029:d1:e5e7:be12 with SMTP id k3-20020a170902ba83b02900d1e5e7be12mr1623727pls.69.1600174915323; Tue, 15 Sep 2020 06:01:55 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.46 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:01:54 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 11/24] mm/hugetlb: Add vmemmap_pmd_huge macro for x86 Date: Tue, 15 Sep 2020 20:59:34 +0800 Message-Id: <20200915125947.26204-12-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 6D27118085D08 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Use pmd_large instead of pmd_huge on x86, so we implement the vmemmap_pmd_huge macro. 
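The vmemmap_pmd_huge override uses the usual arch-opt-in pattern: the generic header (patch 10) supplies a fallback under #ifndef vmemmap_pmd_huge, and an architecture that defines the macro beforehand wins. A tiny, self-contained illustration of that mechanism follows; my_hook is a made-up name used only to show the pattern, not anything from the series.

/*
 * Illustration of the #ifndef override pattern: the "arch" definition
 * comes first and defines the macro, so the "generic" fallback is
 * compiled out.  Hypothetical example, not kernel code.
 */
#include <stdio.h>
#include <stdbool.h>

/* "arch" header: opt-in override */
#define my_hook my_hook
static inline bool my_hook(void) { return true; }

/* "generic" header: used only when the arch did not override */
#ifndef my_hook
static inline bool my_hook(void) { return false; }
#endif

int main(void)
{
	printf("my_hook() = %d\n", my_hook());	/* prints 1: arch version used */
	return 0;
}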
Signed-off-by: Muchun Song --- arch/x86/include/asm/hugetlb.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h index f5e882f999cd..7c3eb60c2198 100644 --- a/arch/x86/include/asm/hugetlb.h +++ b/arch/x86/include/asm/hugetlb.h @@ -4,10 +4,17 @@ #include #include +#include #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP #define VMEMMAP_HPAGE_SHIFT PMD_SHIFT #define arch_vmemmap_support_huge_mapping() boot_cpu_has(X86_FEATURE_PSE) + +#define vmemmap_pmd_huge vmemmap_pmd_huge +static inline bool vmemmap_pmd_huge(pmd_t *pmd) +{ + return pmd_large(*pmd); +} #endif #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE) From patchwork Tue Sep 15 12:59:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776509 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7092B746 for ; Tue, 15 Sep 2020 13:02:20 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2202520684 for ; Tue, 15 Sep 2020 13:02:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="kDUWUa2Z" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2202520684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 244BB90003F; Tue, 15 Sep 2020 09:02:17 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 1C88E90003A; Tue, 15 Sep 2020 09:02:17 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0450690003F; Tue, 15 Sep 2020 09:02:16 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0168.hostedemail.com [216.40.44.168]) by kanga.kvack.org (Postfix) with ESMTP id E022090003A for ; Tue, 15 Sep 2020 09:02:16 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 9C33D45C1 for ; Tue, 15 Sep 2020 13:02:16 +0000 (UTC) X-FDA: 77265309072.09.kick89_3108ecd27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 69CE7180AD815 for ; Tue, 15 Sep 2020 13:02:11 +0000 (UTC) X-Spam-Summary: 13,1.2,0,970b3b052b795404,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:965:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1544:1605:1711:1730:1747:1777:1792:2196:2198:2199:2200:2393:2559:2562:2693:2731:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:4119:4321:4385:4390:4395:4605:5007:6120:6261:6653:6737:6738:8660:9010:9592:10008:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:12986:13148:13230:13894:14110:14181:14721:21080:21433:21444:21451:21627:21740:21939:21990:30054,0,RBL:209.85.215.196:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 
66.100.201.201;04y8tryzcdqiizqpbkzj6peie74a1ocgwa5bowryw7rn1s9ctm7dppndz8nw6zg.1ehz168ibudy794r5jkxn7h6p3wny9ud5ptrdej5sjfg6sitrr7u9morkd5y1b8.4-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:1:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: kick89_3108ecd27111 X-Filterd-Recvd-Size: 8265 Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by imf13.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:05 +0000 (UTC) Received: by mail-pg1-f196.google.com with SMTP id k14so1940102pgi.9 for ; Tue, 15 Sep 2020 06:02:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=bRCYHUYk1hH5GlJBxp/yuwD+jDdbOfhVtaE95IbYmOc=; b=kDUWUa2ZT8b/MvJ9cAxjIpRzubPelB4YMWzv/FPUiKvVXnR2wgLHRyJPlBTx2dFGPx WdHqq5NcPH25dOl0mZSnquMGz/e7PlX7cC42O/ZFcuDzbaB1biKNjfd1tzk2Oc6YgTFa d9OrrkV9EnSfmyUvBL+Vagbnl6bIxkyZtvYke2r2FbE6JYYwapT5RlbfRUhSJ53rXePP x0HwhaY92MKbHhHjIYRctLxb0cf76tXco75+M5/SFjZJHxPya0IEKclVXhAXblN2hOfT tJTZ6mqsoBnPfVTUIATM/SZDuHHl8VcDFX/HoR/H8KcZZSIb8UOudCvKwJqbipmZCQHD ZKug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=bRCYHUYk1hH5GlJBxp/yuwD+jDdbOfhVtaE95IbYmOc=; b=cue7e8EBGxNI75+uXUmLk7jWBYV3R8auFQHzJSwgfhL1MuvrB0CuVnx7U3e2rlZAx7 dEfK4KTTCE5uPGW24xsOxY2t/qJmVle22lCZ8ya7Wg4d6PUFlPUEWE0Y61iQ2zdqZbUI Ql7h8yuJ7WQp2kwpNAg4jHkqEtiBSWcIK0TeXVxAgykjc/NTe3SE919u8Poz/c2XbsXx 66DFmOLssmUGqVD1DfN8e0S6zE33BHW+TvBvIzJzHLquDiCpqcDML8Myzg9TjU7LKoXI X6gsl0WEP7E4Ce51WMs7n9yzeGuDIBoraYcRNDa9lAGpDs7q2/Bh11tZjF5cpZDGJau6 1geQ== X-Gm-Message-State: AOAM532nrmTsN8iXCxRv5V549IFG2CtwmY0bs2033jjTxJihA152/A99 t+vc35BQXeLlt0kXYG1B9IWnpw== X-Google-Smtp-Source: ABdhPJxenPAUPU06D9i1v/P969ZaM4N7XdPUb9tgmOwsHIoNMdA/XIJz1ZOBc23yyjDVGrDlS8zHiA== X-Received: by 2002:a62:178d:0:b029:13e:d13d:a0f8 with SMTP id 135-20020a62178d0000b029013ed13da0f8mr18090356pfx.20.1600174924831; Tue, 15 Sep 2020 06:02:04 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.01.55 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:04 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 12/24] mm/hugetlb: Defer freeing of hugetlb pages Date: Tue, 15 Sep 2020 20:59:35 +0800 Message-Id: <20200915125947.26204-13-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 69CE7180AD815 X-Spamd-Result: default: False [0.00 / 100.00] 
X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In a subsequent patch, we will allocate the vmemmap pages when freeing huge pages. But update_and_free_page() can be called from a non-task context (and with hugetlb_lock held), so defer the actual freeing to a workqueue to avoid having to use GFP_ATOMIC to allocate the vmemmap pages. Signed-off-by: Muchun Song --- mm/hugetlb.c | 94 +++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 85 insertions(+), 9 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a628588a075a..6b57a1183785 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1292,6 +1292,8 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif +static void __free_hugepage(struct hstate *h, struct page *page); + #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP #include @@ -1642,6 +1644,64 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) free_vmemmap_page_list(&free_pages); } + +/* + * As update_and_free_page() can be called from a non-task context (and + * with hugetlb_lock held), we defer the actual freeing to a workqueue to + * avoid having to use GFP_ATOMIC to allocate a lot of vmemmap pages. + * + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of + * pages to be freed and frees them one-by-one. As the page->mapping pointer + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is + * reused as the llist_node structure of a lockless linked list of huge + * pages to be freed. + */ +static LLIST_HEAD(hpage_update_freelist); + +static void update_hpage_vmemmap_workfn(struct work_struct *work) +{ + struct llist_node *node; + struct page *page; + + node = llist_del_all(&hpage_update_freelist); + + while (node) { + page = container_of((struct address_space **)node, + struct page, mapping); + node = node->next; + page->mapping = NULL; + __free_hugepage(page_hstate(page), page); + + cond_resched(); + } +} +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn); + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + /* No need to allocate vmemmap pages */ + if (!nr_free_vmemmap(h)) { + __free_hugepage(h, page); + return; + } + + /* + * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap + * pages. + * + * Only call schedule_work() if hpage_update_freelist was previously + * empty. Otherwise, schedule_work() has already been called but the + * workfn hasn't retrieved the list yet. + */ + if (llist_add((struct llist_node *)&page->mapping, + &hpage_update_freelist)) + schedule_work(&hpage_update_work); +} + +static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) +{ + free_gigantic_page(page, huge_page_order(h)); +} #else static inline void hugetlb_vmemmap_init(struct hstate *h) { @@ -1659,17 +1719,39 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + __free_hugepage(h, page); +} + +static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) +{ + /* + * Temporarily drop the hugetlb_lock, because + * we might block in free_gigantic_page().
+ */ + spin_unlock(&hugetlb_lock); + free_gigantic_page(page, huge_page_order(h)); + spin_lock(&hugetlb_lock); +} #endif static void update_and_free_page(struct hstate *h, struct page *page) { - int i; - if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) return; h->nr_huge_pages--; h->nr_huge_pages_node[page_to_nid(page)]--; + + __update_and_free_page(h, page); +} + +static void __free_hugepage(struct hstate *h, struct page *page) +{ + int i; + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | @@ -1681,14 +1763,8 @@ static void update_and_free_page(struct hstate *h, struct page *page) set_compound_page_dtor(page, NULL_COMPOUND_DTOR); set_page_refcounted(page); if (hstate_is_gigantic(h)) { - /* - * Temporarily drop the hugetlb_lock, because - * we might block in free_gigantic_page(). - */ - spin_unlock(&hugetlb_lock); destroy_compound_gigantic_page(page, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); - spin_lock(&hugetlb_lock); + free_gigantic_page_comm(h, page); } else { __free_pages(page, huge_page_order(h)); } From patchwork Tue Sep 15 12:59:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776517 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A063F746 for ; Tue, 15 Sep 2020 13:02:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 53D8620829 for ; Tue, 15 Sep 2020 13:02:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="iM2XomGq" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 53D8620829 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D1482900042; Tue, 15 Sep 2020 09:02:43 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id CEB20900041; Tue, 15 Sep 2020 09:02:43 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C009B900042; Tue, 15 Sep 2020 09:02:43 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0182.hostedemail.com [216.40.44.182]) by kanga.kvack.org (Postfix) with ESMTP id A66CD900041 for ; Tue, 15 Sep 2020 09:02:43 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 49E2645CD for ; Tue, 15 Sep 2020 13:02:43 +0000 (UTC) X-FDA: 77265310206.17.bite39_5004c5927111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin17.hostedemail.com (Postfix) with ESMTP id 06F6D1802EC04 for ; Tue, 15 Sep 2020 13:02:27 +0000 (UTC) X-Spam-Summary: 
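Patch 12 above defers the real freeing through a lockless list plus a workqueue, and only calls schedule_work() when the list goes from empty to non-empty. The following standalone C program is a rough userspace analogue of that pattern (compile with -pthread; all names are invented, and C11 atomics plus a pthread stand in for the kernel's llist_add()/llist_del_all() and workqueue APIs), not the kernel implementation itself:

/* deferred_free.c - userspace analogue of the llist + workqueue deferral. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
        struct node *next;
        int id;                         /* stands in for one huge page */
};

static _Atomic(struct node *) freelist = NULL;

/*
 * Lock-free push; returns 1 if the list was previously empty, mirroring
 * the llist_add() return value that gates schedule_work().
 */
static int push(struct node *n)
{
        struct node *old = atomic_load(&freelist);

        do {
                n->next = old;
        } while (!atomic_compare_exchange_weak(&freelist, &old, n));

        return old == NULL;
}

/*
 * Worker: grab the whole list at once (like llist_del_all()) and free the
 * entries one by one, outside the producer's restricted context.
 */
static void *worker(void *arg)
{
        struct node *n = atomic_exchange(&freelist, NULL);

        (void)arg;
        while (n) {
                struct node *next = n->next;

                printf("deferred free of page %d\n", n->id);
                free(n);
                n = next;
        }
        return NULL;
}

int main(void)
{
        pthread_t tid;
        int need_worker = 0;

        for (int i = 0; i < 4; i++) {
                struct node *n = malloc(sizeof(*n));

                n->id = i;
                /*
                 * In the kernel, schedule_work() is called only when the
                 * list was previously empty; later producers piggy-back on
                 * the already scheduled work item.
                 */
                if (push(n))
                        need_worker = 1;
        }

        if (need_worker) {
                pthread_create(&tid, NULL, worker, NULL);
                pthread_join(tid, NULL);
        }
        return 0;
}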
1,0,0,fbeb5d4a0bd787d9,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1544:1605:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3868:3870:3871:3872:3874:4118:4321:4385:4605:5007:6119:6120:6261:6653:6737:6738:7875:7903:9010:9036:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:13255:13894:14110:14181:14721:21080:21094:21323:21433:21444:21451:21627:21740:21990:30054,0,RBL:209.85.214.196:@bytedance.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100;04yfif6a5chhq4zetuw8gtxqbo4tzypk9np678j6633jbgedeuserxjrwhut1p4.pegn4eub44hgoi77ac7qdfk7xw5a3tj59cgu5szaihxac57bg99ifnsmbac9yk1.y-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: bite39_5004c5927111 X-Filterd-Recvd-Size: 7846 Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com [209.85.214.196]) by imf09.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:15 +0000 (UTC) Received: by mail-pl1-f196.google.com with SMTP id bg9so1306955plb.2 for ; Tue, 15 Sep 2020 06:02:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OOFE3ifzi+xdbou1BhqWX71JRgXgTkLEdNFFfTKs9Ws=; b=iM2XomGqOr6vav8SWts2kgwO4r/SIQ083A9ET2quErQScZx633RbxxWH96OcSipOCX etQ478kIKVl/JnnzUUXMu3YnhVVlfmFGIVtMHJq29YxXILuRzbyJ1uFWpeQceXlCD2jY 2C8sffIHvnr9qXcW2LF2qZq5Pch7iEPoKVNf0iQN9oB65y8591n68j7Gx7P9aC+5oy5/ 0fARwEqyJb+m50XkT0C+gNc8xvY+Y6xGbV8aOdsEbsr459+QDdAm/301ctcQiofQjW0B CVN02/XUZPC62uuxxbvHTWp73+iVGDgOgq6+Ou7T5fY9BuvDRNA69A/2fWIPmUNlBxht 6ShA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OOFE3ifzi+xdbou1BhqWX71JRgXgTkLEdNFFfTKs9Ws=; b=QOFv43zQ1SxstLaLY9hMViGpOydLn2yOJlQlJfPvOMP+3Le4MgGbiA4zfFouC10LJo scsY+ntWoDKCqL87WkqDeXMX0UFtAwP5Gn3dgEzAtR3DWFCozhszrA2pve7DamOiQWQn EML0JSot5OzKLVSOhTNuw9/CAiUmC9+ej7kFK44XUIkM8XeChz8OsjQy+AEVDQppo2wA NRhYLei6RwB2o/exqycT3MYyaCuulftwjtPMvy7i3IeEUkQOLQAaHcI3aNpUButthC5M v6rIcK5t/TvdpQggg7pg3pU+VRsjm01OZWiABDpY7+JPev1bLkCkb3z2NYtqTS9Zt+iS 3ryA== X-Gm-Message-State: AOAM533X3MQ3EyVhYpQHFPxacSPPvfJeBmfmfGWOnLWvVHxCIZv5t73Y PktUGQ4p9B+IUZGAgSEaVy2JRw== X-Google-Smtp-Source: ABdhPJxUfGHzGSJNJof5A7RFUKJlSlo67Ou73YEyK8suZijN2KyyyGZq9MsiZY6eorIYIvYj8Hz5Qw== X-Received: by 2002:a17:90a:d3c2:: with SMTP id d2mr4029408pjw.112.1600174934990; Tue, 15 Sep 2020 06:02:14 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.05 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:14 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: 
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 13/24] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Date: Tue, 15 Sep 2020 20:59:36 +0800 Message-Id: <20200915125947.26204-14-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 06F6D1802EC04 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When we free a hugetlb page to the buddy, we should allocate the vmemmap pages associated with it. We can do that in the __free_hugepage(). Signed-off-by: Muchun Song --- mm/hugetlb.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 108 insertions(+) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6b57a1183785..d0f09fe531fc 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1299,6 +1299,7 @@ static void __free_hugepage(struct hstate *h, struct page *page); #define RESERVE_VMEMMAP_NR 2U #define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) +#define GFP_VMEMMAP_PAGE (GFP_KERNEL | __GFP_NOFAIL | __GFP_MEMALLOC) #define page_huge_pte(page) ((page)->pmd_huge_pte) @@ -1645,6 +1646,107 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) free_vmemmap_page_list(&free_pages); } +static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, + unsigned long start, + unsigned int nr_remap, + struct list_head *remap_pages) +{ + void *from = (void *)page_private(reuse); + unsigned long addr, end = start + (nr_remap << PAGE_SHIFT); + + for (addr = start; addr < end; addr += PAGE_SIZE) { + void *to; + struct page *page; + pte_t entry, old = *ptep; + + page = list_first_entry_or_null(remap_pages, struct page, lru); + list_del(&page->lru); + to = page_to_virt(page); + copy_page(to, from); + + /* + * Make sure that any data that writes to the @to is made + * visible to the physical page. 
+ */ + flush_kernel_vmap_range(to, PAGE_SIZE); + + prepare_vmemmap_page(page); + + entry = mk_pte(page, PAGE_KERNEL); + set_pte_at(&init_mm, addr, ptep++, entry); + + VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse); + } +} + +static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, + unsigned long addr, + struct list_head *remap_pages) +{ + unsigned long next; + unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; + unsigned long end = addr + nr_vmemmap_size(h); + struct page *reuse = NULL; + + addr = start; + do { + unsigned int nr_pages; + pte_t *ptep; + + ptep = pte_offset_kernel(pmd, addr); + if (!reuse) { + reuse = pte_page(ptep[-1]); + set_page_private(reuse, addr - PAGE_SIZE); + } + + next = vmemmap_hpage_addr_end(addr, end); + nr_pages = (next - addr) >> PAGE_SHIFT; + __remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages, + remap_pages); + } while (pmd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + +static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) +{ + int i; + + for (i = 0; i < nr_free_vmemmap(h); i++) { + struct page *page; + + /* This should not fail */ + page = alloc_page(GFP_VMEMMAP_PAGE); + list_add_tail(&page->lru, list); + } +} + +static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + pmd_t *pmd; + spinlock_t *ptl; + LIST_HEAD(remap_pages); + + if (!nr_free_vmemmap(h)) + return; + + alloc_vmemmap_pages(h, &remap_pages); + + pmd = vmemmap_to_pmd(head); + ptl = vmemmap_pmd_lockptr(pmd); + + spin_lock(ptl); + __remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, + &remap_pages); + if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) { + /* + * Todo: + * Merge pte to huge pmd if it has ever been split. + */ + } + spin_unlock(ptl); +} + /* * As update_and_free_page() is be called from a non-task context(and hold * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent @@ -1720,6 +1822,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } +static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} + static inline void __update_and_free_page(struct hstate *h, struct page *page) { __free_hugepage(h, page); @@ -1752,6 +1858,8 @@ static void __free_hugepage(struct hstate *h, struct page *page) { int i; + alloc_huge_page_vmemmap(h, page); + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | From patchwork Tue Sep 15 12:59:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776515 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CB6A214B7 for ; Tue, 15 Sep 2020 13:02:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8807E20684 for ; Tue, 15 Sep 2020 13:02:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="K6eEVu6p" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8807E20684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass 
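Patch 13 above repopulates the previously freed vmemmap range by allocating fresh pages (with __GFP_NOFAIL) and filling each one with a copy of the single "reuse" page before wiring it back in with set_pte_at(). A minimal userspace sketch of just the allocate-and-copy step follows (ordinary heap memory instead of vmemmap pages; names and sizes are illustrative only):

/* remap_sketch.c - the "copy the reuse page into fresh pages" step. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NR_REMAP  6             /* pages to re-populate for one huge page */

int main(void)
{
        /*
         * The single "reuse" page whose contents every re-established page
         * inherits, like the page recorded via page_private(reuse) above.
         */
        unsigned char reuse[PAGE_SIZE];
        unsigned char *vmemmap[NR_REMAP];

        memset(reuse, 0xab, sizeof(reuse));

        for (int i = 0; i < NR_REMAP; i++) {
                /* alloc_vmemmap_pages(): one backing page per slot; the
                 * kernel uses __GFP_NOFAIL, here we just bail out. */
                vmemmap[i] = malloc(PAGE_SIZE);
                if (!vmemmap[i]) {
                        perror("malloc");
                        return 1;
                }
                /* __remap_huge_page_pte_vmemmap(): copy_page() from the
                 * reuse page; the kernel then installs the PTE. */
                memcpy(vmemmap[i], reuse, PAGE_SIZE);
        }

        printf("repopulated %d pages from the reuse page\n", NR_REMAP);
        for (int i = 0; i < NR_REMAP; i++)
                free(vmemmap[i]);
        return 0;
}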
smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5C1D290003A; Tue, 15 Sep 2020 09:02:38 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 54A21900041; Tue, 15 Sep 2020 09:02:38 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3CE0690003A; Tue, 15 Sep 2020 09:02:38 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id 26DCB90003A for ; Tue, 15 Sep 2020 09:02:38 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id D63698249980 for ; Tue, 15 Sep 2020 13:02:37 +0000 (UTC) X-FDA: 77265309954.29.eyes32_000b21727111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id AD7A118085D1F for ; Tue, 15 Sep 2020 13:02:34 +0000 (UTC) X-Spam-Summary: 1,0,0,973dd846258aa7bd,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:69:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1544:1605:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3868:4119:4385:4605:5007:6119:6120:6261:6653:6737:6738:7875:9010:9036:9592:10004:11026:11473:11658:11914:12043:12048:12114:12291:12296:12297:12438:12517:12519:12555:12683:12895:13161:13229:13894:14096:14110:14181:14721:21080:21444:21451:21627:21990:30012:30054,0,RBL:209.85.216.65:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04ygx85zmyw45kh3heq3ic84skfz1opbhforowmnab9kjw46ehrjctzi7jcnuez.upggszd95kdbx7gncwtqksdpk6nunppefk4q7ajjf8n4csjk5zg3s3qu543jz7x.g-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: eyes32_000b21727111 X-Filterd-Recvd-Size: 8127 Received: from mail-pj1-f65.google.com (mail-pj1-f65.google.com [209.85.216.65]) by imf13.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:24 +0000 (UTC) Received: by mail-pj1-f65.google.com with SMTP id b17so1687097pji.1 for ; Tue, 15 Sep 2020 06:02:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OkXzMD+NhY0rrv1EIRPbwCaZ1TTYsXnyGu+SkY1Mf24=; b=K6eEVu6polJsOytMXuHPIjuG3TFrs/6N5WUI0ZBBKC74qkO0sruMQ1T0TpXQbjcveW V7Yf5dcMI0SKa3/DAk/anYDfj6yRV/HTjUOOA0qbT6abv0okqUkNWwawOXp3DFhTL32d fb5oq5SdPT/cZ2/LBiidsxUx3rZYPzTCJqdYbiRJLpjiUKtcc61zGRGaMVYN7Yq2AysS mW5AmVG93RK3f5VeskZafXZB5UX+R/WEaQYOF+2nyqVZuNKbI11lQYyoWv/7aESzFVzI MXEbmt9TxtVJeKdqVeVu3/roaQT+6AT3rwfdNvJUfm5RkEXUO2l5GTdkRRl6H0/8Hhkc wQkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OkXzMD+NhY0rrv1EIRPbwCaZ1TTYsXnyGu+SkY1Mf24=; b=q8zBf3rYBUKwmCufGk84BOWjqra38HUDLxQJmF9/W5gtSI45d+bWPmWAVChQ63AZXv eflO2t0n8ll0EaKIc87B1WMcPMDbglqxL+6B+CejDUhRI7MLHnupGAeQJ2uaXZnUgzp4 2QovGSGEiI8Be91ZP2jBRab7Mx/kYCN+whDmDbArPQRYYr4MvVdQ/uQIUuQoYbbWDym1 
FglrviNI2pTEdH+GLuqWMdTTZ/OshhTbU4uYyhijI3S3UP8J6EWGXCFqOB8Yf5hy8QAu HLLLcyR+l06eo7McMTT5vvUKT1S14pLyMFenJws/auQtUsi97k+Eu4jaSPbtVzgyXBLt NK0w== X-Gm-Message-State: AOAM532gCPMdmEkyuelBhZx8KJgIJZ7hqh+YuxDfRK1kW2q4l508JE0p A7K3i9MLZT6eB1fNhvf9jbPWYA== X-Google-Smtp-Source: ABdhPJzuCDVnx9gwW0vptu4yfb3717i/f+89GR/L7KYBny+0fPn83Ui2UVeaXboaZe1vqz4P5kT+mQ== X-Received: by 2002:a17:902:7896:b029:d0:b9dd:edae with SMTP id q22-20020a1709027896b02900d0b9ddedaemr18610201pll.0.1600174944140; Tue, 15 Sep 2020 06:02:24 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.15 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:23 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 14/24] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Date: Tue, 15 Sep 2020 20:59:37 +0800 Message-Id: <20200915125947.26204-15-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: AD7A118085D1F X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The __free_huge_page_pmd_vmemmap and __remap_huge_page_pmd_vmemmap are almost the same code. So introduce remap_free_huge_page_pmd_vmemmap helper to simplify the code. 
Signed-off-by: Muchun Song --- mm/hugetlb.c | 98 +++++++++++++++++++++------------------------------- 1 file changed, 39 insertions(+), 59 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index d0f09fe531fc..5cc796dc3a0a 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1482,6 +1482,41 @@ static inline int freed_vmemmap_hpage_dec(struct page *page) return atomic_dec_return_relaxed(&page->_mapcount) + 1; } +typedef void (*remap_pte_fn)(struct page *reuse, pte_t *ptep, + unsigned long start, unsigned int nr_pages, + struct list_head *pages); + +static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, + unsigned long addr, + struct list_head *pages, + remap_pte_fn remap_fn) +{ + unsigned long next; + unsigned long start = addr + RESERVE_VMEMMAP_SIZE; + unsigned long end = addr + nr_vmemmap_size(h); + struct page *reuse = NULL; + + flush_cache_vunmap(start, end); + + addr = start; + do { + unsigned int nr_pages; + pte_t *ptep; + + ptep = pte_offset_kernel(pmd, addr); + if (!reuse) { + reuse = pte_page(ptep[-1]); + set_page_private(reuse, addr - PAGE_SIZE); + } + + next = vmemmap_hpage_addr_end(addr, end); + nr_pages = (next - addr) >> PAGE_SHIFT; + remap_fn(reuse, ptep, addr, nr_pages, pages); + } while (pmd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + static inline void free_vmemmap_page_list(struct list_head *list) { struct page *page, *next; @@ -1513,33 +1548,6 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, } } -static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, - unsigned long addr, - struct list_head *free_pages) -{ - unsigned long next; - unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; - unsigned long end = addr + nr_vmemmap_size(h); - struct page *reuse = NULL; - - addr = start; - do { - unsigned int nr_pages; - pte_t *ptep; - - ptep = pte_offset_kernel(pmd, addr); - if (!reuse) - reuse = pte_page(ptep[-1]); - - next = vmemmap_hpage_addr_end(addr, end); - nr_pages = (next - addr) >> PAGE_SHIFT; - __free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages, - free_pages); - } while (pmd++, addr = next, addr != end); - - flush_tlb_kernel_range(start, end); -} - static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr) { struct mm_struct *mm = &init_mm; @@ -1639,7 +1647,8 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) split_vmemmap_huge_page(head, pmd); } - __free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages); + remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages, + __free_huge_page_pte_vmemmap); freed_vmemmap_hpage_inc(pmd_page(*pmd)); spin_unlock(ptl); @@ -1679,35 +1688,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, } } -static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd, - unsigned long addr, - struct list_head *remap_pages) -{ - unsigned long next; - unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE; - unsigned long end = addr + nr_vmemmap_size(h); - struct page *reuse = NULL; - - addr = start; - do { - unsigned int nr_pages; - pte_t *ptep; - - ptep = pte_offset_kernel(pmd, addr); - if (!reuse) { - reuse = pte_page(ptep[-1]); - set_page_private(reuse, addr - PAGE_SIZE); - } - - next = vmemmap_hpage_addr_end(addr, end); - nr_pages = (next - addr) >> PAGE_SHIFT; - __remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages, - remap_pages); - } while (pmd++, addr = next, addr != end); - - flush_tlb_kernel_range(start, end); -} - static inline 
void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) { int i; @@ -1736,8 +1716,8 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) ptl = vmemmap_pmd_lockptr(pmd); spin_lock(ptl); - __remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, - &remap_pages); + remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages, + __remap_huge_page_pte_vmemmap); if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) { /* * Todo: From patchwork Tue Sep 15 12:59:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776519 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DA86714B7 for ; Tue, 15 Sep 2020 13:02:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 98D3920684 for ; Tue, 15 Sep 2020 13:02:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="gHURAa2q" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 98D3920684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 28BD7900043; Tue, 15 Sep 2020 09:02:48 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 23C2B900041; Tue, 15 Sep 2020 09:02:48 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0B403900043; Tue, 15 Sep 2020 09:02:48 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0212.hostedemail.com [216.40.44.212]) by kanga.kvack.org (Postfix) with ESMTP id EA127900041 for ; Tue, 15 Sep 2020 09:02:47 -0400 (EDT) Received: from smtpin04.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id A3DA33646 for ; Tue, 15 Sep 2020 13:02:47 +0000 (UTC) X-FDA: 77265310374.04.wish33_1413c8d27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin04.hostedemail.com (Postfix) with ESMTP id AA89F800BAB8 for ; Tue, 15 Sep 2020 13:02:44 +0000 (UTC) X-Spam-Summary: 1,0,0,dea4ece0a9724511,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1542:1711:1730:1747:1777:1792:2196:2199:2393:2553:2559:2562:3138:3139:3140:3141:3142:3353:3865:3867:3871:3872:3874:4321:4385:5007:6119:6120:6261:6653:6737:6738:7903:8957:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12895:13894:14096:14181:14721:21080:21094:21323:21444:21451:21627:21990:30054:30090,0,RBL:209.85.210.194:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04ygbnbswnrco8qtykfo5zzxoy1k1ypsmwb74xhuchodr4raxqbdy1hff54yizb.th5qk9k9qu8bbtmnfitdykeqh8jinxacy178zw5hdbtyah3wfuqbw8phx5oftgw.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:27,LUA_SUMMARY:none X-HE-Tag: wish33_1413c8d27111 
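Patch 14 above folds the two nearly identical PMD walkers into one remap_huge_page_pmd_vmemmap() that takes a remap_pte_fn callback and walks the range in PMD-sized chunks via vmemmap_hpage_addr_end(). The sketch below shows the same walk-plus-callback shape in plain userspace C (addresses, chunk size, and callback names are invented; it does not touch real page tables):

/* walker_sketch.c - a range walker parameterised by a callback. */
#include <stdio.h>

#define PAGE_SHIFT 12
#define HPAGE_SIZE (1UL << 21)          /* one PMD-mapped chunk */

/* Analogue of vmemmap_hpage_addr_end(): next PMD boundary, clamped to end. */
static unsigned long chunk_end(unsigned long addr, unsigned long end)
{
        unsigned long boundary = (addr + HPAGE_SIZE) & ~(HPAGE_SIZE - 1);

        return boundary < end ? boundary : end;
}

/* Analogue of remap_pte_fn: what to do with one run of small pages. */
typedef void (*range_fn)(unsigned long start, unsigned long nr_pages);

static void walk_range(unsigned long addr, unsigned long end, range_fn fn)
{
        unsigned long next;

        do {
                next = chunk_end(addr, end);
                fn(addr, (next - addr) >> PAGE_SHIFT);
        } while (addr = next, addr != end);
}

static void do_free(unsigned long start, unsigned long nr)
{
        printf("free  %2lu pages at 0x%lx\n", nr, start);
}

static void do_remap(unsigned long start, unsigned long nr)
{
        printf("remap %2lu pages at 0x%lx\n", nr, start);
}

int main(void)
{
        unsigned long start = 0x1ff000UL;       /* just below a 2 MiB boundary */
        unsigned long end   = start + (16UL << PAGE_SHIFT);

        walk_range(start, end, do_free);        /* free_huge_page_vmemmap() path */
        walk_range(start, end, do_remap);       /* alloc_huge_page_vmemmap() path */
        return 0;
}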
X-Filterd-Recvd-Size: 5572 Received: from mail-pf1-f194.google.com (mail-pf1-f194.google.com [209.85.210.194]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:35 +0000 (UTC) Received: by mail-pf1-f194.google.com with SMTP id f18so1873978pfa.10 for ; Tue, 15 Sep 2020 06:02:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=SVV3WTmK6XA7LtxrkcMYqgHDOYcKykPcvtWqvODpAvM=; b=gHURAa2qwURYUv/ja5UoRzgyS1wkJkFAoxB36PqJjiqmVMoqDqHbzyNT1qlRaW9R9w FFwUBjf79ioO3vM8rMphZORI5NfLer+NOcsdXapnm3limLJGlHbEIUIJEfLqoWkMoNsv Xj3zBK/49fR4GZYSNgUft4ZoNumz1JNrGmeyCzNSWArndSMd5B2QPwYll2w3SdctsHuw Q2a+6wpYZYqAqe+r8BFxUtd60lFitcBA2aCxd9pFA0Mnz3rRX7EDutZSxJAcUaenBr8L B0Mm+xGXz3o/8hk8vLZqqA6TGfPoDAn5cHutkKLV5C2gjF/HoxkxihfB8ltGatni1xkf aOGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SVV3WTmK6XA7LtxrkcMYqgHDOYcKykPcvtWqvODpAvM=; b=D3nEYawpiXIhnTtsJFinZlmYisj4S5iLL1YqpgSfzoIL/XB3vBSZG9xurZcv3FpsDC hSbAM4oha06Utc5nfxD7Z6PE2yeXBChefiS2iWIxSroJX3UZfUnRmwrK1wKxNYSDyxEK D9n4FR5LHloFHnhpEEGwQnQNf3hq5aeLfVWOk8IgIWjt2KhSJWbLqgxQt0TkDyMK5Fow 34nppKr47ncHfei3/5X1AHowjijFRo71xCo6qwfN8ng+Ja0vay7uQMZQjbnJJ/733KLW ZenC57tvLJSkEJAdhEWmUx2odaYUT4Co0ACR9LHTOkUXuFzR6iphiE4PzZCf9XLU0sGJ knvg== X-Gm-Message-State: AOAM531JfpqtS/ByPleQlnG6fnKX7r5KGOpAGMx+jyyfLya07jvkHiO7 o44HismiY5wXsuHYLAxywg+u6w== X-Google-Smtp-Source: ABdhPJzvhvBb63cFKpGYA1g3UNc9aZPmEibZEVeHZ1Quj26JsNItVJjEgrqQJPDwM+Q4GgdbshBhhg== X-Received: by 2002:a63:4d5b:: with SMTP id n27mr14337711pgl.360.1600174954601; Tue, 15 Sep 2020 06:02:34 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.24 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:34 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 15/24] mm/hugetlb: Use PG_slab to indicate split pmd Date: Tue, 15 Sep 2020 20:59:38 +0800 Message-Id: <20200915125947.26204-16-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: AA89F800BAB8 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When we allocate hugetlb page from buddy, we may need split huge pmd to pte. When we free the hugetlb page, we can merge pte to pmd. So we need to distinguish whether the previous pmd has been split. 
The page table is not allocated from slab. So we can reuse the PG_slab to indicate that the pmd has been split. Signed-off-by: Muchun Song --- mm/hugetlb.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 5cc796dc3a0a..c42c27a12df2 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1629,6 +1629,25 @@ static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd) flush_tlb_kernel_range(start, addr); } +static inline bool pmd_split(pmd_t *pmd) +{ + return PageSlab(pmd_page(*pmd)); +} + +static inline void set_pmd_split(pmd_t *pmd) +{ + /* + * We should not use slab for page table allocation. So we can set + * PG_slab to indicate that the pmd has been split. + */ + __SetPageSlab(pmd_page(*pmd)); +} + +static inline void clear_pmd_split(pmd_t *pmd) +{ + __ClearPageSlab(pmd_page(*pmd)); +} + static void free_huge_page_vmemmap(struct hstate *h, struct page *head) { pmd_t *pmd; @@ -1645,6 +1664,7 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head) if (vmemmap_pmd_huge(pmd)) { VM_BUG_ON(!nr_pgtable(h)); split_vmemmap_huge_page(head, pmd); + set_pmd_split(pmd); } remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages, @@ -1718,11 +1738,12 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) spin_lock(ptl); remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages, __remap_huge_page_pte_vmemmap); - if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) { + if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) { /* * Todo: * Merge pte to huge pmd if it has ever been split. */ + clear_pmd_split(pmd); } spin_unlock(ptl); } From patchwork Tue Sep 15 12:59:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776521 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D2BF714B7 for ; Tue, 15 Sep 2020 13:02:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 86FAC20684 for ; Tue, 15 Sep 2020 13:02:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="cTyfEBaT" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 86FAC20684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6D2B6900044; Tue, 15 Sep 2020 09:02:57 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 6A7EE900041; Tue, 15 Sep 2020 09:02:57 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 56F7A900044; Tue, 15 Sep 2020 09:02:57 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0217.hostedemail.com [216.40.44.217]) by kanga.kvack.org (Postfix) with ESMTP id 388FE900041 for ; Tue, 15 Sep 2020 09:02:57 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with 
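Patch 15 above records "this PMD has been split" by setting PG_slab on the page-table page, which is safe only because page tables never come from the slab allocator, so that flag is otherwise unused there. The toy program below shows the same trick of borrowing a spare flag bit (struct and bit names are invented; PG_SPARE_BIT merely stands in for PG_slab):

/* split_flag_sketch.c - recording split state in a spare flag bit. */
#include <stdbool.h>
#include <stdio.h>

/* A toy struct page with a flags word. */
struct toy_page {
        unsigned long flags;
};

enum { PG_SPARE_BIT = 9 };

static inline void set_pmd_split(struct toy_page *pgtable)
{
        pgtable->flags |= 1UL << PG_SPARE_BIT;
}

static inline void clear_pmd_split(struct toy_page *pgtable)
{
        pgtable->flags &= ~(1UL << PG_SPARE_BIT);
}

static inline bool pmd_split(const struct toy_page *pgtable)
{
        return pgtable->flags & (1UL << PG_SPARE_BIT);
}

int main(void)
{
        struct toy_page pmd_page = { .flags = 0 };

        set_pmd_split(&pmd_page);               /* after splitting the PMD */
        if (pmd_split(&pmd_page)) {
                /* ...merge PTEs back into a huge PMD here... */
                clear_pmd_split(&pmd_page);
        }
        printf("split flag now: %d\n", pmd_split(&pmd_page));
        return 0;
}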
ESMTP id B61054DB3 for ; Tue, 15 Sep 2020 13:02:56 +0000 (UTC) X-FDA: 77265310752.07.ink61_3504f5e27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin07.hostedemail.com (Postfix) with ESMTP id B86A51803F919 for ; Tue, 15 Sep 2020 13:02:53 +0000 (UTC) X-Spam-Summary: 1,0,0,68271e1a95e0d0dc,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:965:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2194:2196:2198:2199:2200:2201:2393:2559:2562:2731:2898:3138:3139:3140:3141:3142:3353:3865:3867:3871:3872:4117:4321:4385:4390:4395:4605:5007:6120:6261:6653:6737:6738:7903:8957:9010:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:13161:13229:13894:14110:14181:14721:21080:21444:21451:21627:21990:30054:30070,0,RBL:209.85.215.196:@bytedance.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100;04ygton3ju391sah6tx5m36tihzwhycyc7kjb6xjrhw41gaqh7a3rsarjq76b3e.dwgwu1p1wine9b664w8jxtbgef561wucx5u84crdjj8bb9dp3zx6hxj1ddxnjxr.1-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: ink61_3504f5e27111 X-Filterd-Recvd-Size: 6873 Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by imf27.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:45 +0000 (UTC) Received: by mail-pg1-f196.google.com with SMTP id g29so1975163pgl.2 for ; Tue, 15 Sep 2020 06:02:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=iqeab9EuD6rx+ThTnTknvhPa5CCnZBGF8qbz9+1B3Rg=; b=cTyfEBaT8uZv91AMd6inH0d/3b50kH2ZmfriY6M0AQFvZ20H8u04GHyk0YwSRO8Xwu R4/nW8u1J0ZffnhIDgmPhhScvMxPmPeMFf38jU1TDGXnTHT4ouHLjR1aZYj4a7ReYyh5 FjXO7PRtC5txw6ftof7Jw8JcZY23SkpoxC0pPLU0Mxrt391VJBF7QfIoUD/ZmlLRhWEM nqav34uGU4Pj1ms1ixaKWy+RHtfMMHEBsau8aO59dXIqSUv+iHdW+YgjMcWv9PmN3lC2 UsGWdWzBhDtojQgvyqKBYGV2aDJOnYmB4mXs9sE0DBDzgWQC09PLCA809Zqxc0TNgENI S5gg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iqeab9EuD6rx+ThTnTknvhPa5CCnZBGF8qbz9+1B3Rg=; b=XvJF6t80s0I2t6dWxZCeU8gjWWsWqgyNWiWYb2ip1AvEl95HvyrjQuMZo0anZZ89A4 0v6iW8GH7qHHYSqALkP7Ph6USMBblRvuFcUE8TjF1KoHwKMGFznNb9W1Nlc5+1c7VQC1 881ZzJgBHd0fOryhVeH5ICm4mdFdzAN5x2Lf7QNzOtL6F8LIDuT7iDdMeNX6bLDYtusM KhXoUlcnxMGu4MCTleVmkB9hCsP2JySJIVvTJNj4tWe8o4KGoZDA3NKaBoUjumoJWbaQ Wan3/IjPUexo0/pIE7gu/iE7MA8G3uVkoFCPmP2zFn9kWbLYj72dUR+nqIaODsNskylj F5mg== X-Gm-Message-State: AOAM531dt5SFdZKohg/7w2zh1AqKrUWg0YTHdyw9NNzg5KJrXhvY1BrJ /x3qXHrUtNRr6pqpLbN5Pkc6nQ== X-Google-Smtp-Source: ABdhPJxh7eBoOFLH3hJ5HKuuUOo/K5zM4KapOdf8UN2kqGT13hJ4/HNKAuLAmqUUqzdl5r9Y4jcljw== X-Received: by 2002:aa7:8e54:0:b029:142:2501:34d2 with SMTP id d20-20020aa78e540000b0290142250134d2mr1722923pfr.43.1600174964466; Tue, 15 Sep 2020 06:02:44 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.35 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:44 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, 
hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 16/24] mm/hugetlb: Support freeing vmemmap pages of gigantic page Date: Tue, 15 Sep 2020 20:59:39 +0800 Message-Id: <20200915125947.26204-17-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: B86A51803F919 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The gigantic page is allocated by bootmem, if we want to free the unused vmemmap pages. We also should allocate the page table. So we also allocate page tables from bootmem. Signed-off-by: Muchun Song --- include/linux/hugetlb.h | 3 +++ mm/hugetlb.c | 57 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 60 insertions(+) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 2561af2ad901..e3aa192f1c39 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -506,6 +506,9 @@ struct hstate { struct huge_bootmem_page { struct list_head list; struct hstate *hstate; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + pte_t *vmemmap_pgtable; +#endif }; struct page *alloc_huge_page(struct vm_area_struct *vma, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c42c27a12df2..7072b849af3d 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1410,6 +1410,48 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) pte_free_kernel(&init_mm, pte_p); } +static unsigned long __init gather_vmemmap_pgtable_prealloc(void) +{ + struct huge_bootmem_page *m, *tmp; + unsigned long nr_free = 0; + + list_for_each_entry_safe(m, tmp, &huge_boot_pages, list) { + struct hstate *h = m->hstate; + unsigned int pgtable_size = nr_pgtable(h) << PAGE_SHIFT; + + if (!pgtable_size) + continue; + + m->vmemmap_pgtable = memblock_alloc_try_nid(pgtable_size, + PAGE_SIZE, 0, MEMBLOCK_ALLOC_ACCESSIBLE, + NUMA_NO_NODE); + if (!m->vmemmap_pgtable) { + nr_free++; + list_del(&m->list); + memblock_free_early(__pa(m), huge_page_size(h)); + } + } + + return nr_free; +} + +static void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m, + struct page *page) +{ + int i; + struct hstate *h = m->hstate; + unsigned long pgtable = (unsigned long)m->vmemmap_pgtable; + unsigned int nr = nr_pgtable(h); + + if (!nr) + return; + + vmemmap_pgtable_init(page); + + for (i = 0; i < nr; i++, pgtable += PAGE_SIZE) + vmemmap_pgtable_deposit(page, (pte_t *)pgtable); +} + static void __init hugetlb_vmemmap_init(struct hstate *h) { unsigned int order = huge_page_order(h); @@ -1819,6 +1861,16 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page) { } +static inline unsigned long gather_vmemmap_pgtable_prealloc(void) +{ + return 0; +} + +static inline void gather_vmemmap_pgtable_init(struct huge_bootmem_page *m, + struct page *page) +{ +} + static inline void free_huge_page_vmemmap(struct 
hstate *h, struct page *head) { } @@ -3080,6 +3132,7 @@ static void __init gather_bootmem_prealloc(void) WARN_ON(page_count(page) != 1); prep_compound_huge_page(page, h->order); WARN_ON(PageReserved(page)); + gather_vmemmap_pgtable_init(m, page); prep_new_huge_page(h, page, page_to_nid(page)); put_page(page); /* free it into the hugepage allocator */ @@ -3132,6 +3185,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h) break; cond_resched(); } + + if (hstate_is_gigantic(h)) + i -= gather_vmemmap_pgtable_prealloc(); + if (i < h->max_huge_pages) { char buf[32]; From patchwork Tue Sep 15 12:59:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776525 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2A70C746 for ; Tue, 15 Sep 2020 13:03:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CEAE020829 for ; Tue, 15 Sep 2020 13:03:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="lc0/0lOX" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CEAE020829 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4D438900045; Tue, 15 Sep 2020 09:03:06 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4AB35900041; Tue, 15 Sep 2020 09:03:06 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3C064900045; Tue, 15 Sep 2020 09:03:06 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0244.hostedemail.com [216.40.44.244]) by kanga.kvack.org (Postfix) with ESMTP id 228C8900041 for ; Tue, 15 Sep 2020 09:03:06 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id DB1B48249980 for ; Tue, 15 Sep 2020 13:03:05 +0000 (UTC) X-FDA: 77265311130.21.ear73_360407e27111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin21.hostedemail.com (Postfix) with ESMTP id 9BCEE180445EC for ; Tue, 15 Sep 2020 13:03:02 +0000 (UTC) X-Spam-Summary: 1,0,0,bca9e00d8e2c4eb4,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1539:1711:1714:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3350:3865:3866:3867:3871:4385:5007:6120:6261:6653:6737:6738:8603:10004:11026:11473:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:13069:13255:13311:13357:13894:14181:14384:14721:21080:21094:21323:21444:21451:21627:30054:30070,0,RBL:209.85.210.196:@bytedance.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100;04y8z4mdnup4y49qpepw1y1g1ekumophxnwkypj9mewre7adcmbdxz1hy56ho44.ysd73f1coc85cx3ju7eoum5ucmcdt4xhjc9yf9ybdjbn7drff59tg33hty64574.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not 
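Patch 16 above preallocates the vmemmap page tables for bootmem-allocated gigantic pages with one early allocation per huge page and later deposits that chunk one PAGE_SIZE piece at a time. The following userspace sketch mirrors that carve-and-deposit flow (aligned_alloc() stands in for memblock_alloc_try_nid(); the struct and counts are invented for illustration):

/* pgtable_prealloc_sketch.c - carve one early allocation into page tables. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096
#define NR_PGTABLE 3            /* page tables needed per gigantic page */

/* A tiny deposit list replacing vmemmap_pgtable_deposit(). */
struct deposit {
        void *pgtable[NR_PGTABLE];
        int nr;
};

int main(void)
{
        struct deposit d = { .nr = 0 };

        /*
         * Boot-time step: one allocation big enough for all the page
         * tables of this gigantic page.
         */
        char *chunk = aligned_alloc(PAGE_SIZE, NR_PGTABLE * PAGE_SIZE);

        if (!chunk) {
                /*
                 * The patch handles this by giving the bootmem huge page
                 * back instead of proceeding without page tables.
                 */
                perror("aligned_alloc");
                return 1;
        }

        /* Later step: walk the chunk page by page and deposit each piece. */
        for (int i = 0; i < NR_PGTABLE; i++)
                d.pgtable[d.nr++] = chunk + (size_t)i * PAGE_SIZE;

        printf("deposited %d preallocated page tables\n", d.nr);
        free(chunk);
        return 0;
}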
bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:27,LUA_SUMMARY:none X-HE-Tag: ear73_360407e27111 X-Filterd-Recvd-Size: 4392 Received: from mail-pf1-f196.google.com (mail-pf1-f196.google.com [209.85.210.196]) by imf08.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:02:56 +0000 (UTC) Received: by mail-pf1-f196.google.com with SMTP id n14so1891937pff.6 for ; Tue, 15 Sep 2020 06:02:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eNchCXlujHVbLKrDly+nb0OZuzhDeUp4zy0iJXX+zts=; b=lc0/0lOXMXW6CqInlrV1vRTsQtfbMiUBOwP4cPSQ/2UvctoSkPq85BGm4npwx+3wUc VqUklikLospgrLDQ4wlW3z7rPyOBrciyG1K0DugATR9FIHAYeCTZSiXDv5SOiU8HNN4e z2S6v7c0SeINkiwtNed/vqPdtCGDDopnBEbxbRUw/oWYqcgtnq9kg/b1BUUAABcHlhTb rDilY8ZLwf2qRc9VvcjtsCtmKcBoZ5vkzsG7FvnWeB2eqtXZGodOHGHj9GslKqG4bjbz DcbPgKTU/Gucf9acbpdx0Zpf0x9H+UEafA7kwQVLDGAd7Ilj/oVoLIVGpKR30LL2G/7m Bspw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eNchCXlujHVbLKrDly+nb0OZuzhDeUp4zy0iJXX+zts=; b=G5l2oa6tR9GCkGvD3dbjAl8hIfXfvFKGJMkG7dsqAb0ft8FSchap0Buk4SBexcjWRY kkI2kk4KBZN7Y6orqQZqzOEPXpenNI3DKq4lq0nv6X0o39lyBTlzytUTJQIDGM95H2TA 5+fyXYrA8kfskRGRm7RqfpALTYcHGUPmCNOse4+n7F62uJ64dQTk+/KVJKrr/TPScb57 9V+9Lo2kZECJt2RTWsBEiEhdhGz2HN0/ThQXKYsKzIkaruuCc31twHGw6PGy+ddOqfCK p1DcxY1B85HJDXeosXrgOVjlFs731hXGmpSbkw1H/AVG52LvNpDM6JrS27FF4ILSI1DI dQmQ== X-Gm-Message-State: AOAM530y2/7JtDeGyCPp41c8lCHWq52CDswAryMc7tgf7mUB0SiPVe7o b5zyVu7YE8arJocbft+xQHGqsw== X-Google-Smtp-Source: ABdhPJwmk58xJFs7bVQwlQ8TOo6QNgpQumZzCogF3iD2x+NuPWVnOo/NDEmSo5qdwcLGuncUFiCQYA== X-Received: by 2002:a62:7ed5:0:b029:13e:d13d:a086 with SMTP id z204-20020a627ed50000b029013ed13da086mr17607925pfc.29.1600174975269; Tue, 15 Sep 2020 06:02:55 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.44 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:02:54 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 17/24] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two Date: Tue, 15 Sep 2020 20:59:40 +0800 Message-Id: <20200915125947.26204-18-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 9BCEE180445EC X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam02 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We only can free the unused 
vmemmap to the buddy system when the size of struct page is a power of two. So add a BUILD_BUG_ON to check the illegal case. Signed-off-by: Muchun Song --- mm/hugetlb.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7072b849af3d..34706cec21ec 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3817,6 +3817,10 @@ static int __init hugetlb_init(void) { int i; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page)); +#endif + if (!hugepages_supported()) { if (hugetlb_max_hstate || default_hstate_max_huge_pages) pr_warn("HugeTLB: huge pages not supported, ignoring associated command-line parameters\n"); From patchwork Tue Sep 15 12:59:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 11776527 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4A8D3746 for ; Tue, 15 Sep 2020 13:03:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EFF6820829 for ; Tue, 15 Sep 2020 13:03:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=bytedance-com.20150623.gappssmtp.com header.i=@bytedance-com.20150623.gappssmtp.com header.b="cXXnrjbp" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EFF6820829 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=bytedance.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3C360900046; Tue, 15 Sep 2020 09:03:18 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 3731A900041; Tue, 15 Sep 2020 09:03:18 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 260B4900046; Tue, 15 Sep 2020 09:03:18 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0210.hostedemail.com [216.40.44.210]) by kanga.kvack.org (Postfix) with ESMTP id 0D6EC900041 for ; Tue, 15 Sep 2020 09:03:18 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id C1811181AEF1E for ; Tue, 15 Sep 2020 13:03:17 +0000 (UTC) X-FDA: 77265311634.02.mass08_54006c927111 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin02.hostedemail.com (Postfix) with ESMTP id 71E07100AF080 for ; Tue, 15 Sep 2020 13:03:13 +0000 (UTC) X-Spam-Summary: 1,0,0,4ed7d49e92906db7,d41d8cd98f00b204,songmuchun@bytedance.com,,RULES_HIT:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1542:1711:1730:1747:1777:1792:2196:2198:2199:2200:2393:2553:2559:2562:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3872:3874:4117:4385:5007:6120:6261:6653:6737:6738:7875:8957:10004:11026:11473:11658:11914:12043:12048:12291:12297:12438:12517:12519:12555:12683:12895:13894:14110:14181:14721:21080:21444:21451:21627:21990:30054:30090,0,RBL:209.85.216.67:@bytedance.com:.lbl8.mailshell.net-62.2.0.100 
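Patch 17 above enforces at build time that sizeof(struct page) is a power of two, since only then do the struct page entries of a huge page pack into whole, freeable vmemmap pages. The same check can be written portably with _Static_assert, as in this small sketch (struct toy_page is an invented stand-in that happens to be 64 bytes on LP64):

/* struct_size_check.c - compile-time power-of-two size check. */
#include <stdio.h>

struct toy_page {
        unsigned long flags;
        unsigned long private;
        void *mapping;
        unsigned long counters[5];
};

/* A power of two has exactly one bit set, so x & (x - 1) must be zero. */
#define IS_POWER_OF_2(x) ((x) != 0 && ((x) & ((x) - 1)) == 0)

_Static_assert(IS_POWER_OF_2(sizeof(struct toy_page)),
               "struct size must be a power of two so a huge page's "
               "vmemmap can be split at page boundaries");

int main(void)
{
        printf("sizeof(struct toy_page) = %zu\n", sizeof(struct toy_page));
        return 0;
}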
66.100.201.201;04y86abhk91h9694mgkt77ypr9fcxopcbobkh4ywq3kyyhpju51gnph6dk4ehay.gze6r3uqhiwxc5rzqsw9cuwodm5e5t4qmdb83hasfauayty3fmrfhftsf3nsfsp.y-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: mass08_54006c927111 X-Filterd-Recvd-Size: 6318 Received: from mail-pj1-f67.google.com (mail-pj1-f67.google.com [209.85.216.67]) by imf06.hostedemail.com (Postfix) with ESMTP for ; Tue, 15 Sep 2020 13:03:06 +0000 (UTC) Received: by mail-pj1-f67.google.com with SMTP id a9so1744739pjg.1 for ; Tue, 15 Sep 2020 06:03:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=X4BTtwaMtESewFNIOUy9zxHCLZirXUfq3fAtH3OWyA0=; b=cXXnrjbpWvONeQdtoAVhTZ1yEdWmSsnQjcHBSITYlGXfagXm3/IJgmyXBhneJ6FD7e RBdUN/kDepP9jC/FvRXoRJtKV3yp8pN7OHM1DHiC4YycfS5R5tW7UQm46HQw2bzjYntD ydYttZbIUk7Y7cp0efMhEeVtwpXC92brqjYiP6AsfeIp0gaHFUffOPUnYz1GAumIXGSt BqXqqefdlV3KyEVSMUu6lo2Y6hLkS6u9lNFi0Mok1IDjXxCUQse9upWTGu7yG/jDHyId juUbZp+JFrctB+XeE//2fQvtkKjeKle6+2zAKMxjOU6dksJTwyX1FTXYl3FgMtS6eHA+ T3UQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=X4BTtwaMtESewFNIOUy9zxHCLZirXUfq3fAtH3OWyA0=; b=ck95tLM/AKb6WywMFNa4I9fbL9Uc6idz+UktfhZ2QFx87B0pBXYgCqpeXwsmg7qR32 1HdnWeUP3iKzGl5OwN8VtPUWu1KdQiw+DyLU0zvgmbIg6QmSdFX9NjwcGwE64lI29PET dT/WIZIck7IoSoGg0qbIH3IrCo1rL4N0ItXjR5UaG+n/Lj5gLYT+ubJCUoRjfRIeYJB6 W1alPd9Eec8FrrWCVM2upw3HQq4yNHeEqt5PJOA1ZWQlzuBJ2w4ox8re3PdBT7MMVYu/ 2E7p0+4PKio218th7+NEPPlv02Z7KSp3zB6SLFf/4HIB5GmMqIYaEnfCgOXzZpgUOqCw bAVw== X-Gm-Message-State: AOAM533s/wSymTxNJBWFg3rR9jQOOqWwVzbhnXxtPxEu72hcBmUfXJWe WaNhx7XyxjILdQC/h/2e7sfw3g== X-Google-Smtp-Source: ABdhPJzutEEVq5G6/7+ccPxCWjUDCELLq9lbv2LK9MvrA+j214YRs9JOfhtSh5/VK8gk1Qc8NYMz+A== X-Received: by 2002:a17:90b:3c3:: with SMTP id go3mr4345809pjb.64.1600174986059; Tue, 15 Sep 2020 06:03:06 -0700 (PDT) Received: from localhost.bytedance.net ([103.136.220.66]) by smtp.gmail.com with ESMTPSA id w185sm14269855pfc.36.2020.09.15.06.02.55 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Tue, 15 Sep 2020 06:03:05 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song Subject: [RFC PATCH 18/24] mm/hugetlb: Clear PageHWPoison on the non-error memory page Date: Tue, 15 Sep 2020 20:59:41 +0800 Message-Id: <20200915125947.26204-19-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com> References: <20200915125947.26204-1-songmuchun@bytedance.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 71E07100AF080 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, 
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID:

Because we reuse the first tail page, setting PageHWPoison on one tail page may effectively set it on a whole series of tail pages that share the same vmemmap page. So we need to clear PageHWPoison on the non-error pages. We record the index of the real error page in the page_private() of head[4] (set_subpage_hwpoison()) and use it to clear PageHWPoison on the non-error pages later.

Signed-off-by: Muchun Song
---
mm/hugetlb.c | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 34706cec21ec..8666cedf9a7b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1847,6 +1847,21 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) { free_gigantic_page(page, huge_page_order(h)); } + +static inline bool subpage_hwpoison(struct page *head, struct page *page) +{ + return page_private(head + 4) == page - head; +} + +static inline void set_subpage_hwpoison(struct page *head, struct page *page) +{ + set_page_private(head + 4, page - head); +} + +static inline void clear_subpage_hwpoison(struct page *head) +{ + set_page_private(head + 4, 0); +} #else static inline void hugetlb_vmemmap_init(struct hstate *h) { @@ -1894,6 +1909,19 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) free_gigantic_page(page, huge_page_order(h)); spin_lock(&hugetlb_lock); } + +static inline bool subpage_hwpoison(struct page *head, struct page *page) +{ + return true; +} + +static inline void set_subpage_hwpoison(struct page *head, struct page *page) +{ +} + +static inline void clear_subpage_hwpoison(struct page *head) +{ +} #endif static void update_and_free_page(struct hstate *h, struct page *page) @@ -1918,6 +1946,9 @@ static void __free_hugepage(struct hstate *h, struct page *page) 1 << PG_referenced | 1 << PG_dirty | 1 << PG_active | 1 << PG_private | 1 << PG_writeback); + + if (PageHWPoison(page + i) && !subpage_hwpoison(page, page + i)) + ClearPageHWPoison(page + i); } VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page); VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page); @@ -2107,6 +2138,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) free_huge_page_vmemmap(h, page); /* Must be called before the initialization of @page->lru */ vmemmap_pgtable_free(h, page); + clear_subpage_hwpoison(page); INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); @@ -2477,6 +2509,7 @@ int dissolve_free_huge_page(struct page *page) SetPageHWPoison(page); ClearPageHWPoison(head); } + set_subpage_hwpoison(head, page); list_del(&head->lru); h->free_huge_pages--; h->free_huge_pages_node[nid]--;
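As a reading aid, here is a minimal sketch (not part of the patch) of how the helpers above are meant to cooperate when a free huge page containing a poisoned subpage is dissolved and later freed; the helper names come from this patch, while the surrounding control flow is abbreviated and the two wrapper functions are hypothetical:

/* Sketch only: condensed from dissolve_free_huge_page() and __free_hugepage(). */
static void sketch_dissolve(struct hstate *h, struct page *head, struct page *bad)
{
	/* Remember which subpage really is poisoned before the page is torn down. */
	set_subpage_hwpoison(head, bad);
	/* ... the huge page is then handed to update_and_free_page() ... */
}

static void sketch_free(struct hstate *h, struct page *head)
{
	int i;

	for (i = 0; i < pages_per_huge_page(h); i++) {
		/*
		 * Any other subpage that appears poisoned only inherited the
		 * bit from the reused tail page, so clear it here.
		 */
		if (PageHWPoison(head + i) && !subpage_hwpoison(head, head + i))
			ClearPageHWPoison(head + i);
	}
}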
From patchwork Tue Sep 15 12:59:42 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776529
From: Muchun Song
Subject: [RFC PATCH 19/24] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Tue, 15 Sep 2020 20:59:42 +0800
Message-Id: <20200915125947.26204-20-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

We should flush work when dissolving a hugetlb page to make sure that the hugetlb page is freed to the buddy.
Signed-off-by: Muchun Song
---
mm/hugetlb.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 8666cedf9a7b..56c0bf2370ed 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1848,6 +1848,11 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) free_gigantic_page(page, huge_page_order(h)); } +static inline void flush_free_huge_page_work(void) +{ + flush_work(&hpage_update_work); +} + static inline bool subpage_hwpoison(struct page *head, struct page *page) { return page_private(head + 4) == page - head; @@ -1910,6 +1915,10 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page) spin_lock(&hugetlb_lock); } +static inline void flush_free_huge_page_work(void) +{ +} + static inline bool subpage_hwpoison(struct page *head, struct page *page) { return true; @@ -2484,6 +2493,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, int dissolve_free_huge_page(struct page *page) { int rc = -EBUSY; + bool need_flush = false; /* Not to disrupt normal path by vainly holding hugetlb_lock */ if (!PageHuge(page)) @@ -2515,10 +2525,19 @@ int dissolve_free_huge_page(struct page *page) h->free_huge_pages_node[nid]--; h->max_huge_pages--; update_and_free_page(h, head); + need_flush = true; rc = 0; } out: spin_unlock(&hugetlb_lock); + + /* + * We should flush work before return to make sure that + * the hugetlb page is freed to the buddy. + */ + if (need_flush) + flush_free_huge_page_work(); + return rc; }
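For readers unfamiliar with the pattern, a minimal sketch of what the flush buys the caller; hpage_update_work is the work item introduced earlier in this series, and the helper below is purely illustrative:

#include <linux/workqueue.h>

/* Introduced by an earlier patch in this series. */
extern struct work_struct hpage_update_work;

static void wait_for_deferred_hugepage_free(void)
{
	/*
	 * flush_work() returns only after the last queued execution of the
	 * work item has finished, so any huge page whose freeing was
	 * deferred by update_and_free_page() has reached the buddy
	 * allocator by the time this returns.
	 */
	flush_work(&hpage_update_work);
}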
From patchwork Tue Sep 15 12:59:43 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776533
From: Muchun Song
Subject: [RFC PATCH 20/24] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Tue, 15 Sep 2020 20:59:43 +0800
Message-Id: <20200915125947.26204-21-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>
Add a kernel parameter hugetlb_free_vmemmap to disable the feature of freeing unused vmemmap pages associated with each hugetlb page on boot.

Signed-off-by: Muchun Song
---
.../admin-guide/kernel-parameters.txt | 9 ++++++++ Documentation/admin-guide/mm/hugetlbpage.rst | 3 +++ mm/hugetlb.c | 23 +++++++++++++++++++ 3 files changed, 35 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 5debfe238027..69d18ef6f66b 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1551,6 +1551,15 @@ Documentation/admin-guide/mm/hugetlbpage.rst. Format: size[KMG] + hugetlb_free_vmemmap= + [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, + this disables freeing unused vmemmap pages associated + with each HugeTLB page. + Format: { on (default) | off } + + on: enable the feature + off: disable the feature + hung_task_panic= [KNL] Should the hung task detector generate panics. Format: 0 | 1

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst index f7b1c7462991..7d6129ee97dd 100644 --- a/Documentation/admin-guide/mm/hugetlbpage.rst +++ b/Documentation/admin-guide/mm/hugetlbpage.rst @@ -145,6 +145,9 @@ default_hugepagesz will all result in 256 2M huge pages being allocated. Valid default huge page size is architecture dependent. +hugetlb_free_vmemmap + When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this disables freeing + unused vmemmap pages associated with each HugeTLB page. When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages`` indicates the current number of pre-allocated huge pages of the default size.

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 56c0bf2370ed..28c154679838 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1310,6 +1310,8 @@ static void __free_hugepage(struct hstate *h, struct page *page); (__boundary - 1 < (end) - 1) ?
__boundary : (end); \ }) +static bool hugetlb_free_vmemmap_disabled __initdata; + static inline unsigned int nr_free_vmemmap(struct hstate *h) { return h->nr_free_vmemmap_pages; @@ -1457,6 +1459,13 @@ static void __init hugetlb_vmemmap_init(struct hstate *h) unsigned int order = huge_page_order(h); unsigned int vmemmap_pages; + if (hugetlb_free_vmemmap_disabled) { + h->nr_free_vmemmap_pages = 0; + pr_info("HugeTLB: disable free vmemmap pages for %s\n", + h->name); + return; + } + vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT; /* * The head page and the first tail page not free to buddy system, @@ -1867,6 +1876,20 @@ static inline void clear_subpage_hwpoison(struct page *head) { set_page_private(head + 4, 0); } + +static int __init early_hugetlb_free_vmemmap_param(char *buf) +{ + if (!buf) + return -EINVAL; + + if (!strcmp(buf, "off")) + hugetlb_free_vmemmap_disabled = true; + else if (strcmp(buf, "on")) + return -EINVAL; + + return 0; +} +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param); #else static inline void hugetlb_vmemmap_init(struct hstate *h) {
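A usage note on the parameter added above (this only restates the documentation hunks): booting with hugetlb_free_vmemmap=off on the kernel command line keeps the full vmemmap for every HugeTLB page, in which case hugetlb_vmemmap_init() sets nr_free_vmemmap_pages to 0 and logs that the feature is disabled; passing hugetlb_free_vmemmap=on, or omitting the parameter, leaves the default behaviour of freeing the unused vmemmap pages.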
From patchwork Tue Sep 15 12:59:44 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776537
From: Muchun Song
Subject: [RFC PATCH 21/24] mm/hugetlb: Merge pte to huge pmd only for gigantic page
Date: Tue, 15 Sep 2020 20:59:44 +0800
Message-Id: <20200915125947.26204-22-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>
Merge pte to huge pmd if it has ever been split. For now, only gigantic pages whose vmemmap size is an integer multiple of PMD_SIZE are supported; this is the simplest case to handle.

Signed-off-by: Muchun Song
---
include/linux/hugetlb.h | 7 +++ mm/hugetlb.c | 104 +++++++++++++++++++++++++++++++++++++++- 2 files changed, 109 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index e3aa192f1c39..c56df0da7ae5 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -611,6 +611,13 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd) } #endif +#ifndef vmemmap_pmd_mkhuge +static inline pmd_t vmemmap_pmd_mkhuge(struct page *page) +{ + return pmd_mkhuge(mk_pmd(page, PAGE_KERNEL)); +} +#endif + #ifndef VMEMMAP_HPAGE_SHIFT #define VMEMMAP_HPAGE_SHIFT PMD_SHIFT #endif

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 28c154679838..3ca36e259b4e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1759,6 +1759,62 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep, } } +static void __replace_huge_page_pte_vmemmap(pte_t *ptep, unsigned long start, + unsigned int nr, struct page *huge, + struct list_head *free_pages) +{ + unsigned long addr; + unsigned long end = start + (nr << PAGE_SHIFT); + + for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) { + struct page *page; + pte_t old = *ptep; + pte_t entry; + + prepare_vmemmap_page(huge); + + entry = mk_pte(huge++, PAGE_KERNEL); + VM_WARN_ON(!pte_present(old)); + page = pte_page(old); + list_add(&page->lru, free_pages); + + set_pte_at(&init_mm, addr, ptep, entry); + } +} + +static void replace_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start, + struct page *huge, + struct list_head *free_pages) +{ + unsigned long end = start + VMEMMAP_HPAGE_SIZE; + + flush_cache_vunmap(start, end); + __replace_huge_page_pte_vmemmap(pte_offset_kernel(pmd, start), start, + VMEMMAP_HPAGE_NR, huge, free_pages); + flush_tlb_kernel_range(start, end); +} + +static pte_t *merge_vmemmap_pte(pmd_t *pmdp, unsigned long addr) +{ + pte_t *pte; + struct page *page; + + pte = pte_offset_kernel(pmdp, addr); + page = pte_page(*pte); + set_pmd(pmdp, vmemmap_pmd_mkhuge(page)); + + return pte; +} + +static void merge_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start, + struct page *huge, + struct list_head *free_pages) +{ + replace_huge_page_pmd_vmemmap(pmd, start, huge, free_pages); + pte_free_kernel(&init_mm, merge_vmemmap_pte(pmd, start)); + flush_tlb_kernel_range(start, start + VMEMMAP_HPAGE_SIZE); +} + static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) { int i; @@ -1772,6 +1828,15 @@ static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list) } } +static inline void
dissolve_compound_page(struct page *page, unsigned int order) +{ + int i; + unsigned int nr_pages = 1 << order; + + for (i = 1; i < nr_pages; i++) + set_page_refcounted(page + i); +} + static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) { pmd_t *pmd; @@ -1791,10 +1856,45 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head) __remap_huge_page_pte_vmemmap); if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) { /* - * Todo: - * Merge pte to huge pmd if it has ever been split. + * Merge pte to huge pmd if it has ever been split. For now, + * only gigantic pages whose vmemmap size is an integer + * multiple of PMD_SIZE are supported; this is the simplest + * case to handle. */ clear_pmd_split(pmd); + + if (IS_ALIGNED(nr_vmemmap(h), VMEMMAP_HPAGE_NR)) { + unsigned long addr = (unsigned long)head; + unsigned long end = addr + nr_vmemmap_size(h); + + spin_unlock(ptl); + + for (; addr < end; addr += VMEMMAP_HPAGE_SIZE) { + void *to; + struct page *page; + + page = alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL, + VMEMMAP_HPAGE_ORDER); + if (!page) + goto out; + + to = page_to_virt(page); + memcpy(to, (void *)addr, VMEMMAP_HPAGE_SIZE); + + /* + * Make sure that any data written to @to is + * made visible to the physical page. + */ + flush_kernel_vmap_range(to, VMEMMAP_HPAGE_SIZE); + + merge_huge_page_pmd_vmemmap(pmd++, addr, page, + &remap_pages); + } + +out: + free_vmemmap_page_list(&remap_pages); + return; + } } spin_unlock(ptl); }
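For orientation, the merge path just added boils down to the following steps; this is a descriptive summary of the hunk above, not additional code in the patch:

/*
 * For each VMEMMAP_HPAGE_SIZE-sized chunk of the gigantic page's vmemmap:
 *
 *  1. alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL, VMEMMAP_HPAGE_ORDER)
 *     allocates a physically contiguous replacement block.
 *  2. memcpy() plus flush_kernel_vmap_range() copy the live struct pages
 *     into the replacement and make the data visible.
 *  3. merge_huge_page_pmd_vmemmap() first re-points the PTEs at the
 *     replacement, then installs a huge PMD over it via vmemmap_pmd_mkhuge(),
 *     frees the now unused PTE page table and flushes the TLB.
 *  4. The previously mapped base pages are gathered on remap_pages and
 *     released with free_vmemmap_page_list().
 */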
From patchwork Tue Sep 15 12:59:45 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776535
From: Muchun Song
Subject: [RFC PATCH 22/24] mm/hugetlb: Implement vmemmap_pmd_mkhuge macro
Date: Tue, 15 Sep 2020 20:59:45 +0800
Message-Id: <20200915125947.26204-23-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>
In vmemmap_populate_hugepages(), we use PAGE_KERNEL_LARGE for huge page mappings. So we can implement the vmemmap_pmd_mkhuge macro to do the same.

Signed-off-by: Muchun Song
---
arch/x86/include/asm/hugetlb.h | 8 ++++++++ 1 file changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h index 7c3eb60c2198..9f9e19dd0578 100644 --- a/arch/x86/include/asm/hugetlb.h +++ b/arch/x86/include/asm/hugetlb.h @@ -15,6 +15,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd) { return pmd_large(*pmd); } + +#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge +static inline pmd_t vmemmap_pmd_mkhuge(struct page *page) +{ + pte_t entry = pfn_pte(page_to_pfn(page), PAGE_KERNEL_LARGE); + + return __pmd(pte_val(entry)); +} #endif #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
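A short note on how the override takes effect (a reading aid, not part of the patch): the generic fallback added in the previous patch is guarded by #ifndef vmemmap_pmd_mkhuge and builds the entry with pmd_mkhuge(mk_pmd(page, PAGE_KERNEL)); because the x86 header defines the macro name to itself before providing its own inline function, that fallback compiles out and the PMD is built from PAGE_KERNEL_LARGE instead, matching what vmemmap_populate_hugepages() already installs on x86.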
From patchwork Tue Sep 15 12:59:46 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776541
From: Muchun Song
Subject: [RFC PATCH 23/24] mm/hugetlb: Gather discrete indexes of tail page
Date: Tue, 15 Sep 2020 20:59:46 +0800
Message-Id: <20200915125947.26204-24-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>
For a hugetlb page, there is more metadata to save in the struct page. But the head struct page cannot meet our needs, so we have to abuse other tail struct pages to store the metadata. In order to avoid conflicts caused by subsequent use of more tail struct pages, gather these discrete indexes of the tail struct pages in one place. This makes it easier to add a new tail page index later.

Signed-off-by: Muchun Song
---
include/linux/hugetlb.h | 13 +++++++++++++ include/linux/hugetlb_cgroup.h | 15 +++++++++------ mm/hugetlb.c | 18 +++++++++--------- 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index c56df0da7ae5..358550a53555 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t; #include #include +enum { + SUBPAGE_INDEX_ACTIVE = 1, /* reuse page flags of PG_private */ + SUBPAGE_INDEX_TEMPORARY, /* reuse page->mapping */ +#ifdef CONFIG_CGROUP_HUGETLB + SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */ + SUBPAGE_INDEX_CGROUP_RSVD, /* reuse page->private */ +#endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + SUBPAGE_INDEX_HWPOISON, /* reuse page->private */ +#endif + NR_USED_SUBPAGE, +}; + struct hugepage_subpool { spinlock_t lock; long count;

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h index 2ad6e92f124a..3d3c1c49efe4 100644 --- a/include/linux/hugetlb_cgroup.h +++ b/include/linux/hugetlb_cgroup.h @@ -24,8 +24,9 @@ struct file_region; /* * Minimum page order trackable by hugetlb cgroup. * At least 4 pages are necessary for all the tracking information. - * The second tail page (hpage[2]) is the fault usage cgroup. - * The third tail page (hpage[3]) is the reservation usage cgroup. + * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault + * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD]) + * is the reservation usage cgroup.
*/ #define HUGETLB_CGROUP_MIN_ORDER 2 @@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd) if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return NULL; if (rsvd) - return (struct hugetlb_cgroup *)page[3].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD); else - return (struct hugetlb_cgroup *)page[2].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP); } static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page) @@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page, if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return -1; if (rsvd) - page[3].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD, + (unsigned long)h_cg); else - page[2].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP, + (unsigned long)h_cg); return 0; }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 3ca36e259b4e..e66c3f10c583 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1964,17 +1964,17 @@ static inline void flush_free_huge_page_work(void) static inline bool subpage_hwpoison(struct page *head, struct page *page) { - return page_private(head + 4) == page - head; + return page_private(head + SUBPAGE_INDEX_HWPOISON) == page - head; } static inline void set_subpage_hwpoison(struct page *head, struct page *page) { - set_page_private(head + 4, page - head); + set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head); } static inline void clear_subpage_hwpoison(struct page *head) { - set_page_private(head + 4, 0); + set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0); } static int __init early_hugetlb_free_vmemmap_param(char *buf) @@ -2114,20 +2114,20 @@ struct hstate *size_to_hstate(unsigned long size) bool page_huge_active(struct page *page) { VM_BUG_ON_PAGE(!PageHuge(page), page); - return PageHead(page) && PagePrivate(&page[1]); + return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } /* never called for tail page */ static void set_page_huge_active(struct page *page) { VM_BUG_ON_PAGE(!PageHeadHuge(page), page); - SetPagePrivate(&page[1]); + SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } static void clear_page_huge_active(struct page *page) { VM_BUG_ON_PAGE(!PageHeadHuge(page), page); - ClearPagePrivate(&page[1]); + ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]); } /* @@ -2139,17 +2139,17 @@ static inline bool PageHugeTemporary(struct page *page) if (!PageHuge(page)) return false; - return (unsigned long)page[2].mapping == -1U; + return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U; } static inline void SetPageHugeTemporary(struct page *page) { - page[2].mapping = (void *)-1U; + page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U; } static inline void ClearPageHugeTemporary(struct page *page) { - page[2].mapping = NULL; + page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL; } static void __free_huge_page(struct page *page)
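To show what the gathered indexes buy, here is a hypothetical follow-up that claims one more tail page field; the existing entries are an abridged copy of the enum introduced above, SUBPAGE_INDEX_EXAMPLE is invented for illustration only, and the BUILD_BUG_ON is the compile-time check added by the next patch in the series:

enum {
	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
#endif
	SUBPAGE_INDEX_EXAMPLE,		/* hypothetical new per-hugepage field */
	NR_USED_SUBPAGE,
};

static int __init sketch_hugetlb_init(void)
{
	/*
	 * If the new index pushes NR_USED_SUBPAGE past the struct pages kept
	 * by RESERVE_VMEMMAP_SIZE, the build fails instead of letting code
	 * touch a tail struct page whose vmemmap may already have been freed.
	 */
	BUILD_BUG_ON(NR_USED_SUBPAGE >=
		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
	return 0;
}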
From patchwork Tue Sep 15 12:59:47 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11776545
From: Muchun Song
Subject: [RFC PATCH 24/24] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
Date: Tue, 15 Sep 2020 20:59:47 +0800
Message-Id: <20200915125947.26204-25-songmuchun@bytedance.com>
In-Reply-To: <20200915125947.26204-1-songmuchun@bytedance.com>
References: <20200915125947.26204-1-songmuchun@bytedance.com>

Only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` struct pages can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a BUILD_BUG_ON to catch invalid usage of the tail struct pages.

Signed-off-by: Muchun Song
---
mm/hugetlb.c | 2 ++ 1 file changed, 2 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index e66c3f10c583..63995ba74b6b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3994,6 +3994,8 @@ static int __init hugetlb_init(void) #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page)); + BUILD_BUG_ON(NR_USED_SUBPAGE >= + RESERVE_VMEMMAP_SIZE / sizeof(struct page)); #endif if (!hugepages_supported()) {