From patchwork Fri Nov 13 10:59:35 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11902979
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Date: Fri, 13 Nov 2020 18:59:35 +0800
Message-Id: <20201113105952.11638-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201113105952.11638-1-songmuchun@bytedance.com>
References: <20201113105952.11638-1-songmuchun@bytedance.com>

If the size of a HugeTLB page is 2MB, 512 struct page structures (8 pages)
are associated with it. As far as I know, we only use the first 4 struct
page structures; the use of the first 4 comes from HUGETLB_CGROUP_MIN_ORDER.
For tail pages, the value of compound_head is the same, so we can reuse the
first page of the tail page structs: we map the virtual addresses of the
remaining 6 pages of tail page structs to the first tail page struct and
then free these 6 pages. Therefore, we need to reserve at least 2 pages as
vmemmap areas.
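To make the arithmetic explicit (assuming 4KB base pages and
sizeof(struct page) == 64, as on x86-64):

    2MB HugeTLB                 = 512 base pages
    512 * sizeof(struct page)   = 512 * 64 = 32768 bytes = 8 vmemmap pages
    reserved (head + 1st tail)  = 2 vmemmap pages
    freeable                    = 8 - 2 = 6 vmemmap pages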
So we introduce a new nr_free_vmemmap_pages field in the hstate to indicate
how many vmemmap pages associated with a HugeTLB page can be freed to the
buddy system.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |   3 ++
 mm/Makefile             |   1 +
 mm/hugetlb.c            |   3 ++
 mm/hugetlb_vmemmap.c    | 108 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  20 +++++++++
 5 files changed, 135 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..eed3dd3bd626 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/Makefile b/mm/Makefile
index 752111587c99..2a734576bbc0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA)	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP)	+= sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81a41aa080a5..f88032c24667 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -3285,6 +3286,8 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	hugetlb_vmemmap_init(h);
+
 	parsed_hstate = h;
 }
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..a6c9948302e2
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * Nowadays we track the status of physical page frames using struct page
+ * structures arranged in one or more arrays, and there is a one-to-one
+ * mapping between each physical page frame and its corresponding struct
+ * page structure.
+ *
+ * HugeTLB support is built on top of the multiple page size support
+ * provided by most modern architectures. For example, x86 CPUs normally
+ * support 4K and 2M (1G if architecturally supported) page sizes. Every
+ * HugeTLB has more than one struct page structure: a 2M HugeTLB has 512
+ * struct page structures and a 1G HugeTLB has 4096. However, the core of
+ * HugeTLB only uses the first 4 struct page structures (the use of the
+ * first 4 comes from HUGETLB_CGROUP_MIN_ORDER) to store metadata
+ * associated with each HugeTLB. The remaining struct page structures are
+ * usually only read for their compound_head field, which holds the same
+ * value for all of them, so if we can free some of that struct page
+ * memory to the buddy system we can save a lot of memory.
+ *
+ * When the system boots up, every 2M HugeTLB has 512 struct page
+ * structures, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
+ *
+ * HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     |     4     | -------------> |     4     |
+ * |    2M     |                     |     5     | -------------> |     5     |
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ *
+ * When a HugeTLB is preallocated, we can change the mapping from above to
+ * below.
+ *
+ * HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     |     2     | -------------> +-----------+
+ * |           |                     |     3     | -----------------^ ^ ^ ^ ^
+ * |           |                     |     4     | -------------------+ | | |
+ * |    2M     |                     |     5     | ---------------------+ | |
+ * |           |                     |     6     | -----------------------+ |
+ * |           |                     |     7     | -------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * For tail pages, the value of compound_head is the same, so we can reuse
+ * the first page of the tail page structures. We map the virtual addresses
+ * of the remaining 6 pages of tail page structures to the first tail page
+ * struct and then free these 6 page frames. Therefore, we need to reserve
+ * at least 2 pages as vmemmap areas.
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages
+ * for vmemmap pages and restore the previous mapping relationship.
+ */
+#define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are 512 struct page structures (8 pages) associated with each 2MB
+ * hugetlb page. For tail pages, the value of compound_head is the same,
+ * so we can reuse the first page of the tail page structures. We map the
+ * virtual addresses of the remaining 6 pages of tail page structures to
+ * the first tail page struct and then free these 6 pages. Therefore, we
+ * need to reserve at least 2 pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR	2U
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int order = huge_page_order(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy system; the other pages are remapped to the first tail page.
+	 * So (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
+	 * not expected to happen unless the system is corrupted, so the
+	 * check below is only a safety net.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+	else
+		h->nr_free_vmemmap_pages = 0;
+
+	pr_debug("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		 h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..40c0c7dfb60d
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void __init hugetlb_vmemmap_init(struct hstate *h);
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
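
As a sanity check on the numbers above, the computation done by
hugetlb_vmemmap_init() can be reproduced with a small userspace sketch
(illustrative only, not part of this patch; it assumes 4KB base pages and
sizeof(struct page) == 64, as on x86-64):

#include <stdio.h>

#define PAGE_SHIFT		12
#define STRUCT_PAGE_SIZE	64U	/* assumed sizeof(struct page) */
#define RESERVE_VMEMMAP_NR	2U	/* head page + first tail page */

/* Mirrors the arithmetic in hugetlb_vmemmap_init(). */
static unsigned int nr_free_vmemmap(unsigned int order)
{
	unsigned int vmemmap_pages =
		((1U << order) * STRUCT_PAGE_SIZE) >> PAGE_SHIFT;

	return vmemmap_pages > RESERVE_VMEMMAP_NR ?
	       vmemmap_pages - RESERVE_VMEMMAP_NR : 0;
}

int main(void)
{
	/* hugepages-2048kB: order 9 -> 8 vmemmap pages, 6 freeable */
	printf("2MB: %u\n", nr_free_vmemmap(9));
	/* hugepages-1048576kB: order 18 -> 4096 vmemmap pages, 4094 freeable */
	printf("1GB: %u\n", nr_free_vmemmap(18));
	return 0;
}

With those assumptions it prints 6 and 4094, which is what
nr_free_vmemmap_pages ends up holding for the 2MB and 1GB hstates
respectively.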