From patchwork Sun Sep 26 03:13:35 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12517885
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, akpm@linux-foundation.org, osalvador@suse.de,
    mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com,
    chenhuang5@huawei.com, bodeddub@amazon.com, corbet@lwn.net,
    willy@infradead.org, 21cnbao@gmail.com
Cc: duanxiongchun@bytedance.com, fam.zheng@bytedance.com, smuchun@gmail.com,
    zhengqi.arch@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 1/5] mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page
Date: Sun, 26 Sep 2021 11:13:35 +0800
Message-Id: <20210926031339.40043-2-songmuchun@bytedance.com>
In-Reply-To: <20210926031339.40043-1-songmuchun@bytedance.com>
References: <20210926031339.40043-1-songmuchun@bytedance.com>

Currently, we only free 6 vmemmap pages associated with a 2MB HugeTLB
page. However, we can remap all tail vmemmap pages to the page frame
mapped by the head vmemmap page, and then free 7 vmemmap pages per 2MB
HugeTLB page. It is a worthwhile gain (e.g. we can save an extra 2GB of
memory when there is 1TB of HugeTLB pages in the system, compared with
the current implementation).

But the head vmemmap page is not freed to the buddy allocator and all
tail vmemmap pages are mapped to the head vmemmap page frame. So we can
see more than one struct page with PG_head set (e.g. 8 per 2 MB HugeTLB
page) associated with each HugeTLB page. We should adjust compound_head()
so that it returns the real head struct page when the parameter is a tail
struct page that has the PG_head flag set.

Signed-off-by: Muchun Song
Reviewed-by: Barry Song
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 include/linux/page-flags.h                      | 78 +++++++++++++++++++++++--
 mm/hugetlb_vmemmap.c                            | 60 ++++++++++---------
 mm/sparse-vmemmap.c                             | 21 +++++++
 4 files changed, 129 insertions(+), 32 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 91ba391f9b32..5aaf2f271980 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1617,7 +1617,7 @@
 			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 			enabled.
 			Allows heavy hugetlb users to free up some more
-			memory (6 * PAGE_SIZE for each 2MB hugetlb page).
+			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
 			Format: { on | off (default) }

 			on: enable the feature

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 70bf0ec29ee3..b49808e748ce 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -184,13 +184,69 @@ enum pageflags {

 #ifndef __GENERATING_BOUNDS_H

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+/*
+ * If the feature of freeing some vmemmap pages associated with each HugeTLB
+ * page is enabled, the head vmemmap page frame is reused and all of the tail
+ * vmemmap addresses map to the head vmemmap page frame (further details can
+ * be found in the figure at the head of mm/hugetlb_vmemmap.c). In other
+ * words, there is more than one page struct with PG_head associated with
+ * each HugeTLB page. We __know__ that there is only one head page struct;
+ * the tail page structs with PG_head are fake head page structs. We need an
+ * approach to distinguish between those two different types of page structs
+ * so that compound_head() can return the real head page struct when the
+ * parameter is a tail page struct with PG_head.
+ *
+ * page_fixed_fake_head() returns the real head page struct if the @page is
+ * a fake page head; otherwise it returns @page, which can be either a true
+ * page head or a tail.
+ */
+static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
+{
+	if (!hugetlb_free_vmemmap_enabled)
+		return page;
+
+	/*
+	 * Only addresses aligned with PAGE_SIZE of struct page may be fake
+	 * head struct page. The alignment check aims to avoid accessing the
+	 * fields (e.g. compound_head) of @page[1], which can avoid touching
+	 * a (possibly) cold cacheline in some cases.
+	 */
+	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
+	    test_bit(PG_head, &page->flags)) {
+		/*
+		 * We can safely access the field of @page[1] with PG_head
+		 * because @page is a compound page composed of at least
+		 * two contiguous pages.
+		 */
+		unsigned long head = READ_ONCE(page[1].compound_head);
+
+		if (likely(head & 1))
+			return (const struct page *)(head - 1);
+	}
+	return page;
+}
+#else
+static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
+{
+	return page;
+}
+#endif
+
+static __always_inline int page_is_fake_head(struct page *page)
+{
+	return page_fixed_fake_head(page) != page;
+}
+
 static inline unsigned long _compound_head(const struct page *page)
 {
 	unsigned long head = READ_ONCE(page->compound_head);

 	if (unlikely(head & 1))
 		return head - 1;
-	return (unsigned long)page;
+	return (unsigned long)page_fixed_fake_head(page);
 }

 #define compound_head(page)	((typeof(page))_compound_head(page))
@@ -225,12 +281,13 @@ static inline unsigned long _compound_head(const struct page *page)

 static __always_inline int PageTail(struct page *page)
 {
-	return READ_ONCE(page->compound_head) & 1;
+	return READ_ONCE(page->compound_head) & 1 || page_is_fake_head(page);
 }

 static __always_inline int PageCompound(struct page *page)
 {
-	return test_bit(PG_head, &page->flags) || PageTail(page);
+	return test_bit(PG_head, &page->flags) ||
+	       READ_ONCE(page->compound_head) & 1;
 }

 #define PAGE_POISON_PATTERN	-1l
@@ -675,7 +732,20 @@ static inline bool test_set_page_writeback(struct page *page)
 	return set_page_writeback(page);
 }

-__PAGEFLAG(Head, head, PF_ANY) CLEARPAGEFLAG(Head, head, PF_ANY)
+static __always_inline bool folio_test_head(struct folio *folio)
+{
+	return test_bit(PG_head, folio_flags(folio, FOLIO_PF_ANY));
+}
+
+static __always_inline int PageHead(struct page *page)
+{
+	PF_POISONED_CHECK(page);
+	return test_bit(PG_head, &page->flags) && !page_is_fake_head(page);
+}
+
+__SETPAGEFLAG(Head, head, PF_ANY)
+__CLEARPAGEFLAG(Head, head, PF_ANY)
+CLEARPAGEFLAG(Head, head, PF_ANY)

 /* Whether there are one or multiple pages in a folio */
 static inline bool folio_test_single(struct folio *folio)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c540c21e26f5..f4a8fca691ee 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -124,9 +124,9 @@
  * page of page structs (page 0) associated with the HugeTLB page contains the 4
  * page structs necessary to describe the HugeTLB. The only use of the remaining
  * pages of page structs (page 1 to page 7) is to point to page->compound_head.
- * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs
  * will be used for each HugeTLB page. This will allow us to free the remaining
- * 6 pages to the buddy allocator.
+ * 7 pages to the buddy allocator.
  *
  * Here is how things look after remapping.
 *
@@ -134,30 +134,30 @@
  * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  * |           |                     |     0     | -------------> |     0     |
  * |           |                     +-----------+                +-----------+
- * |           |                     |     1     | -------------> |     1     |
- * |           |                     +-----------+                +-----------+
- * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
- * |           |                     +-----------+                   | | | | |
- * |           |                     |     3     | ------------------+ | | | |
- * |           |                     +-----------+                     | | | |
- * |           |                     |     4     | --------------------+ | | |
- * |    PMD    |                     +-----------+                       | | |
- * |   level   |                     |     5     | ----------------------+ | |
- * |  mapping  |                     +-----------+                         | |
- * |           |                     |     6     | ------------------------+ |
- * |           |                     +-----------+                           |
- * |           |                     |     7     | --------------------------+
+ * |           |                     |     1     | ---------------^ ^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                  | | | | | |
+ * |           |                     |     2     | -----------------+ | | | | |
+ * |           |                     +-----------+                    | | | | |
+ * |           |                     |     3     | -------------------+ | | | |
+ * |           |                     +-----------+                      | | | |
+ * |           |                     |     4     | ---------------------+ | | |
+ * |    PMD    |                     +-----------+                        | | |
+ * |   level   |                     |     5     | -----------------------+ | |
+ * |  mapping  |                     +-----------+                          | |
+ * |           |                     |     6     | -------------------------+ |
+ * |           |                     +-----------+                            |
+ * |           |                     |     7     | ---------------------------+
  * |           |                     +-----------+
  * |           |
  * |           |
  * |           |
  * +-----------+
  *
- * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * When a HugeTLB is freed to the buddy system, we should allocate 7 pages for
  * vmemmap pages and restore the previous mapping relationship.
  *
  * For the HugeTLB page of the pud level mapping. It is similar to the former.
- * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ * We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages.
  *
  * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
  * (e.g. aarch64) provides a contiguous bit in the translation table entries
@@ -166,7 +166,13 @@
  *
  * The contiguous bit is used to increase the mapping size at the pmd and pte
  * (last) level. So this type of HugeTLB page can be optimized only when its
- * size of the struct page structs is greater than 2 pages.
+ * size of the struct page structs is greater than 1 page.
+ *
+ * Notice: The head vmemmap page is not freed to the buddy allocator and all
+ * tail vmemmap pages are mapped to the head vmemmap page frame. So we can
+ * see more than one struct page with PG_head (e.g. 8 per 2 MB HugeTLB page)
+ * associated with each HugeTLB page. compound_head() can handle this
+ * correctly (see the comment above compound_head() for more details).
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt

@@ -175,14 +181,16 @@
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
- * page of tail page structures. We map the virtual addresses of the remaining
- * pages of tail page structures to the first tail page struct, and then free
- * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
+ * page of head page structures. We map the virtual addresses of all the pages
+ * of tail page structures to the head page struct, and then free these page
+ * frames. Therefore, we need to reserve one page as vmemmap areas.
  */
-#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

-bool hugetlb_free_vmemmap_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+bool hugetlb_free_vmemmap_enabled __read_mostly =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled);

 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
@@ -236,7 +244,6 @@ int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-
 	if (!ret)
 		ClearHPageVmemmapOptimized(head);

@@ -282,9 +289,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;

 	/*
-	 * The head page and the first tail page are not to be freed to buddy
-	 * allocator, the other pages will map to the first tail page, so they
-	 * can be freed.
+	 * The head page is not to be freed to the buddy allocator; the other
+	 * tail pages will map to the head page, so they can be freed.
 	 *
 	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
 	 * on some architectures (e.g. aarch64). See Documentation/arm64/
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index db6df27c852a..54784d60f19d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -53,6 +53,17 @@ struct vmemmap_remap_walk {
 	struct list_head *vmemmap_pages;
 };

+/*
+ * How many struct page structs need to be reset. When we reuse the head
+ * struct page, the special metadata (e.g. page->flags or page->mapping)
+ * cannot be copied to the tail struct page structs. The invalid values
+ * will be checked in free_tail_pages_check(). To avoid the "corrupted
+ * mapping in tail page" message, we need to reset at least 3 struct
+ * page structs (one head struct page and two tail struct pages).
+ */
+#define NR_RESET_STRUCT_PAGE	3
+
 static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
 				  struct vmemmap_remap_walk *walk)
 {
@@ -245,6 +256,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 	set_pte_at(&init_mm, addr, pte, entry);
 }

+static inline void reset_struct_pages(struct page *start)
+{
+	int i;
+	struct page *from = start + NR_RESET_STRUCT_PAGE;
+
+	for (i = 0; i < NR_RESET_STRUCT_PAGE; i++)
+		memcpy(start + i, from, sizeof(*from));
+}
+
 static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 				struct vmemmap_remap_walk *walk)
 {
@@ -258,6 +278,7 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 	list_del(&page->lru);
 	to = page_to_virt(page);
 	copy_page(to, (void *)walk->reuse_addr);
+	reset_struct_pages(to);

 	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
 }
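To see how the fake-head resolution in this patch behaves, here is a minimal
user-space sketch of the logic in page_fixed_fake_head(). struct fake_page,
PG_HEAD_BIT and the eight-entry layout are simplified stand-ins invented for
illustration; only the encoding (real head pointer with bit 0 set, stored in
page[1].compound_head) mirrors the kernel code.

#include <stdio.h>

/* Simplified stand-in for struct page: only the fields the logic touches. */
struct fake_page {
	unsigned long flags;		/* bit 0 stands in for PG_head */
	unsigned long compound_head;	/* head pointer | 1 on tail pages */
};

#define PG_HEAD_BIT	(1UL << 0)

/*
 * Mirrors page_fixed_fake_head(): a page with PG_head set whose neighbour
 * page[1] carries a compound_head pointer (bit 0 set) is a fake head, so
 * return the real head it points to. The kernel additionally checks that
 * @page is PAGE_SIZE-aligned, which guarantees page[1] lies in the same
 * remapped vmemmap page frame; this sketch simply stays in bounds.
 */
static const struct fake_page *fixed_fake_head(const struct fake_page *page)
{
	if (page->flags & PG_HEAD_BIT) {
		unsigned long head = page[1].compound_head;

		if (head & 1)
			return (const struct fake_page *)(head - 1);
	}
	return page;
}

int main(void)
{
	struct fake_page pages[8] = { 0 };
	int i;

	pages[0].flags = PG_HEAD_BIT;	/* the one real head */
	for (i = 1; i < 8; i++) {
		/* Remapped tail pages also appear to have PG_head set. */
		pages[i].flags = PG_HEAD_BIT;
		pages[i].compound_head = (unsigned long)&pages[0] | 1;
	}

	/* Stop at 6 so page[1] stays inside the array (see comment above). */
	for (i = 0; i < 7; i++)
		printf("page %d resolves to page %ld\n", i,
		       (long)(fixed_fake_head(&pages[i]) - pages));
	return 0;
}

Running the sketch prints "resolves to page 0" for every entry, including the
real head itself, which is exactly the property compound_head() relies on.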
From patchwork Sun Sep 26 03:13:36 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12517887

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 2/5] mm: hugetlb: replace hugetlb_free_vmemmap_enabled with a static_key
Date: Sun, 26 Sep 2021 11:13:36 +0800
Message-Id: <20210926031339.40043-3-songmuchun@bytedance.com>
In-Reply-To: <20210926031339.40043-1-songmuchun@bytedance.com>
References: <20210926031339.40043-1-songmuchun@bytedance.com>

page_fixed_fake_head() is used throughout memory management, and its
conditional check requires reading a global variable. Although the overhead
of this check may be small, it increases when the memory cache comes under
pressure. Also, the global variable will not be modified after system boot,
so it is a very good fit for the static key mechanism.
Signed-off-by: Muchun Song
Reviewed-by: Barry Song
---
 include/linux/hugetlb.h    |  6 ------
 include/linux/page-flags.h | 18 +++++++++++++++---
 mm/hugetlb_vmemmap.c       | 12 ++++++------
 mm/memory_hotplug.c        |  2 +-
 4 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3cbf60464398..a90cc88195da 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1055,12 +1055,6 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 }
 #endif	/* CONFIG_HUGETLB_PAGE */

-#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
-extern bool hugetlb_free_vmemmap_enabled;
-#else
-#define hugetlb_free_vmemmap_enabled	false
-#endif
-
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
 					struct mm_struct *mm, pte_t *pte)
 {
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b49808e748ce..26e540fd3393 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -185,7 +185,14 @@ enum pageflags {
 #ifndef __GENERATING_BOUNDS_H

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
-extern bool hugetlb_free_vmemmap_enabled;
+DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
+			 hugetlb_free_vmemmap_enabled_key);
+
+static __always_inline bool hugetlb_free_vmemmap_enabled(void)
+{
+	return static_branch_maybe(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
+				   &hugetlb_free_vmemmap_enabled_key);
+}

 /*
  * If the feature of freeing some vmemmap pages associated with each HugeTLB
@@ -205,7 +212,7 @@ extern bool hugetlb_free_vmemmap_enabled;
  */
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
-	if (!hugetlb_free_vmemmap_enabled)
+	if (!hugetlb_free_vmemmap_enabled())
 		return page;

 	/*
@@ -229,10 +236,15 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	return page;
 }
 #else
-static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
+static inline const struct page *page_fixed_fake_head(const struct page *page)
 {
 	return page;
 }
+
+static inline bool hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif

 static __always_inline int page_is_fake_head(struct page *page)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f4a8fca691ee..22ecb5e21686 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -188,9 +188,9 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

-bool hugetlb_free_vmemmap_enabled __read_mostly =
-	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
-EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled);
+DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
+			hugetlb_free_vmemmap_enabled_key);
+EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled_key);

 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
@@ -204,9 +204,9 @@ static int __init early_hugetlb_free_vmemmap_param(char *buf)
 		return -EINVAL;

 	if (!strcmp(buf, "on"))
-		hugetlb_free_vmemmap_enabled = true;
+		static_branch_enable(&hugetlb_free_vmemmap_enabled_key);
 	else if (!strcmp(buf, "off"))
-		hugetlb_free_vmemmap_enabled = false;
+		static_branch_disable(&hugetlb_free_vmemmap_enabled_key);
 	else
 		return -EINVAL;

@@ -284,7 +284,7 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));

-	if (!hugetlb_free_vmemmap_enabled)
+	if (!hugetlb_free_vmemmap_enabled())
 		return;

 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4ea91c3ff768..66eaa4e2e76f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1341,7 +1341,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 *       populate a single PMD.
 	 */
 	return memmap_on_memory &&
-	       !hugetlb_free_vmemmap_enabled &&
+	       !hugetlb_free_vmemmap_enabled() &&
 	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
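As a consolidated view of the conversion in the hunks above (a kernel-style
sketch assembled for readability, not a standalone program): the global bool
becomes a jump-label key, the load-and-test becomes a patchable branch, and
the boot parameter flips the key exactly once.

/* Before this patch: every caller tests a global, costing a memory load. */
bool hugetlb_free_vmemmap_enabled __read_mostly;

if (!hugetlb_free_vmemmap_enabled)
	return page;

/* After this patch: the test compiles to a patchable jump, no memory load. */
DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
			hugetlb_free_vmemmap_enabled_key);

static __always_inline bool hugetlb_free_vmemmap_enabled(void)
{
	return static_branch_maybe(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
				   &hugetlb_free_vmemmap_enabled_key);
}

/*
 * The key is flipped at most once, from the early parameter handler; the
 * enable/disable calls rewrite every inlined test site in place:
 */
static_branch_enable(&hugetlb_free_vmemmap_enabled_key);	/* "on"  */
static_branch_disable(&hugetlb_free_vmemmap_enabled_key);	/* "off" */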
From patchwork Sun Sep 26 03:13:37 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12517889

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 3/5] mm: sparsemem: use page table lock to protect kernel pmd operations
Date: Sun, 26 Sep 2021 11:13:37 +0800
Message-Id: <20210926031339.40043-4-songmuchun@bytedance.com>
In-Reply-To: <20210926031339.40043-1-songmuchun@bytedance.com>
References: <20210926031339.40043-1-songmuchun@bytedance.com>

The init_mm.page_table_lock is used to protect kernel page tables, so we
can use it, instead of the mmap write lock, to serialize splitting vmemmap
PMD mappings. This increases the concurrency of vmemmap_remap_free() and
therefore the concurrency of HugeTLB page allocations. But that is not the
only benefit: there are a lot of users of the mmap read lock of init_mm,
and holding the mmap write lock across vmemmap_remap_free() blocks them.
Removing the mmap write lock usage means vmemmap_remap_free() no longer
affects those readers. It makes nothing worse and is always a win.
Signed-off-by: Muchun Song
---
 mm/ptdump.c         | 16 ++++++++++++----
 mm/sparse-vmemmap.c | 49 ++++++++++++++++++++++++++++++++++---------------
 2 files changed, 46 insertions(+), 19 deletions(-)

diff --git a/mm/ptdump.c b/mm/ptdump.c
index da751448d0e4..eea3d28d173c 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -40,8 +40,10 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 0, pgd_val(val));

-	if (pgd_leaf(val))
+	if (pgd_leaf(val)) {
 		st->note_page(st, addr, 0, pgd_val(val));
+		walk->action = ACTION_CONTINUE;
+	}

 	return 0;
 }
@@ -61,8 +63,10 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 1, p4d_val(val));

-	if (p4d_leaf(val))
+	if (p4d_leaf(val)) {
 		st->note_page(st, addr, 1, p4d_val(val));
+		walk->action = ACTION_CONTINUE;
+	}

 	return 0;
 }
@@ -82,8 +86,10 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 2, pud_val(val));

-	if (pud_leaf(val))
+	if (pud_leaf(val)) {
 		st->note_page(st, addr, 2, pud_val(val));
+		walk->action = ACTION_CONTINUE;
+	}

 	return 0;
 }
@@ -101,8 +107,10 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
 	if (st->effective_prot)
 		st->effective_prot(st, 3, pmd_val(val));

-	if (pmd_leaf(val))
+	if (pmd_leaf(val)) {
 		st->note_page(st, addr, 3, pmd_val(val));
+		walk->action = ACTION_CONTINUE;
+	}

 	return 0;
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 54784d60f19d..d486a7a48512 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -64,8 +64,8 @@ struct vmemmap_remap_walk {
  */
 #define NR_RESET_STRUCT_PAGE	3

-static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
-				  struct vmemmap_remap_walk *walk)
+static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
+				    struct vmemmap_remap_walk *walk)
 {
 	pmd_t __pmd;
 	int i;
@@ -87,15 +87,37 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
 		set_pte_at(&init_mm, addr, pte, entry);
 	}

-	/* Make pte visible before pmd. See comment in pmd_install(). */
-	smp_wmb();
-	pmd_populate_kernel(&init_mm, pmd, pgtable);
+	spin_lock(&init_mm.page_table_lock);
+	if (likely(pmd_leaf(*pmd))) {
+		/* Make pte visible before pmd. See comment in pmd_install(). */
+		smp_wmb();
+		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		flush_tlb_kernel_range(start, start + PMD_SIZE);
+		spin_unlock(&init_mm.page_table_lock);

-	flush_tlb_kernel_range(start, start + PMD_SIZE);
+		return 0;
+	}
+	spin_unlock(&init_mm.page_table_lock);
+	pte_free_kernel(&init_mm, pgtable);

 	return 0;
 }

+static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start,
+				  struct vmemmap_remap_walk *walk)
+{
+	int ret;
+
+	spin_lock(&init_mm.page_table_lock);
+	ret = pmd_leaf(*pmd);
+	spin_unlock(&init_mm.page_table_lock);
+
+	if (ret)
+		ret = __split_vmemmap_huge_pmd(pmd, start, walk);
+
+	return ret;
+}
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end,
 			      struct vmemmap_remap_walk *walk)
@@ -132,13 +154,12 @@ static int vmemmap_pmd_range(pud_t *pud, unsigned long addr,

 	pmd = pmd_offset(pud, addr);
 	do {
-		if (pmd_leaf(*pmd)) {
-			int ret;
+		int ret;
+
+		ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK, walk);
+		if (ret)
+			return ret;

-			ret = split_vmemmap_huge_pmd(pmd, addr & PMD_MASK, walk);
-			if (ret)
-				return ret;
-		}
 		next = pmd_addr_end(addr, end);
 		vmemmap_pte_range(pmd, addr, next, walk);
 	} while (pmd++, addr = next, addr != end);
@@ -321,10 +342,8 @@ int vmemmap_remap_free(unsigned long start, unsigned long end,
 	 */
 	BUG_ON(start - reuse != PAGE_SIZE);

-	mmap_write_lock(&init_mm);
+	mmap_read_lock(&init_mm);
 	ret = vmemmap_remap_range(reuse, end, &walk);
-	mmap_write_downgrade(&init_mm);
-
 	if (ret && walk.nr_walked) {
 		end = reuse + walk.nr_walked * PAGE_SIZE;
 		/*
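The split path above is a classic check/lock/re-check pattern:
split_vmemmap_huge_pmd() peeks at pmd_leaf() under init_mm.page_table_lock,
and __split_vmemmap_huge_pmd() re-checks it under the same lock before
committing, so two walkers racing on one PMD cannot both populate it. Below
is a minimal user-space sketch of the same pattern, with a pthread mutex
standing in for the page table lock and a NULL pointer standing in for the
still-unsplit leaf PMD; all names here are invented for illustration.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int *pte_table;	/* NULL plays the role of "still a leaf PMD" */

/* Mirrors __split_vmemmap_huge_pmd(): re-check under the lock before committing. */
static void split(void)
{
	int *pgtable = calloc(512, sizeof(int));	/* pre-built outside the lock */

	pthread_mutex_lock(&table_lock);
	if (pte_table == NULL) {		/* still a leaf: we win the race */
		pte_table = pgtable;
		pthread_mutex_unlock(&table_lock);
		return;
	}
	pthread_mutex_unlock(&table_lock);
	free(pgtable);				/* somebody else split it first */
}

/* Mirrors split_vmemmap_huge_pmd(): cheap locked check skips the common case. */
static void maybe_split(void)
{
	int leaf;

	pthread_mutex_lock(&table_lock);
	leaf = (pte_table == NULL);
	pthread_mutex_unlock(&table_lock);

	if (leaf)
		split();
}

static void *worker(void *arg)
{
	maybe_split();
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	int i;

	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	printf("pte_table populated exactly once: %p\n", (void *)pte_table);
	return 0;
}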
From patchwork Sun Sep 26 03:13:38 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12517891

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 4/5] selftests: vm: add a hugetlb test case
Date: Sun, 26 Sep 2021 11:13:38 +0800
Message-Id: <20210926031339.40043-5-songmuchun@bytedance.com>
In-Reply-To: <20210926031339.40043-1-songmuchun@bytedance.com>
References: <20210926031339.40043-1-songmuchun@bytedance.com>

Since the head vmemmap page frame associated with each HugeTLB page is
reused, we should hide the PG_head flag of the tail struct pages from the
user. Add a test case to check whether it works properly. The test steps
are as follows.

  1) alloc 2MB hugeTLB
  2) get each page frame
  3) apply those APIs in each page frame
  4) Those APIs work completely the same as before.
Reading the flags of a page via /proc/kpageflags is done in
stable_page_flags(), which invokes PageHead(), PageTail(), PageCompound()
and compound_head(). If those APIs work properly, the head page must have
bits 15 and 17 set, and the tail pages must have bits 16 and 17 set but
bit 15 unset. Those flags are checked in check_page_flags().

Signed-off-by: Muchun Song
Reviewed-by: Barry Song
---
 tools/testing/selftests/vm/vmemmap_hugetlb.c | 144 +++++++++++++++++++++++++++
 1 file changed, 144 insertions(+)
 create mode 100644 tools/testing/selftests/vm/vmemmap_hugetlb.c

diff --git a/tools/testing/selftests/vm/vmemmap_hugetlb.c b/tools/testing/selftests/vm/vmemmap_hugetlb.c
new file mode 100644
index 000000000000..557bdbd4f87e
--- /dev/null
+++ b/tools/testing/selftests/vm/vmemmap_hugetlb.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A test case of using hugepage memory in a user application using the
+ * mmap system call with MAP_HUGETLB flag. Before running this program
+ * make sure the administrator has allocated enough default sized huge
+ * pages to cover the 2 MB allocation.
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <sys/mman.h>
+
+#define MAP_LENGTH		(2UL * 1024 * 1024)
+
+#ifndef MAP_HUGETLB
+#define MAP_HUGETLB		0x40000	/* arch specific */
+#endif
+
+#define PAGE_SIZE		4096
+
+#define PAGE_COMPOUND_HEAD	(1UL << 15)
+#define PAGE_COMPOUND_TAIL	(1UL << 16)
+#define PAGE_HUGE		(1UL << 17)
+
+#define HEAD_PAGE_FLAGS		(PAGE_COMPOUND_HEAD | PAGE_HUGE)
+#define TAIL_PAGE_FLAGS		(PAGE_COMPOUND_TAIL | PAGE_HUGE)
+
+#define PM_PFRAME_BITS		55
+#define PM_PFRAME_MASK		~((1UL << PM_PFRAME_BITS) - 1)
+
+/*
+ * For ia64 architecture, Linux kernel reserves Region number 4 for hugepages.
+ * That means the addresses starting with 0x800000... will need to be
+ * specified. Specifying a fixed address is not required on ppc64, i386
+ * or x86_64.
+ */
+#ifdef __ia64__
+#define MAP_ADDR		(void *)(0x8000000000000000UL)
+#define MAP_FLAGS		(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_FIXED)
+#else
+#define MAP_ADDR		NULL
+#define MAP_FLAGS		(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)
+#endif
+
+static void write_bytes(char *addr, size_t length)
+{
+	unsigned long i;
+
+	for (i = 0; i < length; i++)
+		*(addr + i) = (char)i;
+}
+
+static unsigned long virt_to_pfn(void *addr)
+{
+	int fd;
+	unsigned long pagemap;
+
+	fd = open("/proc/self/pagemap", O_RDONLY);
+	if (fd < 0)
+		return -1UL;
+
+	lseek(fd, (unsigned long)addr / PAGE_SIZE * sizeof(pagemap), SEEK_SET);
+	read(fd, &pagemap, sizeof(pagemap));
+	close(fd);
+
+	return pagemap & ~PM_PFRAME_MASK;
+}
+
+static int check_page_flags(unsigned long pfn)
+{
+	int fd, i;
+	unsigned long pageflags;
+
+	fd = open("/proc/kpageflags", O_RDONLY);
+	if (fd < 0)
+		return -1;
+
+	lseek(fd, pfn * sizeof(pageflags), SEEK_SET);
+
+	read(fd, &pageflags, sizeof(pageflags));
+	if ((pageflags & HEAD_PAGE_FLAGS) != HEAD_PAGE_FLAGS) {
+		close(fd);
+		printf("Head page flags (%lx) is invalid\n", pageflags);
+		return -1;
+	}
+
+	/*
+	 * Pages other than the first page must be tail and shouldn't be head;
+	 * this also verifies that the kernel has correctly set the fake page
+	 * heads to tail while hugetlb_free_vmemmap is enabled.
+	 */
+	for (i = 1; i < MAP_LENGTH / PAGE_SIZE; i++) {
+		read(fd, &pageflags, sizeof(pageflags));
+		if ((pageflags & TAIL_PAGE_FLAGS) != TAIL_PAGE_FLAGS ||
+		    (pageflags & HEAD_PAGE_FLAGS) == HEAD_PAGE_FLAGS) {
+			close(fd);
+			printf("Tail page flags (%lx) is invalid\n", pageflags);
+			return -1;
+		}
+	}
+
+	close(fd);
+
+	return 0;
+}
+
+int main(int argc, char **argv)
+{
+	void *addr;
+	unsigned long pfn;
+
+	addr = mmap(MAP_ADDR, MAP_LENGTH, PROT_READ | PROT_WRITE, MAP_FLAGS, -1, 0);
+	if (addr == MAP_FAILED) {
+		perror("mmap");
+		exit(1);
+	}
+
+	/* Trigger allocation of HugeTLB page. */
+	write_bytes(addr, MAP_LENGTH);
+
+	pfn = virt_to_pfn(addr);
+	if (pfn == -1UL) {
+		munmap(addr, MAP_LENGTH);
+		perror("virt_to_pfn");
+		exit(1);
+	}
+
+	printf("Returned address is %p whose pfn is %lx\n", addr, pfn);
+
+	if (check_page_flags(pfn) < 0) {
+		munmap(addr, MAP_LENGTH);
+		perror("check_page_flags");
+		exit(1);
+	}
+
+	/* munmap() length of MAP_HUGETLB memory must be hugepage aligned */
+	if (munmap(addr, MAP_LENGTH)) {
+		perror("munmap");
+		exit(1);
+	}
+
+	return 0;
+}
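For reference, the hard-coded bit numbers 15, 16 and 17 in the test
correspond to KPF_COMPOUND_HEAD, KPF_COMPOUND_TAIL and KPF_HUGE in the
kernel's UAPI header, so a test like this can also use the symbolic names.
A small sketch, assuming <linux/kernel-page-flags.h> is available on the
build host:

#include <stdio.h>
#include <linux/kernel-page-flags.h>	/* KPF_* bit numbers for /proc/kpageflags */

int main(void)
{
	/* The same bits the selftest defines as PAGE_COMPOUND_HEAD/TAIL/HUGE. */
	printf("KPF_COMPOUND_HEAD = %d\n", KPF_COMPOUND_HEAD);	/* 15 */
	printf("KPF_COMPOUND_TAIL = %d\n", KPF_COMPOUND_TAIL);	/* 16 */
	printf("KPF_HUGE          = %d\n", KPF_HUGE);		/* 17 */
	return 0;
}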
From patchwork Sun Sep 26 03:13:39 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12517893

From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 5/5] mm: sparsemem: move vmemmap related to HugeTLB to CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
Date: Sun, 26 Sep 2021 11:13:39 +0800
Message-Id: <20210926031339.40043-6-songmuchun@bytedance.com>
In-Reply-To: <20210926031339.40043-1-songmuchun@bytedance.com>
References: <20210926031339.40043-1-songmuchun@bytedance.com>

vmemmap_remap_free() and vmemmap_remap_alloc() are only relevant to
HugeTLB, so move those functions into the scope of
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
Signed-off-by: Muchun Song
Reviewed-by: Barry Song
---
 include/linux/mm.h  | 2 ++
 mm/sparse-vmemmap.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 00bb2d938df4..a706e7ffda94 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3182,10 +3182,12 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int vmemmap_remap_free(unsigned long start, unsigned long end,
 		       unsigned long reuse);
 int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 			unsigned long reuse, gfp_t gfp_mask);
+#endif

 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d486a7a48512..3c7dd41c3164 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -34,6 +34,7 @@
 #include
 #include

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
@@ -423,6 +424,7 @@ int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 	return 0;
 }
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */

 /*
  * Allocate a block of memory to be used to back the virtual memory map