From patchwork Mon Apr 4 07:46:51 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12799962
From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, akpm@linux-foundation.org, david@redhat.com
Cc: linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, smuchun@bytedance.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH 2/3] mm: hugetlb_vmemmap: cleanup hugetlb_free_vmemmap_enabled*
Date: Mon, 4 Apr 2022 15:46:51 +0800
Message-Id: <20220404074652.68024-3-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220404074652.68024-1-songmuchun@bytedance.com>
References: <20220404074652.68024-1-songmuchun@bytedance.com>
MIME-Version: 1.0

The word "free" is not expressive enough to describe the feature of
optimizing the vmemmap pages associated with each HugeTLB page, so rename
this keyword to "optimize".  This patch cleans up the static key and
hugetlb_free_vmemmap_enabled() to make the code more expressive.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/arm64/mm/flush.c      |  2 +-
 include/linux/page-flags.h | 12 ++++++------
 mm/hugetlb_vmemmap.c       | 10 +++++-----
 mm/memory_hotplug.c        |  2 +-
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 1efd01e10cba..d19a13234a81 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -85,7 +85,7 @@ void flush_dcache_page(struct page *page)
 	 * set since the head vmemmap page frame is reused (more details can
 	 * refer to the comments above page_fixed_fake_head()).
 	 */
-	if (hugetlb_free_vmemmap_enabled() && PageHuge(page))
+	if (hugetlb_optimize_vmemmap_enabled() && PageHuge(page))
 		page = compound_head(page);
 
 	if (test_bit(PG_dcache_clean, &page->flags))
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 9f488668a1d7..557d15ef3dc0 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -201,16 +201,16 @@ enum pageflags {
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
-			 hugetlb_free_vmemmap_enabled_key);
+			 hugetlb_optimize_vmemmap_key);
 
-static __always_inline bool hugetlb_free_vmemmap_enabled(void)
+static __always_inline bool hugetlb_optimize_vmemmap_enabled(void)
 {
 	return static_branch_maybe(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
-				   &hugetlb_free_vmemmap_enabled_key);
+				   &hugetlb_optimize_vmemmap_key);
 }
 
 /*
- * If the feature of freeing some vmemmap pages associated with each HugeTLB
+ * If the feature of optimizing vmemmap pages associated with each HugeTLB
  * page is enabled, the head vmemmap page frame is reused and all of the tail
  * vmemmap addresses map to the head vmemmap page frame (furture details can
  * refer to the figure at the head of the mm/hugetlb_vmemmap.c). In other
@@ -227,7 +227,7 @@ static __always_inline bool hugetlb_free_vmemmap_enabled(void)
  */
 static __always_inline const struct page *page_fixed_fake_head(const struct page *page)
 {
-	if (!hugetlb_free_vmemmap_enabled())
+	if (!hugetlb_optimize_vmemmap_enabled())
 		return page;
 
 	/*
@@ -256,7 +256,7 @@ static inline const struct page *page_fixed_fake_head(const struct page *page)
 	return page;
 }
 
-static inline bool hugetlb_free_vmemmap_enabled(void)
+static inline bool hugetlb_optimize_vmemmap_enabled(void)
 {
 	return false;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 91b79b9d9e25..f25294973398 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -189,8 +189,8 @@
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
-			hugetlb_free_vmemmap_enabled_key);
-EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled_key);
+			hugetlb_optimize_vmemmap_key);
+EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
@@ -204,9 +204,9 @@ static int __init hugetlb_vmemmap_early_param(char *buf)
 		return -EINVAL;
 
 	if (!strcmp(buf, "on"))
-		static_branch_enable(&hugetlb_free_vmemmap_enabled_key);
+		static_branch_enable(&hugetlb_optimize_vmemmap_key);
 	else if (!strcmp(buf, "off"))
-		static_branch_disable(&hugetlb_free_vmemmap_enabled_key);
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 	else
 		return -EINVAL;
 
@@ -282,7 +282,7 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
-	if (!hugetlb_free_vmemmap_enabled())
+	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3fb4196094d9..74430f88853d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1273,7 +1273,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 *       populate a single PMD.
 	 */
 	return memmap_on_memory &&
-	       !hugetlb_free_vmemmap_enabled() &&
+	       !hugetlb_optimize_vmemmap_enabled() &&
 	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
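
For readers new to the renamed helper, here is a minimal caller-side sketch
of how the check is typically used. The function check_some_page() below is
hypothetical and not part of this patch; it simply mirrors the
flush_dcache_page() hunk above. Because hugetlb_optimize_vmemmap_enabled()
is backed by a static key, the check is essentially free when the feature
is disabled.

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/page-flags.h>

/* Hypothetical caller, for illustration only (not part of this patch). */
static void check_some_page(struct page *page)
{
	/*
	 * With the vmemmap optimization enabled, the tail struct pages of
	 * a HugeTLB page alias the head vmemmap page frame, so per-page
	 * state must be read from or written to the head page.
	 */
	if (hugetlb_optimize_vmemmap_enabled() && PageHuge(page))
		page = compound_head(page);

	/* ... operate on @page as usual ... */
}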