From patchwork Fri Apr 29 12:18:13 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12831931
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
 mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
 osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com,
 Muchun Song
Subject: [PATCH v9 1/4] mm: hugetlb_vmemmap: disable hugetlb_optimize_vmemmap
 when struct page crosses page boundaries
Date: Fri, 29 Apr 2022 20:18:13 +0800
Message-Id: <20220429121816.37541-2-songmuchun@bytedance.com>
In-Reply-To: <20220429121816.37541-1-songmuchun@bytedance.com>
References: <20220429121816.37541-1-songmuchun@bytedance.com>

If the size of "struct page" is not a power of two but the feature that
minimizes the overhead of struct page associated with each HugeTLB page
is enabled, then the vmemmap pages of HugeTLB will be corrupted after
remapping (in theory, a panic would follow). This can only happen with
!CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional
configuration nowadays, so it is not a real-world issue, just the result
of a code review. But since we cannot prevent anyone from building that
combined configuration, hugetlb_optimize_vmemmap should be disabled in
this case to fix the issue.

Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 mm/hugetlb_vmemmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail

From patchwork Fri Apr 29 12:18:14 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12831932
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
 mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
 osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com,
 Muchun Song
Subject: [PATCH v9 2/4] mm: memory_hotplug: override memmap_on_memory when
 hugetlb_free_vmemmap=on
Date: Fri, 29 Apr 2022 20:18:14 +0800
Message-Id: <20220429121816.37541-3-songmuchun@bytedance.com>
In-Reply-To: <20220429121816.37541-1-songmuchun@bytedance.com>
References: <20220429121816.37541-1-songmuchun@bytedance.com>

When "hugetlb_free_vmemmap=on" and "memory_hotplug.memmap_on_memory" are
both passed on the boot cmdline, the "memmap_on_memory" variable is set
to 1 even though the vmemmap pages will not be allocated from the
hotadded memory, since the former takes precedence over the latter. In
the next patch, we want to enable or disable the feature of freeing
vmemmap pages of HugeTLB via sysctl.
We need a way to know whether memory_hotplug.memmap_on_memory is enabled
when enabling the freeing of vmemmap pages, since those two features are
not compatible; however, the "memmap_on_memory" variable cannot indicate
this today. So do not set "memmap_on_memory" to 1 when both parameters
are passed on the cmdline; then "memmap_on_memory" indicates whether the
feature was actually enabled by the user. Also introduce an
mhp_memmap_on_memory() helper and move the definition of
"memmap_on_memory" into the scope of CONFIG_MHP_MEMMAP_ON_MEMORY. In the
next patch, mhp_memmap_on_memory() will also be exported for use in
hugetlb_vmemmap.c.

Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
---
 mm/memory_hotplug.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 111684878fd9..a6101ae402f9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,14 +42,36 @@
 #include "internal.h"
 #include "shuffle.h"
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
+{
+	if (hugetlb_optimize_vmemmap_enabled())
+		return 0;
+	return param_set_bool(val, kp);
+}
+
+static const struct kernel_param_ops memmap_on_memory_ops = {
+	.flags	= KERNEL_PARAM_OPS_FL_NOARG,
+	.set	= memmap_on_memory_set,
+	.get	= param_get_bool,
+};
+
 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-module_param(memmap_on_memory, bool, 0444);
+module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+
+static inline bool mhp_memmap_on_memory(void)
+{
+	return memmap_on_memory;
+}
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif
 
 enum {
@@ -1263,9 +1285,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 * altmap as an alternative source of memory, and we do not exactly
 	 * populate a single PMD.
 	 */
-	return memmap_on_memory &&
-	       !hugetlb_optimize_vmemmap_enabled() &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	return mhp_memmap_on_memory() &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
 	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
@@ -2083,7 +2103,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
 	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
 	 * the same granularity it was added - a single memory block.
 	 */
-	if (memmap_on_memory) {
+	if (mhp_memmap_on_memory()) {
 		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
 						      get_nr_vmemmap_pages_cb);
 		if (nr_vmemmap_pages) {

From patchwork Fri Apr 29 12:18:15 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12831933
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
 mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
 osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com,
 Muchun Song
Subject: [PATCH v9 3/4] mm: hugetlb_vmemmap: use kstrtobool for
 hugetlb_vmemmap param parsing
Date: Fri, 29 Apr 2022 20:18:15 +0800
Message-Id: <20220429121816.37541-4-songmuchun@bytedance.com>
In-Reply-To: <20220429121816.37541-1-songmuchun@bytedance.com>
References: <20220429121816.37541-1-songmuchun@bytedance.com>

Use kstrtobool() rather than open-coding the "on" and "off" parsing in
mm/hugetlb_vmemmap.c; it handles a wider range of spellings, such as
'Y'/'y'/'1' and 'N'/'n'/'0', as well as [oO][Nn] and [oO][Ff] for "on"
and "off".
Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 Documentation/admin-guide/kernel-parameters.txt |  6 +++---
 mm/hugetlb_vmemmap.c                            | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 308da668bbb1..43b8385073ad 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1703,10 +1703,10 @@
 			enabled.
 			Allows heavy hugetlb users to free up some more
 			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
-			Format: { on | off (default) }
+			Format: { [oO][Nn]/Y/y/1 | [oO][Ff]/N/n/0 (default) }
 
-			on: enable the feature
-			off: disable the feature
+			[oO][Nn]/Y/y/1: enable the feature
+			[oO][Ff]/N/n/0: disable the feature
 
 			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
 			the default is on.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6254bb2d4ae5..cc4ec752ec16 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,15 +28,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	if (!buf)
+	bool enable;
+
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (!strcmp(buf, "on"))
+	if (enable)
 		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else if (!strcmp(buf, "off"))
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 	else
-		return -EINVAL;
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 
 	return 0;
 }

From patchwork Fri Apr 29 12:18:16 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12831934
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
 mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
 osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com,
 Muchun Song
Subject: [PATCH v9 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap
 sysctl
Date: Fri, 29 Apr 2022 20:18:16 +0800
Message-Id: <20220429121816.37541-5-songmuchun@bytedance.com>
In-Reply-To: <20220429121816.37541-1-songmuchun@bytedance.com>
References: <20220429121816.37541-1-songmuchun@bytedance.com>

We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and
reboot the server to enable or disable the feature of optimizing vmemmap
pages associated with HugeTLB pages. However, rebooting usually takes a
long time. So add a sysctl to enable or disable the feature at runtime
without rebooting.

Why do we need this? There are three use cases.

1) The feature of minimizing the overhead of struct page associated with
   each HugeTLB page is disabled by default, without passing
   "hugetlb_free_vmemmap=on" on the boot cmdline. When we (ByteDance)
   deliver servers to users who want to enable this feature, they have
   to configure grub (change the boot cmdline) and reboot the servers,
   and rebooting usually takes a long time (we have thousands of
   servers). It is a very bad experience for the users, so we need an
   approach to enable this feature after the system has booted. This is
   a use case from our production environment.

2) Some workloads allocate HugeTLB pages 'on the fly' instead of pulling
   them from the HugeTLB pool; those workloads are affected when this
   feature is enabled. They can be identified by the fact that they
   never explicitly allocate huge pages with 'nr_hugepages' but only set
   'nr_overcommit_hugepages' and let the pages be allocated from the
   buddy allocator at fault time. Commit 099730d67417 confirms this is a
   real use case. For those workloads, page fault time can be ~2x slower
   than before. We suspect those users want to disable this feature if
   the system enabled it earlier and they do not think the memory
   savings make up for the performance drop.

3) A workload that wants vmemmap pages optimized and a workload that
   sets 'nr_overcommit_hugepages' and does not want the extra fault-time
   overhead when the overcommitted pages are allocated from the buddy
   allocator may be deployed on the same server. The user can enable
   this feature, set 'nr_hugepages' and 'nr_overcommit_hugepages', and
   then disable the feature. In this case, the overcommitted HugeTLB
   pages will not incur the extra overhead at fault time.

Signed-off-by: Muchun Song
---
 Documentation/admin-guide/sysctl/vm.rst | 30 +++++++++++
 include/linux/memory_hotplug.h          |  9 ++++
 mm/hugetlb_vmemmap.c                    | 92 +++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.h                    |  4 +-
 mm/memory_hotplug.c                     |  7 +--
 5 files changed, 126 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 747e325ebcd0..00434789cf26 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -562,6 +562,36 @@
 Change the minimum size of the hugepage pool.
 
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_optimize_vmemmap
+========================
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
+buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
+per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
+optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool
+to the buddy allocator, the vmemmap pages representing that range need to be
+remapped again and the vmemmap pages discarded earlier need to be reallocated
+again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
+never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
+'nr_overcommit_hugepages', those overcommitted HugeTLB pages are allocated 'on
+the fly') instead of being pulled from the HugeTLB pool, you should weigh the
+benefit of memory savings against the extra overhead (~2x slower than before)
+of allocating or freeing HugeTLB pages between the HugeTLB pool and the buddy
+allocator. Another behavior to note is that if the system is under heavy memory
+pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
+pool to the buddy allocator since the allocation of vmemmap pages could fail;
+you have to retry later if your system encounters this situation.
+
+Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
+buddy allocator will not be optimized, meaning the extra overhead at allocation
+time from buddy allocator disappears, whereas already optimized HugeTLB pages
+will not be affected. If you want to make sure there are no optimized HugeTLB
+pages, you can set "nr_hugepages" to 0 first and then disable this.
+
+
 nr_hugepages_mempolicy
 ======================
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 029fb7e26504..917112661b5c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -351,4 +351,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index cc4ec752ec16..5820a681a724 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -10,6 +10,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -22,21 +23,40 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	vmemmap_optimize_mode = to;
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
@@ -60,6 +80,8 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
 
+	VM_BUG_ON_PAGE(!vmemmap_pages, head);
+
 	/*
 	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
 	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
@@ -69,8 +91,10 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
 	return ret;
 }
@@ -84,6 +108,8 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	if (!vmemmap_pages)
 		return;
 
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
 	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
 	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
@@ -93,7 +119,9 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
 		SetHPageVmemmapOptimized(head);
 }
 
@@ -110,9 +138,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >= RESERVE_VMEMMAP_SIZE /
 		     sizeof(struct page));
 
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
-
 	if (!is_power_of_2(sizeof(struct page))) {
 		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
 		static_branch_disable(&hugetlb_optimize_vmemmap_key);
@@ -134,3 +159,52 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
+	 * crosses page boundaries, the vmemmap pages cannot be optimized.
+ */ + if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page))) + register_sysctl_init("vm", hugetlb_vmemmap_sysctls); + + return 0; +} +late_initcall(hugetlb_vmemmap_sysctls_init); +#endif /* CONFIG_PROC_SYSCTL */ diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h index 109b0a53b6fe..19840aa900fd 100644 --- a/mm/hugetlb_vmemmap.h +++ b/mm/hugetlb_vmemmap.h @@ -21,7 +21,9 @@ void hugetlb_vmemmap_init(struct hstate *h); */ static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h) { - return h->optimize_vmemmap_pages; + if (hugetlb_optimize_vmemmap_enabled()) + return h->optimize_vmemmap_pages; + return 0; } #else static inline int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head) diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index a6101ae402f9..c72070cdd055 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -63,15 +63,10 @@ static bool memmap_on_memory __ro_after_init; module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444); MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug"); -static inline bool mhp_memmap_on_memory(void) +bool mhp_memmap_on_memory(void) { return memmap_on_memory; } -#else -static inline bool mhp_memmap_on_memory(void) -{ - return false; -} #endif enum {