From patchwork Thu Dec 17 12:13:00 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11979709
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de,
    x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
    luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk,
    akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org,
    pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org,
    oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
    almasrymina@google.com, rientjes@google.com, willy@infradead.org,
    osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
    david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Thu, 17 Dec 2020 20:13:00 +0800
Message-Id: <20201217121303.13386-9-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201217121303.13386-1-songmuchun@bytedance.com>
References: <20201217121303.13386-1-songmuchun@bytedance.com>

Add a kernel parameter hugetlb_free_vmemmap to enable, at boot time, the
freeing of unused vmemmap pages associated with each HugeTLB page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
 5 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..44dde9be7e00 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page. When this option is enabled,
+			we disable PMD/huge page mapping of vmemmap pages, which
+			increases page table pages. So if a user/sysadmin only
+			uses a small number of HugeTLB pages (as a percentage
+			of system memory), they could end up using more memory
+			with hugetlb_free_vmemmap on as opposed to off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled() ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 273816dd95b6..9e9bd458a3f5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,22 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
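
A note for anyone testing this (editorial addendum, not part of the patch):
with CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y, the feature is turned on by booting
with

    hugetlb_free_vmemmap=on

on the kernel command line (for example, appended to the bootloader's kernel
arguments). Omitting the parameter, or passing hugetlb_free_vmemmap=off,
keeps the default behavior of leaving the vmemmap pages in place.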
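
To make the accepted values easy to see at a glance, here is a minimal
userspace sketch of the same accept/reject logic implemented by
early_hugetlb_free_vmemmap_param() above. It is illustration only, not part
of the patch: the name parse_hugetlb_free_vmemmap and the main() harness are
invented for the example.

	/*
	 * Userspace illustration (not kernel code) of the early_param
	 * handler's semantics: "on" enables the feature, "off" is accepted
	 * but changes nothing (the default is already off), and any other
	 * value is rejected with -EINVAL.
	 */
	#include <errno.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	static bool hugetlb_free_vmemmap_enabled;

	/* Mirrors early_hugetlb_free_vmemmap_param(); the name is made up. */
	static int parse_hugetlb_free_vmemmap(const char *buf)
	{
		if (!buf)
			return -EINVAL;

		if (!strcmp(buf, "on"))
			hugetlb_free_vmemmap_enabled = true;
		else if (strcmp(buf, "off"))
			return -EINVAL;	/* anything but "on"/"off" is invalid */

		return 0;
	}

	int main(void)
	{
		const char *args[] = { "on", "off", "1", "" };

		for (unsigned int i = 0; i < sizeof(args) / sizeof(args[0]); i++)
			printf("%-3s -> ret=%d, enabled=%d\n", args[i],
			       parse_hugetlb_free_vmemmap(args[i]),
			       (int)hugetlb_free_vmemmap_enabled);
		return 0;
	}

Note that, like the real handler, the sketch never clears the flag: passing
"off" after "on" leaves the feature enabled, which is harmless in practice
because early_param handlers run once per occurrence of the parameter and the
default is off.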