From patchwork Fri Sep 29 11:44:17 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13404140
From: Ryan Roberts
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
 Yu Zhao, Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying",
 Zi Yan, Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov",
 John Hubbard, David Rientjes, Vlastimil Babka, Hugh Dickins
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 6/9] mm: thp: Add "recommend" option for anon_orders
Date: Fri, 29 Sep 2023 12:44:17 +0100
Message-Id: <20230929114421.3761121-7-ryan.roberts@arm.com>
In-Reply-To: <20230929114421.3761121-1-ryan.roberts@arm.com>
References: <20230929114421.3761121-1-ryan.roberts@arm.com>

In addition to passing a bitfield of folio orders to enable for THP,
allow the string "recommend" to be written, which has the effect of
causing the system to enable the orders preferred by the architecture
and by the mm. The user can see what these orders are by subsequently
reading back the file.

Note that these recommended orders are expected to be static for a
given boot of the system, and so the keyword "auto" was deliberately
not used, as I want to reserve it for a possible future use where the
"best" order is chosen more dynamically at runtime.
Recommended orders are determined as follows:
  - PMD_ORDER: The traditional THP size
  - arch_wants_pte_order() if implemented by the arch
  - PAGE_ALLOC_COSTLY_ORDER: The largest order kept on per-cpu free list

arch_wants_pte_order() can be overridden by the architecture if
desired. Some architectures (e.g. arm64) can coalesce TLB entries if a
contiguous set of ptes map physically contiguous, naturally aligned
memory, so this mechanism allows the architecture to optimize as
required.

Here we add the default implementation of arch_wants_pte_order(), used
when the architecture does not define it, which returns -1, implying
that the HW has no preference.

Signed-off-by: Ryan Roberts
---
 Documentation/admin-guide/mm/transhuge.rst |  4 ++++
 include/linux/pgtable.h                    | 13 +++++++++++++
 mm/huge_memory.c                           | 14 +++++++++++---
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 732c3b2f4ba8..d6363d4efa3a 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -187,6 +187,10 @@ pages (=16K if the page size is 4K). The example above enables order-9
 By enabling multiple orders, allocation of each order will be attempted,
 highest to lowest, until a successful allocation is made. If the PMD-order
 is unset, then no PMD-sized THPs will be allocated.
+It is also possible to enable the recommended set of orders, which
+will be optimized for the architecture and mm::
+
+  echo recommend >/sys/kernel/mm/transparent_hugepage/anon_orders
 
 The kernel will ignore any orders that it does not support so read the
 file back to determine which orders are enabled::

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..0e110ce57cc3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -393,6 +393,19 @@ static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef arch_wants_pte_order
+/*
+ * Returns preferred folio order for pte-mapped memory. Must be in range [0,
+ * PMD_ORDER) and must not be order-1 since THP requires large folios to be at
+ * least order-2. Negative value implies that the HW has no preference and mm
+ * will choose its own default order.
+ */
+static inline int arch_wants_pte_order(void)
+{
+	return -1;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bcecce769017..e2e2d3906a21 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -464,10 +464,18 @@ static ssize_t anon_orders_store(struct kobject *kobj,
 	int err;
 	int ret = count;
 	unsigned int orders;
+	int arch;
 
-	err = kstrtouint(buf, 0, &orders);
-	if (err)
-		ret = -EINVAL;
+	if (sysfs_streq(buf, "recommend")) {
+		arch = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
+		orders = BIT(arch);
+		orders |= BIT(PAGE_ALLOC_COSTLY_ORDER);
+		orders |= BIT(PMD_ORDER);
+	} else {
+		err = kstrtouint(buf, 0, &orders);
+		if (err)
+			ret = -EINVAL;
+	}
 
 	if (ret > 0) {
 		orders &= THP_ORDERS_ALL_ANON;