From patchwork Thu Sep 26 08:27:26 2024
X-Patchwork-Submitter: Baolin Wang
X-Patchwork-Id: 13813030
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
    da.gomez@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 1/2] mm: shmem: add large folio support to the write
 and fallocate paths
Date: Thu, 26 Sep 2024 16:27:26 +0800
X-Mailer: git-send-email 2.39.3

From: Daniel Gomez

Add large folio support for the shmem write and fallocate paths, matching
the same high-order preference mechanism used in the iomap buffered IO
path, as used in __filemap_get_folio().
Add shmem_mapping_size_order() to get a hint for the order of the folio
based on the file size, taking the mapping's requirements into account.
If the top-level huge page option (controlled by
'/sys/kernel/mm/transparent_hugepage/shmem_enabled') is enabled, we just
allow PMD-sized THP, to keep the interface backward compatible.

Co-developed-by: Baolin Wang
Signed-off-by: Daniel Gomez
Signed-off-by: Baolin Wang
---
 mm/shmem.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0613421e09e7..6dece90ff421 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1672,6 +1672,36 @@ bool shmem_hpage_pmd_enabled(void)
 	return false;
 }
 
+/**
+ * shmem_mapping_size_order - Get maximum folio order for the given file size.
+ * @mapping: Target address_space.
+ * @index: The page index.
+ * @size: The suggested size of the folio to create.
+ *
+ * This returns a high order for folios (when supported) based on the file size
+ * which the mapping currently allows at the given index. The index is relevant
+ * due to alignment considerations the mapping might have. The returned order
+ * may be less than the size passed.
+ *
+ * Like __filemap_get_folio order calculation.
+ *
+ * Return: The order.
+ */
+static inline unsigned int
+shmem_mapping_size_order(struct address_space *mapping, pgoff_t index, size_t size)
+{
+	unsigned int order = get_order(max_t(size_t, size, PAGE_SIZE));
+
+	if (!mapping_large_folio_support(mapping))
+		return 0;
+
+	/* If we're not aligned, allocate a smaller folio */
+	if (index & ((1UL << order) - 1))
+		order = __ffs(index);
+
+	return min_t(size_t, order, MAX_PAGECACHE_ORDER);
+}
+
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force)
@@ -1694,11 +1724,26 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	global_huge = shmem_huge_global_enabled(inode, index, write_end,
 						shmem_huge_force, vma, vm_flags);
 	if (!vma || !vma_is_anon_shmem(vma)) {
+		size_t len;
+
+		/*
+		 * For tmpfs, if top level huge page is enabled, we just allow
+		 * PMD sized THP to keep interface backward compatibility.
+		 */
+		if (global_huge)
+			return BIT(HPAGE_PMD_ORDER);
+
+		if (!write_end)
+			return 0;
+
 		/*
-		 * For tmpfs, we now only support PMD sized THP if huge page
-		 * is enabled, otherwise fallback to order 0.
+		 * Otherwise, get a highest order hint based on the size of
+		 * write and fallocate paths, then will try each allowable
+		 * huge orders.
 		 */
-		return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
+		len = write_end - (index << PAGE_SHIFT);
+		order = shmem_mapping_size_order(inode->i_mapping, index, len);
+		return order > 0 ? BIT(order + 1) - 1 : 0;
 	}
 
 	/*

From patchwork Thu Sep 26 08:27:27 2024
X-Patchwork-Submitter: Baolin Wang
X-Patchwork-Id: 13813028
From: Baolin Wang
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
    21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com,
    da.gomez@samsung.com, baolin.wang@linux.alibaba.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 2/2] mm: shmem: use mTHP interface to control huge
 orders for tmpfs
Date: Thu, 26 Sep 2024 16:27:27 +0800
X-Mailer: git-send-email 2.39.3

For the huge orders allowed by writable mmap() faults on tmpfs, the mTHP
interface is used to control the allowable huge orders, while
'huge_shmem_orders_inherit' maintains backward compatibility with the
top-level interface.
For the huge orders allowed by the write() and fallocate() paths on tmpfs,
get a highest-order hint based on the length of the write or fallocate,
then try each allowable huge order, filtered by the mTHP interfaces if
they are set.

Signed-off-by: Baolin Wang
---
 mm/memory.c |  4 ++--
 mm/shmem.c  | 51 ++++++++++++++++++++++++++-------------------------
 2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 2366578015ad..99dd75b84605 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5098,10 +5098,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/*
 	 * Using per-page fault to maintain the uffd semantics, and same
-	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * approach also applies to non shmem/tmpfs faults to avoid
 	 * inflating the RSS of the process.
 	 */
-	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+	if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
 		nr_pages = 1;
 	} else if (nr_pages > 1) {
 		pgoff_t idx = folio_page_idx(folio, page);
diff --git a/mm/shmem.c b/mm/shmem.c
index 6dece90ff421..569d3ab37161 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1721,31 +1721,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
 		return 0;
 
-	global_huge = shmem_huge_global_enabled(inode, index, write_end,
-						shmem_huge_force, vma, vm_flags);
-	if (!vma || !vma_is_anon_shmem(vma)) {
-		size_t len;
-
-		/*
-		 * For tmpfs, if top level huge page is enabled, we just allow
-		 * PMD sized THP to keep interface backward compatibility.
-		 */
-		if (global_huge)
-			return BIT(HPAGE_PMD_ORDER);
-
-		if (!write_end)
-			return 0;
-
-		/*
-		 * Otherwise, get a highest order hint based on the size of
-		 * write and fallocate paths, then will try each allowable
-		 * huge orders.
-		 */
-		len = write_end - (index << PAGE_SHIFT);
-		order = shmem_mapping_size_order(inode->i_mapping, index, len);
-		return order > 0 ? BIT(order + 1) - 1 : 0;
-	}
-
 	/*
 	 * Following the 'deny' semantics of the top level, force the huge
 	 * option off from all mounts.
@@ -1776,9 +1751,35 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	if (vm_flags & VM_HUGEPAGE)
 		mask |= READ_ONCE(huge_shmem_orders_madvise);
 
+	global_huge = shmem_huge_global_enabled(inode, index, write_end,
+						shmem_huge_force, vma, vm_flags);
 	if (global_huge)
 		mask |= READ_ONCE(huge_shmem_orders_inherit);
 
+	/*
+	 * For the huge orders allowed by writable mmap() faults on tmpfs,
+	 * the mTHP interface is used to control the allowable huge orders,
+	 * while 'huge_shmem_orders_inherit' maintains backward compatibility
+	 * with top-level interface.
+	 *
+	 * For the huge orders allowed by write() and fallocate() paths on tmpfs,
+	 * get a highest order hint based on the size of write and fallocate
+	 * paths, then will try each allowable huge orders filtered by the mTHP
+	 * interfaces if set.
+	 */
+	if (!vma && !global_huge) {
+		size_t len;
+
+		if (!write_end)
+			return 0;
+
+		len = write_end - (index << PAGE_SHIFT);
+		order = shmem_mapping_size_order(inode->i_mapping, index, len);
+		if (!mask)
+			return order > 0 ? BIT(order + 1) - 1 : 0;
+
+		mask &= BIT(order + 1) - 1;
+	}
+
 	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
 }