From patchwork Thu Aug 15 09:08:40 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13764594
From: "Pankaj Raghav (Samsung)"
To: brauner@kernel.org, akpm@linux-foundation.org
Cc: chandan.babu@oracle.com, linux-fsdevel@vger.kernel.org, djwong@kernel.org,
    hare@suse.de, gost.dev@samsung.com, linux-xfs@vger.kernel.org,
    kernel@pankajraghav.com, hch@lst.de, david@fromorbit.com, Zi Yan,
    yang@os.amperecomputing.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, willy@infradead.org, john.g.garry@oracle.com,
    cl@os.amperecomputing.com, p.raghav@samsung.com, mcgrof@kernel.org,
    ryan.roberts@arm.com
Subject: [PATCH v12 01/10] fs: Allow fine-grained control of folio sizes
Date: Thu, 15 Aug 2024 11:08:40 +0200
Message-ID: <20240815090849.972355-2-kernel@pankajraghav.com>
In-Reply-To: <20240815090849.972355-1-kernel@pankajraghav.com>
References: <20240815090849.972355-1-kernel@pankajraghav.com>

From: "Matthew Wilcox (Oracle)"

We need filesystems to be able to communicate acceptable folio sizes
to the pagecache for a variety of uses (e.g. large block sizes).
Support a range of folio sizes between order-0 and order-31.

Signed-off-by: Matthew Wilcox (Oracle)
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Darrick J. Wong
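To make the encoding concrete (an illustrative note, not part of the
patch itself): the minimum and maximum orders are each stored in a
5-bit field of mapping->flags (bits 16-20 and 21-25 respectively), so
each field can describe folios from order-0 up to order-31. With a
4KiB PAGE_SIZE, an order-n folio spans PAGE_SIZE << n bytes, e.g.:

	min order 0, max order 0  -> 4KiB folios only (today's default)
	min order 2, max order 8  -> folios between 16KiB and 1MiB
	min order 4               -> folios of at least 64KiB

In practice both values are clamped to MAX_PAGECACHE_ORDER by
mapping_set_folio_order_range() below.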
---
 include/linux/pagemap.h | 89 ++++++++++++++++++++++++++++++++++-------
 mm/filemap.c            |  6 +--
 mm/readahead.c          |  4 +-
 3 files changed, 79 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d9c7edb6422bd..75bbe88b89904 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,14 +204,20 @@ enum mapping_flags {
 	AS_EXITING	= 4, 	/* final truncate in progress */
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
-	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
-	AS_STABLE_WRITES,	/* must wait for writeback before modifying
+	AS_RELEASE_ALWAYS = 6,	/* Call ->release_folio(), even if no private data */
+	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying
 				   folio contents */
-	AS_INACCESSIBLE,	/* Do not attempt direct R/W access to the mapping,
-				   including to move the mapping */
+	AS_INACCESSIBLE = 8,	/* Do not attempt direct R/W access to the mapping */
+	/* Bits 16-25 are used for FOLIO_ORDER */
+	AS_FOLIO_ORDER_BITS = 5,
+	AS_FOLIO_ORDER_MIN = 16,
+	AS_FOLIO_ORDER_MAX = AS_FOLIO_ORDER_MIN + AS_FOLIO_ORDER_BITS,
 };
 
+#define AS_FOLIO_ORDER_MASK     ((1u << AS_FOLIO_ORDER_BITS) - 1)
+#define AS_FOLIO_ORDER_MIN_MASK (AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MIN)
+#define AS_FOLIO_ORDER_MAX_MASK (AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MAX)
+
 /**
  * mapping_set_error - record a writeback error in the address_space
  * @mapping: the mapping in which an error should be set
@@ -367,9 +373,51 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 #define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)
 #define MAX_PAGECACHE_ORDER	min(MAX_XAS_ORDER, PREFERRED_MAX_PAGECACHE_ORDER)
 
+/*
+ * mapping_set_folio_order_range() - Set the orders supported by a file.
+ * @mapping: The address space of the file.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between @min-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size (min) and maximum size (max) of folio the VFS
+ * can use to cache the contents of the file.  This should only be used
+ * if the filesystem needs special handling of folio sizes (ie there is
+ * something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_order_range(struct address_space *mapping,
+						 unsigned int min,
+						 unsigned int max)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return;
+
+	if (min > MAX_PAGECACHE_ORDER)
+		min = MAX_PAGECACHE_ORDER;
+
+	if (max > MAX_PAGECACHE_ORDER)
+		max = MAX_PAGECACHE_ORDER;
+
+	if (max < min)
+		max = min;
+
+	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+		(min << AS_FOLIO_ORDER_MIN) | (max << AS_FOLIO_ORDER_MAX);
+}
+
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+					       unsigned int min)
+{
+	mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+}
+
 /**
  * mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address space of the file.
  *
  * The filesystem should call this function in its inode constructor to
  * indicate that the VFS can use large folios to cache the contents of
@@ -380,7 +428,23 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
-	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline unsigned int
+mapping_max_folio_order(const struct address_space *mapping)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 0;
+	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int
+mapping_min_folio_order(const struct address_space *mapping)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 0;
+	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
 }
 
 /*
@@ -389,20 +453,17 @@ static inline void mapping_set_large_folios(struct address_space *mapping)
  */
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
-	/* AS_LARGE_FOLIO_SUPPORT is only reasonable for pagecache folios */
+	/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
 	VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
 			"Anonymous mapping always supports large folio");
 
-	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	return mapping_max_folio_order(mapping) > 0;
 }
 
 /* Return the maximum folio size for this pagecache mapping, in bytes. */
-static inline size_t mapping_max_folio_size(struct address_space *mapping)
+static inline size_t mapping_max_folio_size(const struct address_space *mapping)
 {
-	if (mapping_large_folio_support(mapping))
-		return PAGE_SIZE << MAX_PAGECACHE_ORDER;
-	return PAGE_SIZE;
+	return PAGE_SIZE << mapping_max_folio_order(mapping);
 }
 
 static inline int filemap_nr_thps(struct address_space *mapping)
diff --git a/mm/filemap.c b/mm/filemap.c
index 29fec1fccd0a6..6c4489ada3ecc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1933,10 +1933,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
 			fgp_flags |= FGP_LOCK;
 
-		if (!mapping_large_folio_support(mapping))
-			order = 0;
-		if (order > MAX_PAGECACHE_ORDER)
-			order = MAX_PAGECACHE_ORDER;
+		if (order > mapping_max_folio_order(mapping))
+			order = mapping_max_folio_order(mapping);
 		/* If we're not aligned, allocate a smaller folio */
 		if (index & ((1UL << order) - 1))
 			order = __ffs(index);
diff --git a/mm/readahead.c b/mm/readahead.c
index 517c0be7ce665..3e5239e9e1777 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -449,10 +449,10 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
 	limit = min(limit, index + ra->size - 1);
 
-	if (new_order < MAX_PAGECACHE_ORDER)
+	if (new_order < mapping_max_folio_order(mapping))
 		new_order += 2;
 
-	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+	new_order = min(mapping_max_folio_order(mapping), new_order);
 	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 
 	/* See comment in page_cache_ra_unbounded() */
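
As a usage sketch (illustrative only: the filesystem-side function below is
hypothetical and not part of this series; only mapping_set_folio_order_range()
and MAX_PAGECACHE_ORDER come from the patch above, the rest is ordinary kernel
infrastructure), a block-size-aware filesystem could wire the new API into its
inode initialisation path roughly like this:

	/*
	 * Hypothetical example: tell the page cache that this inode must be
	 * cached in folios at least as large as the filesystem block size,
	 * and at most as large as the pagecache allows.
	 */
	static void example_set_folio_orders(struct inode *inode,
					     unsigned int blocksize)
	{
		unsigned int min_order = 0;

		/* e.g. blocksize 16384 with 4KiB pages gives min_order = 2 */
		if (blocksize > PAGE_SIZE)
			min_order = ilog2(blocksize) - PAGE_SHIFT;

		mapping_set_folio_order_range(inode->i_mapping, min_order,
					      MAX_PAGECACHE_ORDER);
	}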