From patchwork Mon Jul 15 09:44:48 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733194
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
 djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com,
 linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org,
 hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com,
 cl@os.amperecomputing.com, linux-xfs@vger.kernel.org,
 kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 01/10] fs: Allow fine-grained control of folio sizes
Date: Mon, 15 Jul 2024 11:44:48 +0200
Message-ID: <20240715094457.452836-2-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
MIME-Version: 1.0
From: "Matthew Wilcox (Oracle)"

We need filesystems to be able to communicate acceptable folio sizes
to the pagecache for a variety of uses (e.g. large block sizes).
Support a range of folio sizes between order-0 and order-31.

Signed-off-by: Matthew Wilcox (Oracle)
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Darrick J. Wong
---
 include/linux/pagemap.h | 107 +++++++++++++++++++++++++++++++++++-----
 mm/filemap.c            |   6 +--
 mm/readahead.c          |   4 +-
 3 files changed, 98 insertions(+), 19 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8026a8a433d36..8d2b5c51461b0 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,14 +204,21 @@ enum mapping_flags {
 	AS_EXITING = 4,		/* final truncate in progress */
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
-	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
-	AS_STABLE_WRITES,	/* must wait for writeback before modifying
+	AS_RELEASE_ALWAYS = 6,	/* Call ->release_folio(), even if no private data */
+	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying
 				   folio contents */
-	AS_UNMOVABLE,		/* The mapping cannot be moved, ever */
-	AS_INACCESSIBLE,	/* Do not attempt direct R/W access to the mapping */
+	AS_UNMOVABLE = 8,	/* The mapping cannot be moved, ever */
+	AS_INACCESSIBLE = 9,	/* Do not attempt direct R/W access to the mapping */
+	/* Bits 16-25 are used for FOLIO_ORDER */
+	AS_FOLIO_ORDER_BITS = 5,
+	AS_FOLIO_ORDER_MIN = 16,
+	AS_FOLIO_ORDER_MAX = AS_FOLIO_ORDER_MIN + AS_FOLIO_ORDER_BITS,
 };
 
+#define AS_FOLIO_ORDER_MASK	((1u << AS_FOLIO_ORDER_BITS) - 1)
+#define AS_FOLIO_ORDER_MIN_MASK	(AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MIN)
+#define AS_FOLIO_ORDER_MAX_MASK	(AS_FOLIO_ORDER_MASK << AS_FOLIO_ORDER_MAX)
+
 /**
  * mapping_set_error - record a writeback error in the address_space
  * @mapping: the mapping in which an error should be set
@@ -367,9 +374,70 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
 #define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)
 #define MAX_PAGECACHE_ORDER	min(MAX_XAS_ORDER, PREFERRED_MAX_PAGECACHE_ORDER)
 
+/*
+ * mapping_max_folio_size_supported() - Check the max folio size supported
+ *
+ * The filesystem should call this function at mount time if there is a
+ * requirement on the folio mapping size in the page cache.
+ */
+static inline size_t mapping_max_folio_size_supported(void)
+{
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);
+	return PAGE_SIZE;
+}
+
+/*
+ * mapping_set_folio_order_range() - Set the orders supported by a file.
+ * @mapping: The address space of the file.
+ * @min: Minimum folio order (between 0-MAX_PAGECACHE_ORDER inclusive).
+ * @max: Maximum folio order (between @min-MAX_PAGECACHE_ORDER inclusive).
+ *
+ * The filesystem should call this function in its inode constructor to
+ * indicate which base size (min) and maximum size (max) of folio the VFS
+ * can use to cache the contents of the file. This should only be used
+ * if the filesystem needs special handling of folio sizes (ie there is
+ * something the core cannot know).
+ * Do not tune it based on, eg, i_size.
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_set_folio_order_range(struct address_space *mapping,
+						 unsigned int min,
+						 unsigned int max)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return;
+
+	if (min > MAX_PAGECACHE_ORDER) {
+		VM_WARN_ONCE(1,
+	"min order > MAX_PAGECACHE_ORDER. Setting min_order to MAX_PAGECACHE_ORDER");
+		min = MAX_PAGECACHE_ORDER;
+	}
+
+	if (max > MAX_PAGECACHE_ORDER) {
+		VM_WARN_ONCE(1,
+	"max order > MAX_PAGECACHE_ORDER. Setting max_order to MAX_PAGECACHE_ORDER");
+		max = MAX_PAGECACHE_ORDER;
+	}
+
+	if (max < min)
+		max = min;
+
+	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
+		(min << AS_FOLIO_ORDER_MIN) | (max << AS_FOLIO_ORDER_MAX);
+}
+
+static inline void mapping_set_folio_min_order(struct address_space *mapping,
+					       unsigned int min)
+{
+	mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+}
+
 /**
  * mapping_set_large_folios() - Indicate the file supports large folios.
- * @mapping: The file.
+ * @mapping: The address space of the file.
  *
  * The filesystem should call this function in its inode constructor to
  * indicate that the VFS can use large folios to cache the contents of
@@ -380,7 +448,23 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
-	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
+}
+
+static inline unsigned int
+mapping_max_folio_order(const struct address_space *mapping)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 0;
+	return (mapping->flags & AS_FOLIO_ORDER_MAX_MASK) >> AS_FOLIO_ORDER_MAX;
+}
+
+static inline unsigned int
+mapping_min_folio_order(const struct address_space *mapping)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return 0;
+	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
 }
 
 /*
@@ -393,16 +477,13 @@ static inline bool mapping_large_folio_support(struct address_space *mapping)
 	VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
 		     "Anonymous mapping always supports large folio");
 
-	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+	return mapping_max_folio_order(mapping) > 0;
 }
 
 /* Return the maximum folio size for this pagecache mapping, in bytes. */
-static inline size_t mapping_max_folio_size(struct address_space *mapping)
+static inline size_t mapping_max_folio_size(const struct address_space *mapping)
{
-	if (mapping_large_folio_support(mapping))
-		return PAGE_SIZE << MAX_PAGECACHE_ORDER;
-	return PAGE_SIZE;
+	return PAGE_SIZE << mapping_max_folio_order(mapping);
 }
 
 static inline int filemap_nr_thps(struct address_space *mapping)
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b910..ad5e4a848070e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1933,10 +1933,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
 			fgp_flags |= FGP_LOCK;
 
-		if (!mapping_large_folio_support(mapping))
-			order = 0;
-		if (order > MAX_PAGECACHE_ORDER)
-			order = MAX_PAGECACHE_ORDER;
+		if (order > mapping_max_folio_order(mapping))
+			order = mapping_max_folio_order(mapping);
 		/* If we're not aligned, allocate a smaller folio */
 		if (index & ((1UL << order) - 1))
 			order = __ffs(index);
diff --git a/mm/readahead.c b/mm/readahead.c
index 517c0be7ce665..3e5239e9e1777 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -449,10 +449,10 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
 	limit = min(limit, index + ra->size - 1);
 
-	if (new_order < MAX_PAGECACHE_ORDER)
+	if (new_order < mapping_max_folio_order(mapping))
 		new_order += 2;
 
-	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+	new_order = min(mapping_max_folio_order(mapping), new_order);
 	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 
 	/* See comment in page_cache_ra_unbounded() */

From patchwork Mon Jul 15 09:44:49 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733195
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
 djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com,
 linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org,
 hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com,
 cl@os.amperecomputing.com, linux-xfs@vger.kernel.org,
 kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 02/10] filemap: allocate mapping_min_order folios in the page cache
Date: Mon, 15 Jul 2024 11:44:49 +0200
Message-ID: <20240715094457.452836-3-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
MIME-Version: 1.0

From: Pankaj Raghav

filemap_create_folio() and do_read_cache_folio() always allocated
folios of order 0. __filemap_get_folio() tried to allocate higher-order
folios when fgp_flags carried a higher-order hint, but would fall back
to an order-0 folio if the higher-order memory allocation failed.

Supporting mapping_min_order implies that we guarantee each folio in
the page cache has at least an order of mapping_min_order. When adding
new folios to the page cache we must also ensure the index used is
aligned to the mapping_min_order, as the page cache requires the index
to be aligned to the order of the folio.

Co-developed-by: Luis Chamberlain
Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Darrick J. Wong
Reviewed-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 20 ++++++++++++++++++++
 mm/filemap.c            | 24 ++++++++++++++--------
 2 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8d2b5c51461b0..68edbea9ae25a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -467,6 +467,26 @@ mapping_min_folio_order(const struct address_space *mapping)
 	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
 }
 
+static inline unsigned long
+mapping_min_folio_nrpages(struct address_space *mapping)
+{
+	return 1UL << mapping_min_folio_order(mapping);
+}
+
+/**
+ * mapping_align_index() - Align index for this mapping.
+ * @mapping: The address_space.
+ *
+ * The index of a folio must be naturally aligned. If you are adding a
+ * new folio to the page cache and need to know what index to give it,
+ * call this function.
+ */
+static inline pgoff_t mapping_align_index(struct address_space *mapping,
+					  pgoff_t index)
+{
+	return round_down(index, mapping_min_folio_nrpages(mapping));
+}
+
 /*
  * Large folio support currently depends on THP. These dependencies are
  * being worked on but are not yet fixed.
diff --git a/mm/filemap.c b/mm/filemap.c
index ad5e4a848070e..d27e9ac54309d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -859,6 +859,8 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_order(folio) < mapping_min_folio_order(mapping),
+			folio);
 	mapping_set_update(&xas, mapping);
 
 	VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
@@ -1919,8 +1921,10 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		folio_wait_stable(folio);
 no_page:
 	if (!folio && (fgp_flags & FGP_CREAT)) {
-		unsigned order = FGF_GET_ORDER(fgp_flags);
+		unsigned int min_order = mapping_min_folio_order(mapping);
+		unsigned int order = max(min_order, FGF_GET_ORDER(fgp_flags));
 		int err;
+		index = mapping_align_index(mapping, index);
 
 		if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
 			gfp |= __GFP_WRITE;
@@ -1943,7 +1947,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			gfp_t alloc_gfp = gfp;
 
 			err = -ENOMEM;
-			if (order > 0)
+			if (order > min_order)
 				alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
 			folio = filemap_alloc_folio(alloc_gfp, order);
 			if (!folio)
@@ -1958,7 +1962,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 				break;
 			folio_put(folio);
 			folio = NULL;
-		} while (order-- > 0);
+		} while (order-- > min_order);
 
 		if (err == -EEXIST)
 			goto repeat;
@@ -2447,13 +2451,15 @@ static int filemap_update_page(struct kiocb *iocb,
 }
 
 static int filemap_create_folio(struct file *file,
-		struct address_space *mapping, pgoff_t index,
+		struct address_space *mapping, loff_t pos,
 		struct folio_batch *fbatch)
 {
 	struct folio *folio;
 	int error;
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	pgoff_t index;
 
-	folio = filemap_alloc_folio(mapping_gfp_mask(mapping), 0);
+	folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order);
 	if (!folio)
 		return -ENOMEM;
@@ -2471,6 +2477,7 @@ static int filemap_create_folio(struct file *file,
 	 * well to keep locking rules simple.
 	 */
 	filemap_invalidate_lock_shared(mapping);
+	index = (pos >> (PAGE_SHIFT + min_order)) << min_order;
 	error = filemap_add_folio(mapping, folio, index,
 			mapping_gfp_constraint(mapping, GFP_KERNEL));
 	if (error == -EEXIST)
@@ -2531,8 +2538,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
 	if (!folio_batch_count(fbatch)) {
 		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
 			return -EAGAIN;
-		err = filemap_create_folio(filp, mapping,
-				iocb->ki_pos >> PAGE_SHIFT, fbatch);
+		err = filemap_create_folio(filp, mapping, iocb->ki_pos, fbatch);
 		if (err == AOP_TRUNCATED_PAGE)
 			goto retry;
 		return err;
@@ -3748,9 +3754,11 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
 repeat:
 	folio = filemap_get_folio(mapping, index);
 	if (IS_ERR(folio)) {
-		folio = filemap_alloc_folio(gfp, 0);
+		folio = filemap_alloc_folio(gfp,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			return ERR_PTR(-ENOMEM);
+		index = mapping_align_index(mapping, index);
 		err = filemap_add_folio(mapping, folio, index, gfp);
 		if (unlikely(err)) {
 			folio_put(folio);

From patchwork Mon Jul 15 09:44:50 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733196
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
 djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com,
 linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org,
 hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com,
 cl@os.amperecomputing.com, linux-xfs@vger.kernel.org,
 kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 03/10] readahead: allocate folios with mapping_min_order in readahead
Date: Mon, 15 Jul 2024 11:44:50 +0200
Message-ID: <20240715094457.452836-4-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
MIME-Version: 1.0

From: Pankaj Raghav

page_cache_ra_unbounded() was allocating single pages (order-0 folios)
if there was no folio found in an index.
Allocate mapping_min_order folios, as we need to guarantee the minimum order if it is set.

page_cache_ra_order() tries to allocate a folio to the page cache with a higher order if the index aligns with that order. Modify it so that the order does not go below the mapping_min_order requirement of the page cache. This function will do the right thing even if the new_order passed is less than the mapping_min_order.

When adding new folios to the page cache we must also ensure the index used is aligned to the mapping_min_order, as the page cache requires the index to be aligned to the order of the folio.

readahead_expand() is called from readahead aops to extend the range of the readahead, so this function can assume ractl->_index to be aligned with min_order.

Signed-off-by: Pankaj Raghav
Co-developed-by: Hannes Reinecke
Signed-off-by: Hannes Reinecke
Acked-by: Darrick J. Wong
---
 mm/readahead.c | 79 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 61 insertions(+), 18 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 3e5239e9e1777..2078c42777a62 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -206,9 +206,10 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		unsigned long nr_to_read, unsigned long lookahead_size)
 {
 	struct address_space *mapping = ractl->mapping;
-	unsigned long index = readahead_index(ractl);
+	unsigned long ra_folio_index, index = readahead_index(ractl);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long i;
+	unsigned long mark, i = 0;
+	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
 
 	/*
 	 * Partway through the readahead operation, we will have added
@@ -223,10 +224,24 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	unsigned int nofs = memalloc_nofs_save();
 
 	filemap_invalidate_lock_shared(mapping);
+	index = mapping_align_index(mapping, index);
+
+	/*
+	 * As iterator `i` is aligned to min_nrpages, round_up the
+	 * difference between nr_to_read and lookahead_size to mark the
+	 * index that only has lookahead or "async_region" to set the
+	 * readahead flag.
+	 */
+	ra_folio_index = round_up(readahead_index(ractl) + nr_to_read - lookahead_size,
+				  min_nrpages);
+	mark = ra_folio_index - index;
+	nr_to_read += readahead_index(ractl) - index;
+	ractl->_index = index;
+
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
-	for (i = 0; i < nr_to_read; i++) {
+	while (i < nr_to_read) {
 		struct folio *folio = xa_load(&mapping->i_pages, index + i);
 		int ret;
 
@@ -240,12 +255,13 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl);
-			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			ractl->_index += min_nrpages;
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask,
+					    mapping_min_folio_order(mapping));
 		if (!folio)
 			break;
 
@@ -255,14 +271,15 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			if (ret == -ENOMEM)
 				break;
 			read_pages(ractl);
-			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			ractl->_index += min_nrpages;
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
-		if (i == nr_to_read - lookahead_size)
+		if (i == mark)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
-		ractl->_nr_pages++;
+		ractl->_nr_pages += min_nrpages;
+		i += min_nrpages;
 	}
 
 	/*
@@ -438,13 +455,19 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	struct address_space *mapping = ractl->mapping;
 	pgoff_t start = readahead_index(ractl);
 	pgoff_t index = start;
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	unsigned int nofs;
 	int err = 0;
 	gfp_t gfp = readahead_gfp_mask(mapping);
+	unsigned int min_ra_size = max(4, mapping_min_folio_nrpages(mapping));
 
-	if (!mapping_large_folio_support(mapping) || ra->size < 4)
+	/*
+	 * Fallback when size < min_nrpages as each folio should be
+	 * at least min_nrpages anyway.
+	 */
+	if (!mapping_large_folio_support(mapping) || ra->size < min_ra_size)
 		goto fallback;
 
 	limit = min(limit, index + ra->size - 1);
@@ -454,10 +477,19 @@ void page_cache_ra_order(struct readahead_control *ractl,
 
 	new_order = min(mapping_max_folio_order(mapping), new_order);
 	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
+	new_order = max(new_order, min_order);
 
 	/* See comment in page_cache_ra_unbounded() */
 	nofs = memalloc_nofs_save();
 	filemap_invalidate_lock_shared(mapping);
+	/*
+	 * If the new_order is greater than min_order and index is
+	 * already aligned to new_order, then this will be noop as index
+	 * aligned to new_order should also be aligned to min_order.
+	 */
+	ractl->_index = mapping_align_index(mapping, index);
+	index = readahead_index(ractl);
+
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -465,7 +497,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		if (index & ((1UL << order) - 1))
 			order = __ffs(index);
 		/* Don't allocate pages past EOF */
-		while (index + (1UL << order) - 1 > limit)
+		while (order > min_order && index + (1UL << order) - 1 > limit)
 			order--;
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
@@ -703,8 +735,15 @@ void readahead_expand(struct readahead_control *ractl,
 	struct file_ra_state *ra = ractl->ra;
 	pgoff_t new_index, new_nr_pages;
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	unsigned long min_nrpages = mapping_min_folio_nrpages(mapping);
+	unsigned int min_order = mapping_min_folio_order(mapping);
 
 	new_index = new_start / PAGE_SIZE;
+	/*
+	 * Readahead code should have aligned the ractl->_index to
+	 * min_nrpages before calling readahead aops.
+	 */
+	VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages));
 
 	/* Expand the leading edge downwards */
 	while (ractl->_index > new_index) {
@@ -714,9 +753,11 @@ void readahead_expand(struct readahead_control *ractl,
 		if (folio && !xa_is_value(folio))
 			return; /* Folio apparently present */
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask, min_order);
 		if (!folio)
 			return;
+
+		index = mapping_align_index(mapping, index);
 		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
 			folio_put(folio);
 			return;
@@ -726,7 +767,7 @@ void readahead_expand(struct readahead_control *ractl,
 			ractl->_workingset = true;
 			psi_memstall_enter(&ractl->_pflags);
 		}
-		ractl->_nr_pages++;
+		ractl->_nr_pages += min_nrpages;
 		ractl->_index = folio->index;
 	}
 
@@ -741,9 +782,11 @@ void readahead_expand(struct readahead_control *ractl,
 		if (folio && !xa_is_value(folio))
 			return; /* Folio apparently present */
 
-		folio = filemap_alloc_folio(gfp_mask, 0);
+		folio = filemap_alloc_folio(gfp_mask, min_order);
 		if (!folio)
 			return;
+
+		index = mapping_align_index(mapping, index);
 		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
 			folio_put(folio);
 			return;
@@ -753,10 +796,10 @@ void readahead_expand(struct readahead_control *ractl,
 			ractl->_workingset = true;
 			psi_memstall_enter(&ractl->_pflags);
 		}
-		ractl->_nr_pages++;
 
 		if (ra) {
-			ra->size++;
-			ra->async_size++;
+			ra->size += min_nrpages;
+			ra->async_size += min_nrpages;
 		}
 	}
 }

From patchwork Mon Jul 15 09:44:51 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733197
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com,
linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 04/10] mm: split a folio in minimum folio order chunks
Date: Mon, 15 Jul 2024 11:44:51 +0200
Message-ID: <20240715094457.452836-5-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Luis Chamberlain

split_folio() and split_folio_to_list() assume order 0. To support min order for non-anonymous folios, we must expand these to check the folio mapping order and use that.

Set new_order to be at least the minimum folio order, if one is set, in split_huge_page_to_list_to_order() so that we can maintain the minimum folio order requirement in the page cache.

Update the debugfs write files used for testing to ensure the order is respected as well. We simply enforce the min order when a file mapping is used.

Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Zi Yan
---
 include/linux/huge_mm.h | 14 +++++++---
 mm/huge_memory.c        | 59 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 65 insertions(+), 8 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cee3c5da8f0ed..b6024bf39a9fe 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -90,6 +90,8 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
 #define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
 	(!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))
 
+#define split_folio(f) split_folio_to_list(f, NULL)
+
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 #define HPAGE_PMD_SHIFT PMD_SHIFT
 #define HPAGE_PUD_SHIFT PUD_SHIFT
@@ -323,9 +325,10 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		unsigned int new_order);
+int split_folio_to_list(struct folio *folio, struct list_head *list);
 static inline int split_huge_page(struct page *page)
 {
-	return split_huge_page_to_list_to_order(page, NULL, 0);
+	return split_folio(page_folio(page));
 }
 void deferred_split_folio(struct folio *folio);
 
@@ -490,6 +493,12 @@ static inline int split_huge_page(struct page *page)
 {
 	return 0;
 }
+
+static inline int split_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	return 0;
+}
+
 static inline void deferred_split_folio(struct folio *folio) {}
 #define split_huge_pmd(__vma, __pmd, __address)	\
 	do { } while (0)
@@ -604,7 +613,4 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
 	return split_folio_to_list_to_order(folio, NULL, new_order);
 }
 
-#define split_folio_to_list(f, l) split_folio_to_list_to_order(f, l, 0)
-#define split_folio(f) split_folio_to_order(f, 0)
-
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 251d6932130fa..af080296e11b3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3062,6 +3062,9 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
  * released, or if some unexpected race happened (e.g., anon VMA disappeared,
  * truncation).
  *
+ * Callers should ensure that the order respects the address space mapping
+ * min-order if one is set for non-anonymous folios.
+ *
  * Returns -EINVAL when trying to split to an order that is incompatible
  * with the folio. Splitting to order 0 is compatible with all folios.
 */
@@ -3143,6 +3146,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
+		unsigned int min_order;
 		gfp_t gfp;
 
 		mapping = folio->mapping;
@@ -3153,6 +3157,14 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 			goto out;
 		}
 
+		min_order = mapping_min_folio_order(folio->mapping);
+		if (new_order < min_order) {
+			VM_WARN_ONCE(1, "Cannot split mapped folio below min-order: %u",
+				     min_order);
+			ret = -EINVAL;
+			goto out;
+		}
+
 		gfp = current_gfp_context(mapping_gfp_mask(mapping) &
 					  GFP_RECLAIM_MASK);
@@ -3265,6 +3277,25 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	return ret;
 }
 
+int split_folio_to_list(struct folio *folio, struct list_head *list)
+{
+	unsigned int min_order = 0;
+
+	if (folio_test_anon(folio))
+		goto out;
+
+	if (!folio->mapping) {
+		if (folio_test_pmd_mappable(folio))
+			count_vm_event(THP_SPLIT_PAGE_FAILED);
+		return -EBUSY;
+	}
+
+	min_order = mapping_min_folio_order(folio->mapping);
+out:
+	return split_huge_page_to_list_to_order(&folio->page, list,
+						min_order);
+}
+
 void __folio_undo_large_rmappable(struct folio *folio)
 {
 	struct deferred_split *ds_queue;
@@ -3496,6 +3527,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		struct vm_area_struct *vma = vma_lookup(mm, addr);
 		struct page *page;
 		struct folio *folio;
+		struct address_space *mapping;
+		unsigned int target_order = new_order;
 
 		if (!vma)
 			break;
@@ -3516,7 +3549,13 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (!is_transparent_hugepage(folio))
 			goto next;
 
-		if (new_order >= folio_order(folio))
+		if (!folio_test_anon(folio)) {
+			mapping = folio->mapping;
+			target_order = max(new_order,
+					   mapping_min_folio_order(mapping));
+		}
+
+		if (target_order >= folio_order(folio))
 			goto next;
 
 		total++;
@@ -3532,9 +3571,13 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (!folio_trylock(folio))
 			goto next;
 
-		if (!split_folio_to_order(folio, new_order))
+		if (!folio_test_anon(folio) && folio->mapping != mapping)
+			goto unlock;
+
+		if (!split_folio_to_order(folio, target_order))
 			split++;
 
+unlock:
 		folio_unlock(folio);
next:
 		folio_put(folio);
@@ -3559,6 +3602,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 	pgoff_t index;
 	int nr_pages = 1;
 	unsigned long total = 0, split = 0;
+	unsigned int min_order;
 
 	file = getname_kernel(file_path);
 	if (IS_ERR(file))
@@ -3572,9 +3616,11 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		 file_path, off_start, off_end);
 
 	mapping = candidate->f_mapping;
+	min_order = mapping_min_folio_order(mapping);
 
 	for (index = off_start; index < off_end; index += nr_pages) {
 		struct folio *folio = filemap_get_folio(mapping, index);
+		unsigned int target_order = new_order;
 
 		nr_pages = 1;
 		if (IS_ERR(folio))
@@ -3583,18 +3629,23 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		if (!folio_test_large(folio))
 			goto next;
 
+		target_order = max(new_order, min_order);
 		total++;
 		nr_pages = folio_nr_pages(folio);
 
-		if (new_order >= folio_order(folio))
+		if (target_order >= folio_order(folio))
 			goto next;
 
 		if (!folio_trylock(folio))
 			goto next;
 
-		if (!split_folio_to_order(folio, new_order))
+		if (folio->mapping != mapping)
+			goto unlock;
+
+		if (!split_folio_to_order(folio, target_order))
 			split++;
 
+unlock:
 		folio_unlock(folio);
next:
 		folio_put(folio);

From patchwork Mon Jul 15 09:44:52 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733198
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com,
cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 05/10] filemap: cap PTE range to be created to allowed zero fill in folio_map_range()
Date: Mon, 15 Jul 2024 11:44:52 +0200
Message-ID: <20240715094457.452836-6-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Pankaj Raghav

Usually the page cache does not extend beyond the size of the inode, so no PTEs are created for folios that extend beyond that size. But with LBS support we might extend the page cache beyond the size of the inode, as we need to guarantee folios of minimum order. While doing a read, do_fault_around() can create PTEs for pages that lie beyond the EOF, leading to an incorrect error return when accessing a page beyond the mapped file.

Cap the PTE range to be created for the page cache up to the end of file (EOF) in filemap_map_pages() so that the returned error codes are consistent with POSIX [1] for LBS configurations.

generic/749 (currently in the xfstests-dev patches-in-queue branch [0]) has been created to trigger this edge case. This also fixes generic/749 for tmpfs with huge=always on systems with a 4k base page size.

[0] https://lore.kernel.org/all/20240615002935.1033031-3-mcgrof@kernel.org/
[1] (from mmap(2)) SIGBUS: Attempted access to a page of the buffer that lies beyond the end of the mapped file. For an explanation of the treatment of the bytes in the page that corresponds to the end of a mapped file that is not a multiple of the page size, see NOTES.
Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
---
 mm/filemap.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index d27e9ac54309d..d322109274532 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3608,7 +3608,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	struct vm_area_struct *vma = vmf->vma;
 	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
-	pgoff_t last_pgoff = start_pgoff;
+	pgoff_t file_end, last_pgoff = start_pgoff;
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
@@ -3634,6 +3634,10 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		goto out;
 	}
 
+	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
+	if (end_pgoff > file_end)
+		end_pgoff = file_end;
+
 	folio_type = mm_counter_file(folio);
 	do {
 		unsigned long end;

From patchwork Mon Jul 15 09:44:53 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733199
3FD2D6B009D for ; Mon, 15 Jul 2024 05:45:42 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id EFD661214ED for ; Mon, 15 Jul 2024 09:45:41 +0000 (UTC) X-FDA: 82341504882.30.6377C62 Received: from mout-p-102.mailbox.org (mout-p-102.mailbox.org [80.241.56.152]) by imf21.hostedemail.com (Postfix) with ESMTP id 223191C001B for ; Mon, 15 Jul 2024 09:45:39 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=pankajraghav.com header.s=MBO0001 header.b=rGrj22+Z; spf=pass (imf21.hostedemail.com: domain of kernel@pankajraghav.com designates 80.241.56.152 as permitted sender) smtp.mailfrom=kernel@pankajraghav.com; dmarc=pass (policy=quarantine) header.from=pankajraghav.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1721036711; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=92ifPOSk+9xMdQXjuWHjR+cARiQXMVhXQNki2unPXuQ=; b=yr92EcT+RGdtkDBFFpfqCLGEoe8nnOMwzvnhg9nPZD01w/+t9UFtmqZ0d+69KSeTNe2fDK 9MoYqiEPPYrW/3M1Aip6BmVspvDXRaDrtFViV69Ng0fXdCiesG0VtADIUtWvLBdDDBa9Nw HryzSyT53FDqu/QyPKHIfaRGknxH1FU= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=pass header.d=pankajraghav.com header.s=MBO0001 header.b=rGrj22+Z; spf=pass (imf21.hostedemail.com: domain of kernel@pankajraghav.com designates 80.241.56.152 as permitted sender) smtp.mailfrom=kernel@pankajraghav.com; dmarc=pass (policy=quarantine) header.from=pankajraghav.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1721036711; a=rsa-sha256; cv=none; b=y+bJuT55lPWrNp/mxvrQrViv+qJQIKM0mHnzETHl2SLxrIkzWpjgjMbgmUThAOwJbVqGOF X+my0M2kOarN4TgMVZ5IvqjtX3Cst/ZHxXunTHhl7cbts485VWpvZyDmYThgBMhA4WsAVC B7ogw87GzNv+O7ATSB+z6oqVTowko7k= Received: from 
From patchwork Mon Jul 15 09:44:53 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733199
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan, Dave Chinner
Subject: [PATCH v10 06/10] iomap: fix iomap_dio_zero() for fs bs > system page size
Date: Mon, 15 Jul 2024 11:44:53 +0200
Message-ID: <20240715094457.452836-7-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Pankaj Raghav

iomap_dio_zero() will pad a fs block with zeroes if the direct IO size < fs block size. iomap_dio_zero() has an implicit assumption that fs block size < page_size. This is true for most filesystems at the moment.
If the block size > page size, this will send the contents of the page next to the zero page (as len > PAGE_SIZE) to the underlying block device, causing FS corruption.

iomap is generic infrastructure and should not make any assumptions about the fs block size and the page size of the system.

Signed-off-by: Pankaj Raghav
Reviewed-by: Hannes Reinecke
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
---
 fs/iomap/buffered-io.c |  4 ++--
 fs/iomap/direct-io.c   | 45 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index f420c53d86acc..d745f718bcde8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -2007,10 +2007,10 @@ iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
 }
 EXPORT_SYMBOL_GPL(iomap_writepages);
 
-static int __init iomap_init(void)
+static int __init iomap_buffered_init(void)
 {
 	return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
 			   offsetof(struct iomap_ioend, io_bio),
 			   BIOSET_NEED_BVECS);
 }
-fs_initcall(iomap_init);
+fs_initcall(iomap_buffered_init);

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index f3b43d223a46e..c02b266bba525 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "trace.h"
@@ -27,6 +28,13 @@
 #define IOMAP_DIO_WRITE		(1U << 30)
 #define IOMAP_DIO_DIRTY		(1U << 31)
 
+/*
+ * Used for sub block zeroing in iomap_dio_zero()
+ */
+#define IOMAP_ZERO_PAGE_SIZE (SZ_64K)
+#define IOMAP_ZERO_PAGE_ORDER (get_order(IOMAP_ZERO_PAGE_SIZE))
+static struct page *zero_page;
+
 struct iomap_dio {
 	struct kiocb		*iocb;
 	const struct iomap_dio_ops *dops;
@@ -232,13 +240,20 @@ void iomap_dio_bio_end_io(struct bio *bio)
 }
 EXPORT_SYMBOL_GPL(iomap_dio_bio_end_io);
 
-static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
+static int iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 		loff_t pos, unsigned len)
 {
 	struct inode *inode = file_inode(dio->iocb->ki_filp);
-	struct page *page = ZERO_PAGE(0);
 	struct bio *bio;
 
+	if (!len)
+		return 0;
+	/*
+	 * Max block size supported is 64k
+	 */
+	if (WARN_ON_ONCE(len > IOMAP_ZERO_PAGE_SIZE))
+		return -EINVAL;
+
 	bio = iomap_dio_alloc_bio(iter, dio, 1, REQ_OP_WRITE | REQ_SYNC | REQ_IDLE);
 	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
 				  GFP_KERNEL);
@@ -246,8 +261,9 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	__bio_add_page(bio, page, len, 0);
+	__bio_add_page(bio, zero_page, len, 0);
 	iomap_dio_submit_bio(iter, dio, bio, pos);
+	return 0;
 }
 
 /*
@@ -356,8 +372,10 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter *iter,
 	if (need_zeroout) {
 		/* zero out from the start of the block to the write offset */
 		pad = pos & (fs_block_size - 1);
-		if (pad)
-			iomap_dio_zero(iter, dio, pos - pad, pad);
+
+		ret = iomap_dio_zero(iter, dio, pos - pad, pad);
+		if (ret)
+			goto out;
 	}
 
 	/*
@@ -431,7 +449,8 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter *iter,
 		/* zero out from the end of the write to the end of the block */
 		pad = pos & (fs_block_size - 1);
 		if (pad)
-			iomap_dio_zero(iter, dio, pos, fs_block_size - pad);
+			ret = iomap_dio_zero(iter, dio, pos,
+					fs_block_size - pad);
 	}
 out:
 	/* Undo iter limitation to current extent */
@@ -753,3 +772,17 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 	return iomap_dio_complete(dio);
 }
 EXPORT_SYMBOL_GPL(iomap_dio_rw);
+
+static int __init iomap_dio_init(void)
+{
+	zero_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+				IOMAP_ZERO_PAGE_ORDER);
+
+	if (!zero_page)
+		return -ENOMEM;
+
+	set_memory_ro((unsigned long)page_address(zero_page),
+		      1U << IOMAP_ZERO_PAGE_ORDER);
+	return 0;
+}
+fs_initcall(iomap_dio_init);
From patchwork Mon Jul 15 09:44:54 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733200
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan, Dave Chinner
Subject: [PATCH v10 07/10] xfs: use kvmalloc for xattr buffers
Date: Mon, 15 Jul 2024 11:44:54 +0200
Message-ID: <20240715094457.452836-8-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Dave Chinner

Pankaj Raghav reported that when the filesystem block size is larger than the page size, the xattr code can use kmalloc() for high order allocations. This triggers a useless warning in the allocator as it is a __GFP_NOFAIL allocation here:

static inline struct page *rmqueue(struct zone *preferred_zone,
			struct zone *zone, unsigned int order,
			gfp_t gfp_flags, unsigned int alloc_flags,
			int migratetype)
{
	struct page *page;

	/*
	 * We most definitely don't want callers attempting to
	 * allocate greater than order-1 page units with __GFP_NOFAIL.
	 */
>>>>	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
...

Fix this by changing all these call sites to use kvmalloc(), which will strip the NOFAIL from the kmalloc attempt and, if that fails, will do a __GFP_NOFAIL vmalloc().

This is not an issue that production systems will see as filesystems with block size > page size cannot be mounted by the kernel; Pankaj is developing this functionality right now.

Reported-by: Pankaj Raghav
Fixes: f078d4ea8276 ("xfs: convert kmem_alloc() to kmalloc()")
Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
Reviewed-by: Pankaj Raghav
---
 fs/xfs/libxfs/xfs_attr_leaf.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
index b9e98950eb3d8..09f4cb061a6e0 100644
--- a/fs/xfs/libxfs/xfs_attr_leaf.c
+++ b/fs/xfs/libxfs/xfs_attr_leaf.c
@@ -1138,10 +1138,7 @@ xfs_attr3_leaf_to_shortform(
 
 	trace_xfs_attr_leaf_to_sf(args);
 
-	tmpbuffer = kmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL);
-	if (!tmpbuffer)
-		return -ENOMEM;
-
+	tmpbuffer = kvmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL);
 	memcpy(tmpbuffer, bp->b_addr, args->geo->blksize);
 
 	leaf = (xfs_attr_leafblock_t *)tmpbuffer;
@@ -1205,7 +1202,7 @@ xfs_attr3_leaf_to_shortform(
 	error = 0;
 out:
-	kfree(tmpbuffer);
+	kvfree(tmpbuffer);
 	return error;
 }
@@ -1613,7 +1610,7 @@ xfs_attr3_leaf_compact(
 
 	trace_xfs_attr_leaf_compact(args);
 
-	tmpbuffer = kmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL);
+	tmpbuffer = kvmalloc(args->geo->blksize, GFP_KERNEL | __GFP_NOFAIL);
 	memcpy(tmpbuffer, bp->b_addr, args->geo->blksize);
 	memset(bp->b_addr, 0, args->geo->blksize);
 	leaf_src = (xfs_attr_leafblock_t *)tmpbuffer;
@@ -1651,7 +1648,7 @@ xfs_attr3_leaf_compact(
 	 */
 	xfs_trans_log_buf(trans, bp, 0, args->geo->blksize - 1);
 
-	kfree(tmpbuffer);
+	kvfree(tmpbuffer);
 }
@@ -2330,7 +2327,7 @@ xfs_attr3_leaf_unbalance(
 		struct xfs_attr_leafblock *tmp_leaf;
 		struct xfs_attr3_icleaf_hdr tmphdr;
 
-		tmp_leaf = kzalloc(state->args->geo->blksize,
+		tmp_leaf = kvzalloc(state->args->geo->blksize,
 				GFP_KERNEL | __GFP_NOFAIL);
 
 		/*
@@ -2371,7 +2368,7 @@ xfs_attr3_leaf_unbalance(
 		}
 		memcpy(save_leaf, tmp_leaf, state->args->geo->blksize);
 		savehdr = tmphdr; /* struct copy */
-		kfree(tmp_leaf);
+		kvfree(tmp_leaf);
 	}
 	xfs_attr3_leaf_hdr_to_disk(state->args->geo, save_leaf, &savehdr);
From patchwork Mon Jul 15 09:44:55 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733201
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan, Dave Chinner
Subject: [PATCH v10 08/10] xfs: expose block size in stat
Date: Mon, 15 Jul 2024 11:44:55 +0200
Message-ID: <20240715094457.452836-9-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Pankaj Raghav

For block size larger than page size, the unit of efficient IO is the block size, not the page size. Leaving stat() to report PAGE_SIZE as the block size causes test programs like fsx to issue illegal ranges for operations that require block size alignment (e.g. fallocate() insert range). Hence update the preferred IO size to reflect the block size in this case.

This change is based on a patch originally from Dave Chinner. [1]

[1] https://lwn.net/ml/linux-fsdevel/20181107063127.3902-16-david@fromorbit.com/

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
---
 fs/xfs/xfs_iops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index a00dcbc77e12b..da5c13150315e 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -562,7 +562,7 @@ xfs_stat_blksize(
 		return 1U << mp->m_allocsize_log;
 	}
 
-	return PAGE_SIZE;
+	return max_t(uint32_t, PAGE_SIZE, mp->m_sb.sb_blocksize);
 }
 
 STATIC int
From patchwork Mon Jul 15 09:44:56 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733202
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com, djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com, linux-xfs@vger.kernel.org, kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan, Dave Chinner
Subject: [PATCH v10 09/10] xfs: make the calculation generic in xfs_sb_validate_fsb_count()
Date: Mon, 15 Jul 2024 11:44:56 +0200
Message-ID: <20240715094457.452836-10-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
From: Pankaj Raghav

Instead of assuming that PAGE_SHIFT is always higher than the blocklog, make the calculation generic so that the page cache count can be calculated correctly for LBS.

Signed-off-by: Pankaj Raghav
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
---
 fs/xfs/xfs_mount.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 09eef1721ef4f..3949f720b5354 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -132,11 +132,16 @@ xfs_sb_validate_fsb_count(
 	xfs_sb_t	*sbp,
 	uint64_t	nblocks)
 {
+	uint64_t	max_bytes;
+
 	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 
+	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
+		return -EFBIG;
+
 	/* Limited by ULONG_MAX of page cache index */
-	if (nblocks >> (PAGE_SHIFT - sbp->sb_blocklog) > ULONG_MAX)
+	if (max_bytes >> PAGE_SHIFT > ULONG_MAX)
 		return -EFBIG;
 	return 0;
 }
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
 djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com,
 linux-mm@kvack.org, john.g.garry@oracle.com, linux-fsdevel@vger.kernel.org,
 hare@suse.de, p.raghav@samsung.com, mcgrof@kernel.org, gost.dev@samsung.com,
 cl@os.amperecomputing.com, linux-xfs@vger.kernel.org,
 kernel@pankajraghav.com, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 10/10] xfs: enable block size larger than page size
 support
Date: Mon, 15 Jul 2024 11:44:57 +0200
Message-ID: <20240715094457.452836-11-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
MIME-Version: 1.0
From: Pankaj Raghav

Page cache now has the ability to have a minimum order when allocating
a folio which is a prerequisite to add support for block size > page
size.

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Darrick J. Wong
---
 fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
 fs/xfs/libxfs/xfs_shared.h |  3 +++
 fs/xfs/xfs_icache.c        |  6 ++++--
 fs/xfs/xfs_mount.c         |  1 -
 fs/xfs/xfs_super.c         | 30 ++++++++++++++++++++++--------
 5 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 14c81f227c5bb..1e76431d75a4b 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -3019,6 +3019,11 @@ xfs_ialloc_setup_geometry(
 		igeo->ialloc_align = mp->m_dalign;
 	else
 		igeo->ialloc_align = 0;
+
+	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
+		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
+	else
+		igeo->min_folio_order = 0;
 }
 
 /* Compute the location of the root directory inode that is laid out by mkfs. */
diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 34f104ed372c0..e67a1c7cc0b02 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -231,6 +231,9 @@ struct xfs_ino_geometry {
 	/* precomputed value for di_flags2 */
 	uint64_t	new_diflags2;
 
+	/* minimum folio order of a page cache allocation */
+	unsigned int	min_folio_order;
+
 };
 
 #endif /* __XFS_SHARED_H__ */
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index cf629302d48e7..0fcf235e50235 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -88,7 +88,8 @@ xfs_inode_alloc(
 
 	/* VFS doesn't initialise i_mode! */
 	VFS_I(ip)->i_mode = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
+				M_IGEO(mp)->min_folio_order);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -325,7 +326,8 @@ xfs_reinit_inode(
 	inode->i_uid = uid;
 	inode->i_gid = gid;
 	inode->i_state = state;
-	mapping_set_large_folios(inode->i_mapping);
+	mapping_set_folio_min_order(inode->i_mapping,
+				M_IGEO(mp)->min_folio_order);
 
 	return error;
 }
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 3949f720b5354..c6933440f8066 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -134,7 +134,6 @@ xfs_sb_validate_fsb_count(
 {
 	uint64_t	max_bytes;
 
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 
 	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 27e9f749c4c7f..3c455ef588d48 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1638,16 +1638,30 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
 	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
-		xfs_warn(mp,
-		"File system with blocksize %d bytes. "
-		"Only pagesize (%ld) or less will currently work.",
+		size_t max_folio_size = mapping_max_folio_size_supported();
+
+		if (!xfs_has_crc(mp)) {
+			xfs_warn(mp,
+"V4 Filesystem with blocksize %d bytes. Only pagesize (%ld) or less is supported.",
 			mp->m_sb.sb_blocksize, PAGE_SIZE);
-		error = -ENOSYS;
-		goto out_free_sb;
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		if (mp->m_sb.sb_blocksize > max_folio_size) {
+			xfs_warn(mp,
+"block size (%u bytes) not supported; maximum folio size supported in "\
+"the page cache is (%ld bytes). Check MAX_PAGECACHE_ORDER (%d)",
+				mp->m_sb.sb_blocksize, max_folio_size,
+				MAX_PAGECACHE_ORDER);
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		xfs_warn(mp,
+"EXPERIMENTAL: V5 Filesystem with Large Block Size (%d bytes) enabled.",
+			mp->m_sb.sb_blocksize);
 	}
 
 	/* Ensure this filesystem fits in the page cache limits */