From patchwork Mon Jul 15 09:44:57 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13733203
From: "Pankaj Raghav (Samsung)"
To: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
	djwong@kernel.org, brauner@kernel.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, yang@os.amperecomputing.com,
	linux-mm@kvack.org, john.g.garry@oracle.com,
	linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com,
	mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com,
	linux-xfs@vger.kernel.org, kernel@pankajraghav.com,
	ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: [PATCH v10 10/10] xfs: enable block size larger than page size support
Date: Mon, 15 Jul 2024 11:44:57 +0200
Message-ID: <20240715094457.452836-11-kernel@pankajraghav.com>
In-Reply-To: <20240715094457.452836-1-kernel@pankajraghav.com>
References: <20240715094457.452836-1-kernel@pankajraghav.com>
MIME-Version: 1.0
From: Pankaj Raghav

Page cache now has the ability to have a minimum order when allocating
a folio, which is a prerequisite for adding support for block size >
page size.

Signed-off-by: Pankaj Raghav
Signed-off-by: Luis Chamberlain
Reviewed-by: Darrick J. Wong
---
 fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
 fs/xfs/libxfs/xfs_shared.h |  3 +++
 fs/xfs/xfs_icache.c        |  6 ++++--
 fs/xfs/xfs_mount.c         |  1 -
 fs/xfs/xfs_super.c         | 30 ++++++++++++++++++++++--------
 5 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
index 14c81f227c5bb..1e76431d75a4b 100644
--- a/fs/xfs/libxfs/xfs_ialloc.c
+++ b/fs/xfs/libxfs/xfs_ialloc.c
@@ -3019,6 +3019,11 @@ xfs_ialloc_setup_geometry(
 		igeo->ialloc_align = mp->m_dalign;
 	else
 		igeo->ialloc_align = 0;
+
+	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
+		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
+	else
+		igeo->min_folio_order = 0;
 }
 
 /* Compute the location of the root directory inode that is laid out by mkfs. */
diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 34f104ed372c0..e67a1c7cc0b02 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -231,6 +231,9 @@ struct xfs_ino_geometry {
 	/* precomputed value for di_flags2 */
 	uint64_t	new_diflags2;
 
+	/* minimum folio order of a page cache allocation */
+	unsigned int	min_folio_order;
+
 };
 
 #endif /* __XFS_SHARED_H__ */
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index cf629302d48e7..0fcf235e50235 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -88,7 +88,8 @@ xfs_inode_alloc(
 	/* VFS doesn't initialise i_mode! */
 	VFS_I(ip)->i_mode = 0;
-	mapping_set_large_folios(VFS_I(ip)->i_mapping);
+	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
+			M_IGEO(mp)->min_folio_order);
 
 	XFS_STATS_INC(mp, vn_active);
 	ASSERT(atomic_read(&ip->i_pincount) == 0);
@@ -325,7 +326,8 @@ xfs_reinit_inode(
 	inode->i_uid = uid;
 	inode->i_gid = gid;
 	inode->i_state = state;
-	mapping_set_large_folios(inode->i_mapping);
+	mapping_set_folio_min_order(inode->i_mapping,
+			M_IGEO(mp)->min_folio_order);
 
 	return error;
 }
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 3949f720b5354..c6933440f8066 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -134,7 +134,6 @@ xfs_sb_validate_fsb_count(
 {
 	uint64_t		max_bytes;
 
-	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
 	ASSERT(sbp->sb_blocklog >= BBSHIFT);
 
 	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 27e9f749c4c7f..3c455ef588d48 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1638,16 +1638,30 @@ xfs_fs_fill_super(
 		goto out_free_sb;
 	}
 
-	/*
-	 * Until this is fixed only page-sized or smaller data blocks work.
-	 */
 	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
-		xfs_warn(mp,
-	"File system with blocksize %d bytes. "
-	"Only pagesize (%ld) or less will currently work.",
+		size_t max_folio_size = mapping_max_folio_size_supported();
+
+		if (!xfs_has_crc(mp)) {
+			xfs_warn(mp,
+"V4 Filesystem with blocksize %d bytes. Only pagesize (%ld) or less is supported.",
 				mp->m_sb.sb_blocksize, PAGE_SIZE);
-		error = -ENOSYS;
-		goto out_free_sb;
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		if (mp->m_sb.sb_blocksize > max_folio_size) {
+			xfs_warn(mp,
+"block size (%u bytes) not supported; maximum folio size supported in "\
+"the page cache is (%ld bytes). Check MAX_PAGECACHE_ORDER (%d)",
+				mp->m_sb.sb_blocksize, max_folio_size,
+				MAX_PAGECACHE_ORDER);
+			error = -ENOSYS;
+			goto out_free_sb;
+		}
+
+		xfs_warn(mp,
+"EXPERIMENTAL: V5 Filesystem with Large Block Size (%d bytes) enabled.",
+			mp->m_sb.sb_blocksize);
 	}
 
 	/* Ensure this filesystem fits in the page cache limits */