From patchwork Tue Feb 4 23:12:05 2025
X-Patchwork-Submitter: Luis Chamberlain
X-Patchwork-Id: 13960174
From: Luis Chamberlain
To: hare@suse.de, willy@infradead.org, dave@stgolabs.net, david@fromorbit.com,
    djwong@kernel.org, kbusch@kernel.org
Cc: john.g.garry@oracle.com, hch@lst.de, ritesh.list@gmail.com,
    linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, gost.dev@samsung.com, p.raghav@samsung.com,
    da.gomez@samsung.com, kernel@pankajraghav.com, mcgrof@kernel.org
Subject: [PATCH v2 4/8] fs/mpage: use blocks_per_folio instead of blocks_per_page
Date: Tue, 4 Feb 2025 15:12:05 -0800
Message-ID: <20250204231209.429356-5-mcgrof@kernel.org>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250204231209.429356-1-mcgrof@kernel.org>
References: <20250204231209.429356-1-mcgrof@kernel.org>

From: Hannes Reinecke

Convert mpage to folios and associate the number of blocks with a folio
and not a page.
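As an aside for reviewers (not part of the change itself), here is a minimal
userspace sketch of the arithmetic this conversion relies on; the 4 KiB page
size, 32 KiB folio size and 512-byte block size are assumed example values:

  /*
   * Illustrative userspace sketch only, not kernel code: the patch derives
   * the per-folio block count from folio_size(folio) >> inode->i_blkbits
   * instead of the fixed PAGE_SIZE >> blkbits. All sizes below are assumed.
   */
  #include <stdio.h>

  int main(void)
  {
  	const unsigned int blkbits = 9;            /* 512-byte blocks (assumed) */
  	const unsigned long page_size = 4096;      /* assumed PAGE_SIZE */
  	const unsigned long folio_size = 32768;    /* assumed order-3 folio */

  	unsigned long blocks_per_page = page_size >> blkbits;    /* old scheme */
  	unsigned long blocks_per_folio = folio_size >> blkbits;  /* new scheme */

  	printf("blocks per page:  %lu\n", blocks_per_page);   /* prints 8 */
  	printf("blocks per folio: %lu\n", blocks_per_folio);  /* prints 64 */
  	return 0;
  }

With those sizes a page still holds 8 blocks while a single large folio tracks
64, which is why the counters below are renamed and derived from folio_size()
rather than PAGE_SIZE.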
[mcgrof: keep 1 page request on mpage_read_folio()]
Signed-off-by: Hannes Reinecke
Signed-off-by: Luis Chamberlain
---
 fs/mpage.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index a3c82206977f..c17d7a724e4b 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -107,7 +107,7 @@ static void map_buffer_to_folio(struct folio *folio, struct buffer_head *bh,
 	 * don't make any buffers if there is only one buffer on
 	 * the folio and the folio just needs to be set up to date
 	 */
-	if (inode->i_blkbits == PAGE_SHIFT &&
+	if (inode->i_blkbits == folio_shift(folio) &&
 	    buffer_uptodate(bh)) {
 		folio_mark_uptodate(folio);
 		return;
@@ -153,7 +153,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	struct folio *folio = args->folio;
 	struct inode *inode = folio->mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
 	struct buffer_head *map_bh = &args->map_bh;
 	sector_t block_in_file;
@@ -161,7 +161,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	sector_t last_block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_hole = blocks_per_page;
+	unsigned first_hole = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int length;
 	int fully_mapped = 1;
@@ -182,7 +182,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		goto confused;
 
 	block_in_file = folio_pos(folio) >> blkbits;
-	last_block = block_in_file + args->nr_pages * blocks_per_page;
+	last_block = block_in_file + args->nr_pages * blocks_per_folio;
 	last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
 	if (last_block > last_block_in_file)
 		last_block = last_block_in_file;
@@ -204,7 +204,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 				clear_buffer_mapped(map_bh);
 				break;
 			}
-			if (page_block == blocks_per_page)
+			if (page_block == blocks_per_folio)
 				break;
 			page_block++;
 			block_in_file++;
@@ -216,7 +216,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	 * Then do more get_blocks calls until we are done with this folio.
 	 */
 	map_bh->b_folio = folio;
-	while (page_block < blocks_per_page) {
+	while (page_block < blocks_per_folio) {
 		map_bh->b_state = 0;
 		map_bh->b_size = 0;
 
@@ -229,7 +229,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 
 		if (!buffer_mapped(map_bh)) {
 			fully_mapped = 0;
-			if (first_hole == blocks_per_page)
+			if (first_hole == blocks_per_folio)
 				first_hole = page_block;
 			page_block++;
 			block_in_file++;
@@ -247,7 +247,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			goto confused;
 		}
 
-		if (first_hole != blocks_per_page)
+		if (first_hole != blocks_per_folio)
 			goto confused;		/* hole -> non-hole */
 
 		/* Contiguous blocks? */
@@ -260,7 +260,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			if (relative_block == nblocks) {
 				clear_buffer_mapped(map_bh);
 				break;
-			} else if (page_block == blocks_per_page)
+			} else if (page_block == blocks_per_folio)
 				break;
 			page_block++;
 			block_in_file++;
@@ -268,7 +268,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		bdev = map_bh->b_bdev;
 	}
 
-	if (first_hole != blocks_per_page) {
+	if (first_hole != blocks_per_folio) {
 		folio_zero_segment(folio, first_hole << blkbits, PAGE_SIZE);
 		if (first_hole == 0) {
 			folio_mark_uptodate(folio);
@@ -303,10 +303,10 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	relative_block = block_in_file - args->first_logical_block;
 	nblocks = map_bh->b_size >> blkbits;
 	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
-	    (first_hole != blocks_per_page))
+	    (first_hole != blocks_per_folio))
 		args->bio = mpage_bio_submit_read(args->bio);
 	else
-		args->last_block_in_bio = first_block + blocks_per_page - 1;
+		args->last_block_in_bio = first_block + blocks_per_folio - 1;
 out:
 	return args->bio;
 
@@ -456,12 +456,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	struct address_space *mapping = folio->mapping;
 	struct inode *inode = mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	sector_t last_block;
 	sector_t block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_unmapped = blocks_per_page;
+	unsigned first_unmapped = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int boundary = 0;
 	sector_t boundary_block = 0;
@@ -486,12 +486,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 			 */
 			if (buffer_dirty(bh))
 				goto confused;
-			if (first_unmapped == blocks_per_page)
+			if (first_unmapped == blocks_per_folio)
 				first_unmapped = page_block;
 			continue;
 		}
 
-		if (first_unmapped != blocks_per_page)
+		if (first_unmapped != blocks_per_folio)
 			goto confused;	/* hole -> non-hole */
 
 		if (!buffer_dirty(bh) || !buffer_uptodate(bh))
@@ -536,7 +536,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 		goto page_is_mapped;
 	last_block = (i_size - 1) >> blkbits;
 	map_bh.b_folio = folio;
-	for (page_block = 0; page_block < blocks_per_page; ) {
+	for (page_block = 0; page_block < blocks_per_folio; ) {
 
 		map_bh.b_state = 0;
 		map_bh.b_size = 1 << blkbits;
@@ -618,14 +618,14 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	BUG_ON(folio_test_writeback(folio));
 	folio_start_writeback(folio);
 	folio_unlock(folio);
-	if (boundary || (first_unmapped != blocks_per_page)) {
+	if (boundary || (first_unmapped != blocks_per_folio)) {
 		bio = mpage_bio_submit_write(bio);
 		if (boundary_block) {
 			write_boundary_block(boundary_bdev,
 					boundary_block, 1 << blkbits);
 		}
 	} else {
-		mpd->last_block_in_bio = first_block + blocks_per_page - 1;
+		mpd->last_block_in_bio = first_block + blocks_per_folio - 1;
 	}
 
 	goto out;