From patchwork Fri Feb 21 22:38:19 2025
X-Patchwork-Submitter: Luis Chamberlain
From: Luis Chamberlain
To: brauner@kernel.org, akpm@linux-foundation.org, hare@suse.de,
	willy@infradead.org, dave@stgolabs.net, david@fromorbit.com,
	djwong@kernel.org, kbusch@kernel.org
Cc: john.g.garry@oracle.com, hch@lst.de, ritesh.list@gmail.com,
	linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org, gost.dev@samsung.com,
	p.raghav@samsung.com, da.gomez@samsung.com, kernel@pankajraghav.com,
	mcgrof@kernel.org
Subject: [PATCH v3 4/8] fs/mpage: use blocks_per_folio instead of blocks_per_page
Date: Fri, 21 Feb 2025 14:38:19 -0800
Message-ID: <20250221223823.1680616-5-mcgrof@kernel.org>
In-Reply-To: <20250221223823.1680616-1-mcgrof@kernel.org>
References: <20250221223823.1680616-1-mcgrof@kernel.org>
MIME-Version: 1.0
Convert mpage to folios and adjust the accounting for the number of blocks within a folio instead of within a single page. This also adjusts the number of pages we should process to the size of the folio, to ensure we always read a full folio.

Note that the page cache code already ensures do_mpage_readpage() will work with folios respecting the address space min order; so long as folio_size() is used for our requirements, mpage will now also be able to process block sizes larger than the page size.
Originally-by: Hannes Reinecke
Signed-off-by: Luis Chamberlain
---
 fs/mpage.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index a3c82206977f..9c8cf4015238 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -107,7 +107,7 @@ static void map_buffer_to_folio(struct folio *folio, struct buffer_head *bh,
 	 * don't make any buffers if there is only one buffer on
 	 * the folio and the folio just needs to be set up to date
 	 */
-	if (inode->i_blkbits == PAGE_SHIFT &&
+	if (inode->i_blkbits == folio_shift(folio) &&
 	    buffer_uptodate(bh)) {
 		folio_mark_uptodate(folio);
 		return;
@@ -153,7 +153,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	struct folio *folio = args->folio;
 	struct inode *inode = folio->mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
 	struct buffer_head *map_bh = &args->map_bh;
 	sector_t block_in_file;
@@ -161,7 +161,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	sector_t last_block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_hole = blocks_per_page;
+	unsigned first_hole = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int length;
 	int fully_mapped = 1;
@@ -182,7 +182,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		goto confused;
 
 	block_in_file = folio_pos(folio) >> blkbits;
-	last_block = block_in_file + args->nr_pages * blocks_per_page;
+	last_block = block_in_file + ((args->nr_pages * PAGE_SIZE) >> blkbits);
 	last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
 	if (last_block > last_block_in_file)
 		last_block = last_block_in_file;
@@ -204,7 +204,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 				clear_buffer_mapped(map_bh);
 				break;
 			}
-			if (page_block == blocks_per_page)
+			if (page_block == blocks_per_folio)
 				break;
 			page_block++;
 			block_in_file++;
@@ -216,7 +216,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	 * Then do more get_blocks calls until we are done with this folio.
 	 */
 	map_bh->b_folio = folio;
-	while (page_block < blocks_per_page) {
+	while (page_block < blocks_per_folio) {
 		map_bh->b_state = 0;
 		map_bh->b_size = 0;
 
@@ -229,7 +229,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 
 		if (!buffer_mapped(map_bh)) {
 			fully_mapped = 0;
-			if (first_hole == blocks_per_page)
+			if (first_hole == blocks_per_folio)
 				first_hole = page_block;
 			page_block++;
 			block_in_file++;
@@ -247,7 +247,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			goto confused;
 		}
 
-		if (first_hole != blocks_per_page)
+		if (first_hole != blocks_per_folio)
 			goto confused;	/* hole -> non-hole */
 
 		/* Contiguous blocks? */
@@ -260,7 +260,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			if (relative_block == nblocks) {
 				clear_buffer_mapped(map_bh);
 				break;
-			} else if (page_block == blocks_per_page)
+			} else if (page_block == blocks_per_folio)
 				break;
 			page_block++;
 			block_in_file++;
@@ -268,8 +268,8 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		bdev = map_bh->b_bdev;
 	}
 
-	if (first_hole != blocks_per_page) {
-		folio_zero_segment(folio, first_hole << blkbits, PAGE_SIZE);
+	if (first_hole != blocks_per_folio) {
+		folio_zero_segment(folio, first_hole << blkbits, folio_size(folio));
 		if (first_hole == 0) {
 			folio_mark_uptodate(folio);
 			folio_unlock(folio);
@@ -303,10 +303,10 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	relative_block = block_in_file - args->first_logical_block;
 	nblocks = map_bh->b_size >> blkbits;
 	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
-	    (first_hole != blocks_per_page))
+	    (first_hole != blocks_per_folio))
 		args->bio = mpage_bio_submit_read(args->bio);
 	else
-		args->last_block_in_bio = first_block + blocks_per_page - 1;
+		args->last_block_in_bio = first_block + blocks_per_folio - 1;
 
 out:
 	return args->bio;
@@ -385,7 +385,7 @@ int mpage_read_folio(struct folio *folio, get_block_t get_block)
 {
 	struct mpage_readpage_args args = {
 		.folio = folio,
-		.nr_pages = 1,
+		.nr_pages = folio_nr_pages(folio),
 		.get_block = get_block,
 	};
 
@@ -456,12 +456,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	struct address_space *mapping = folio->mapping;
 	struct inode *inode = mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+	const unsigned blocks_per_folio = folio_size(folio) >> blkbits;
 	sector_t last_block;
 	sector_t block_in_file;
 	sector_t first_block;
 	unsigned page_block;
-	unsigned first_unmapped = blocks_per_page;
+	unsigned first_unmapped = blocks_per_folio;
 	struct block_device *bdev = NULL;
 	int boundary = 0;
 	sector_t boundary_block = 0;
@@ -486,12 +486,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 			 */
 			if (buffer_dirty(bh))
 				goto confused;
-			if (first_unmapped == blocks_per_page)
+			if (first_unmapped == blocks_per_folio)
 				first_unmapped = page_block;
 			continue;
 		}
 
-		if (first_unmapped != blocks_per_page)
+		if (first_unmapped != blocks_per_folio)
 			goto confused;	/* hole -> non-hole */
 
 		if (!buffer_dirty(bh) || !buffer_uptodate(bh))
@@ -536,7 +536,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 		goto page_is_mapped;
 	last_block = (i_size - 1) >> blkbits;
 	map_bh.b_folio = folio;
-	for (page_block = 0; page_block < blocks_per_page; ) {
+	for (page_block = 0; page_block < blocks_per_folio; ) {
 		map_bh.b_state = 0;
 		map_bh.b_size = 1 << blkbits;
 
@@ -618,14 +618,14 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
 	BUG_ON(folio_test_writeback(folio));
 	folio_start_writeback(folio);
 	folio_unlock(folio);
-	if (boundary || (first_unmapped != blocks_per_page)) {
+	if (boundary || (first_unmapped != blocks_per_folio)) {
 		bio = mpage_bio_submit_write(bio);
 		if (boundary_block) {
 			write_boundary_block(boundary_bdev,
 					boundary_block, 1 << blkbits);
 		}
 	} else {
-		mpd->last_block_in_bio = first_block + blocks_per_page - 1;
+		mpd->last_block_in_bio = first_block + blocks_per_folio - 1;
 	}
 
 	goto out;