From patchwork Fri Dec 15 20:02:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494853 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBC1B563A4; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="miy2KdKg" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=eU+CKpTDHM2zoWr8Iu/6jTnTrmbE2V5dKmSdec4HmWI=; b=miy2KdKgA4utgsHuGUGwTLx0dz Acc625FaWDqz6kjwPvC6mTFGbM6or6mvD1tKGRypzGaSjQSazNrvrtT4smUCkPtOXXGU3z1hLrrPP XoRkZw5GUsiSIqEPKdGqy0b5sHNInqMxIQWNsM9HA8zI29JppKdLqhtbIE4PtbiUkolYkvm6Bs+/l /qhImewSf4uXpReKXXZz3p+/nn4XNoxsG6FvVGRc6egVPJR+rGlLAcA2JCW9Lkcul9/ckefuFRyd0 fznqPAnejnkNGuvVfmTxuzv3tUHyXAg1oeyqyj4wPJVLQyLntLzU2Rm3/JgNpkjawemq6+nSsoAGD BprBoiuQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOU-0038i6-UQ; Fri, 15 Dec 2023 20:02:46 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 01/14] fs: Remove clean_page_buffers() Date: Fri, 15 Dec 2023 20:02:32 +0000 Message-Id: <20231215200245.748418-2-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This function has been unused since the removal of bdev_write_page(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/mpage.c | 10 ---------- include/linux/buffer_head.h | 1 - 2 files changed, 11 deletions(-) diff --git a/fs/mpage.c b/fs/mpage.c index ffb064ed9d04..63bf99856024 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -455,16 +455,6 @@ static void clean_buffers(struct page *page, unsigned first_unmapped) try_to_free_buffers(page_folio(page)); } -/* - * For situations where we want to clean all buffers attached to a page. - * We don't need to calculate how many buffers are attached to the page, - * we just need to specify a number larger than the maximum number of buffers. 
- */ -void clean_page_buffers(struct page *page) -{ - clean_buffers(page, ~0U); -} - static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, void *data) { diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h index 5f23ee599889..94f6161eb45e 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -270,7 +270,6 @@ int generic_write_end(struct file *, struct address_space *, loff_t, unsigned, unsigned, struct page *, void *); void folio_zero_new_buffers(struct folio *folio, size_t from, size_t to); -void clean_page_buffers(struct page *page); int cont_write_begin(struct file *, struct address_space *, loff_t, unsigned, struct page **, void **, get_block_t *, loff_t *); From patchwork Fri Dec 15 20:02:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494867 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7174C6A027; Fri, 15 Dec 2023 20:02:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="uTpIJ5A+" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Haux6v60eBd/p5vruOwKLTBeh3vkJK7iQizVm3UqAfQ=; b=uTpIJ5A+lfpANn/PViXJbG6gYu 59D6LVUz54boCJgmKlKoRi4dsaWXAl35SCUINIgOxcXZhgveFvB4aEGiqzx8rXkMiLk/n3thT50wP l+CE/THLkcrTtFBkCo10X5Fndxp2GpjQjrtpss65rU6XxOzmp5ajw6Kq6nCdJkilEDFnRk8mkpsd5 kXmKe+Q9cG4TFzbGM2SpYatoR3dBEF1kMKvWXbXZNsFOsLBfyxFW2wt6Gr/CQdwlSvfSaJv4Y7aSP iGNqt8ee2N20jrSDYLRDqPjK9anjzo5mj0hUnfWVY/gvuXSuo0DTeN0gAqOvItPgdSYSqolTAo22J FdtKPxww==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038iA-0t; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 02/14] fs: Convert clean_buffers() to take a folio Date: Fri, 15 Dec 2023 20:02:33 +0000 Message-Id: <20231215200245.748418-3-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The only caller already has a folio, so pass it in and use it throughout. Saves two calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/mpage.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/fs/mpage.c b/fs/mpage.c index 63bf99856024..630f4a7c7d03 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -430,13 +430,13 @@ struct mpage_data { * We have our BIO, so we can now mark the buffers clean. Make * sure to only clean buffers which we know we'll be writing. 
*/ -static void clean_buffers(struct page *page, unsigned first_unmapped) +static void clean_buffers(struct folio *folio, unsigned first_unmapped) { unsigned buffer_counter = 0; - struct buffer_head *bh, *head; - if (!page_has_buffers(page)) + struct buffer_head *bh, *head = folio_buffers(folio); + + if (!head) return; - head = page_buffers(page); bh = head; do { @@ -451,8 +451,8 @@ static void clean_buffers(struct page *page, unsigned first_unmapped) * read_folio would fail to serialize with the bh and it would read from * disk before we reach the platter. */ - if (buffer_heads_over_limit && PageUptodate(page)) - try_to_free_buffers(page_folio(page)); + if (buffer_heads_over_limit && folio_test_uptodate(folio)) + try_to_free_buffers(folio); } static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, @@ -615,7 +615,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, goto alloc_new; } - clean_buffers(&folio->page, first_unmapped); + clean_buffers(folio, first_unmapped); BUG_ON(folio_test_writeback(folio)); folio_start_writeback(folio); From patchwork Fri Dec 15 20:02:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494861 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBBE45639E; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="CgLipEpS" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=y2LTUl+ZxJP8korj46ZIojz6qFBZmH/UltM1ueD//bY=; b=CgLipEpSQeL7n3K7h5qv//dI1J Hyxu1SNcuc6KE8jhGSWFs58vYD3Ub89XHd7GA8EZWVff0bnCAQ2AUPlYAcWmRslUjziZa2Txj/tdR lZb6urewEhwHsr+FYouUuqEuHYuh6LVjV4L23AiWMXhDxMJGDXScMgR0BKTTWz/qXkwG/ZCxdq8Ir lPuibOk7rQWaudaDIcmh+imazfOubEGNxtJd+4tg2i92riw+HuTk67lZpzaAQ5j8acssDeDrM1W2d RQEqLCazr3B5+hkCZ5B8FYOaHepMoxpgtSmO8YS/auXztCf+iqOwvANUpuLNAb3LtykX77pNzIHHZ PDjxz1zg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038iK-3c; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 03/14] fs: Reduce stack usage in __mpage_writepage Date: Fri, 15 Dec 2023 20:02:34 +0000 Message-Id: <20231215200245.748418-4-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Some architectures support a very large PAGE_SIZE, so instead of the 8 pointers we see with a 4kB PAGE_SIZE, we can see 128 pointers with 64kB or so many on Hexagon that it trips compiler warnings about 
exceeding stack frame size. All we're doing with this array is checking for block contiguity, which we can as well do by remembering the address of the first block in the page and checking this block is at the appropriate offset from that address. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/mpage.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/fs/mpage.c b/fs/mpage.c index 630f4a7c7d03..84b02098e7a5 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -466,7 +466,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, const unsigned blocks_per_page = PAGE_SIZE >> blkbits; sector_t last_block; sector_t block_in_file; - sector_t blocks[MAX_BUF_PER_PAGE]; + sector_t first_block; unsigned page_block; unsigned first_unmapped = blocks_per_page; struct block_device *bdev = NULL; @@ -504,10 +504,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, if (!buffer_dirty(bh) || !buffer_uptodate(bh)) goto confused; if (page_block) { - if (bh->b_blocknr != blocks[page_block-1] + 1) + if (bh->b_blocknr != first_block + page_block) goto confused; + } else { + first_block = bh->b_blocknr; } - blocks[page_block++] = bh->b_blocknr; + page_block++; boundary = buffer_boundary(bh); if (boundary) { boundary_block = bh->b_blocknr; @@ -556,10 +558,12 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, boundary_bdev = map_bh.b_bdev; } if (page_block) { - if (map_bh.b_blocknr != blocks[page_block-1] + 1) + if (map_bh.b_blocknr != first_block + page_block) goto confused; + } else { + first_block = map_bh.b_blocknr; } - blocks[page_block++] = map_bh.b_blocknr; + page_block++; boundary = buffer_boundary(&map_bh); bdev = map_bh.b_bdev; if (block_in_file == last_block) @@ -591,7 +595,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, /* * This page will go to BIO. Do we need to send this BIO off first? 
*/ - if (bio && mpd->last_block_in_bio != blocks[0] - 1) + if (bio && mpd->last_block_in_bio != first_block - 1) bio = mpage_bio_submit_write(bio); alloc_new: @@ -599,7 +603,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, bio = bio_alloc(bdev, BIO_MAX_VECS, REQ_OP_WRITE | wbc_to_write_flags(wbc), GFP_NOFS); - bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9); + bio->bi_iter.bi_sector = first_block << (blkbits - 9); wbc_init_bio(wbc, bio); } @@ -627,7 +631,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, boundary_block, 1 << blkbits); } } else { - mpd->last_block_in_bio = blocks[blocks_per_page - 1]; + mpd->last_block_in_bio = first_block + blocks_per_page - 1; } goto out; From patchwork Fri Dec 15 20:02:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494857 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBBA656392; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="e8qURtTR" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=0B/sDa1saKqxXEgZRJR/a3sr7f58TwwgS4ua8ky7x0Y=; b=e8qURtTRWssY4Mjtv1z26WCh21 uo7XYAn73N0F0GMHC17x1N/4sjzDtGq8ql4X893T5CwDNPR7YGXhT3waInAkxYFtk3pcMT7x06YQe CQXxoQyRmfTRzWJwDwkbCvzy0EGlhf7Gxu4RuDAubplZH+SvGPIdh8aJkPrb0MYF28Xnzen7iYgyO 0kFxroaOzhit50zDMcQ0rATYSd2QoVduwm9cREFbbaw4a63ATvP0g5wp6qGoyqW0cOJkcXmnVtDso qObeTV8nBGrkMzWZcfJUILvzDiS+ZHDYuDFyi37QlNOj2N/4i+3Sq+5RqMYHzIAJTBn6V1c2ufLhD 5tIq2YyA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038iR-6r; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 04/14] fs: Reduce stack usage in do_mpage_readpage Date: Fri, 15 Dec 2023 20:02:35 +0000 Message-Id: <20231215200245.748418-5-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Some architectures support a very large PAGE_SIZE, so instead of the 8 pointers we see with a 4kB PAGE_SIZE, we can see 128 pointers with 64kB or so many on Hexagon that it trips compiler warnings about exceeding stack frame size. All we're doing with this array is checking for block contiguity, which we can as well do by remembering the address of the first block in the page and checking this block is at the appropriate offset from that address. 
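As an illustration only (a standalone sketch, not the mpage code itself; the helper name and the plain C types are made up for the example), the contiguity test reduces to comparing each block number against first_block plus its index:

	#include <stdbool.h>
	#include <stdint.h>

	typedef uint64_t sector_t;	/* stand-in for the kernel type */

	/*
	 * Hypothetical helper: returns true if the blocks backing a page
	 * are physically contiguous, using only the first block number
	 * instead of an array of MAX_BUF_PER_PAGE entries.
	 */
	static bool blocks_are_contiguous(const sector_t *blknr, unsigned nr)
	{
		sector_t first_block = 0;
		unsigned page_block;

		for (page_block = 0; page_block < nr; page_block++) {
			if (page_block == 0)
				first_block = blknr[0];
			else if (blknr[page_block] != first_block + page_block)
				return false;	/* the 'goto confused' case */
		}
		return true;
	}
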
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/mpage.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/fs/mpage.c b/fs/mpage.c index 84b02098e7a5..d4963f3d8051 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -166,7 +166,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) sector_t block_in_file; sector_t last_block; sector_t last_block_in_file; - sector_t blocks[MAX_BUF_PER_PAGE]; + sector_t first_block; unsigned page_block; unsigned first_hole = blocks_per_page; struct block_device *bdev = NULL; @@ -205,6 +205,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) unsigned map_offset = block_in_file - args->first_logical_block; unsigned last = nblocks - map_offset; + first_block = map_bh->b_blocknr + map_offset; for (relative_block = 0; ; relative_block++) { if (relative_block == last) { clear_buffer_mapped(map_bh); @@ -212,8 +213,6 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) } if (page_block == blocks_per_page) break; - blocks[page_block] = map_bh->b_blocknr + map_offset + - relative_block; page_block++; block_in_file++; } @@ -259,7 +258,9 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) goto confused; /* hole -> non-hole */ /* Contiguous blocks? */ - if (page_block && blocks[page_block-1] != map_bh->b_blocknr-1) + if (!page_block) + first_block = map_bh->b_blocknr; + else if (first_block + page_block != map_bh->b_blocknr) goto confused; nblocks = map_bh->b_size >> blkbits; for (relative_block = 0; ; relative_block++) { @@ -268,7 +269,6 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) break; } else if (page_block == blocks_per_page) break; - blocks[page_block] = map_bh->b_blocknr+relative_block; page_block++; block_in_file++; } @@ -289,7 +289,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) /* * This folio will go to BIO. Do we need to send this BIO off first? 
*/ - if (args->bio && (args->last_block_in_bio != blocks[0] - 1)) + if (args->bio && (args->last_block_in_bio != first_block - 1)) args->bio = mpage_bio_submit_read(args->bio); alloc_new: @@ -298,7 +298,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) gfp); if (args->bio == NULL) goto confused; - args->bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9); + args->bio->bi_iter.bi_sector = first_block << (blkbits - 9); } length = first_hole << blkbits; @@ -313,7 +313,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args) (first_hole != blocks_per_page)) args->bio = mpage_bio_submit_read(args->bio); else - args->last_block_in_bio = blocks[blocks_per_page - 1]; + args->last_block_in_bio = first_block + blocks_per_page - 1; out: return args->bio; From patchwork Fri Dec 15 20:02:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494862 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 24122563AC; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="OGngCJBo" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=h/m/ubY/K1ZdLR6hCbBi+7ePQUK5r4QCjij0zKQvyNA=; b=OGngCJBoPBlEI+RqZXEcVMjqRl Qs0RtPBChjAntMDeQuzdkssVP8YWEwS0ct376855NHwDMuflJICChxLuWjQ7+KEUFLcnPdSVLe6qu xlcTvlR3XwWm/xBXcNYrKtsqr27sJQh3SK0z+Zg4xfOWbfrjkqvISl40ERmV/4lghTZqFIY7xLZqV 9GrrPM1pNuw2V1zm7x5pHi+0erxePu3LhUl6aN8ArUcB3R7WGi0jeEE2pYlH5u0WVZ6QoTzEg4MWB Wyzd4bRJusV/elmO8cspKm2yyXbwjm1zgoyZjuzIMdau2klbXbAoPmhbixT+VNmAVIm7TzUbjUrH9 nH5uEX9Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038id-AP; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 05/14] adfs: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:36 +0000 Message-Id: <20231215200245.748418-6-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements migrate_folio and writepages, there is no need for a writepage implementation. 
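For a filesystem using the generic buffer_head helpers, the conversion applied here (and in the following patches) follows the sketch below for a made-up filesystem; the foofs_* names are purely illustrative, while the other helpers are the existing generic ones:

	static int foofs_writepages(struct address_space *mapping,
			struct writeback_control *wbc)
	{
		/* mpage_writepages() replaces the per-page block_write_full_page() call */
		return mpage_writepages(mapping, wbc, foofs_get_block);
	}

	static const struct address_space_operations foofs_aops = {
		.dirty_folio	= block_dirty_folio,
		.invalidate_folio = block_invalidate_folio,
		.read_folio	= foofs_read_folio,
		.writepages	= foofs_writepages,	/* instead of .writepage */
		.write_begin	= foofs_write_begin,
		.write_end	= generic_write_end,
		.migrate_folio	= buffer_migrate_folio,	/* keeps folio migration working */
		.bmap		= foofs_bmap,
	};
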
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/adfs/inode.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/fs/adfs/inode.c b/fs/adfs/inode.c index 3081edb09e46..a183e213a4a5 100644 --- a/fs/adfs/inode.c +++ b/fs/adfs/inode.c @@ -5,6 +5,7 @@ * Copyright (C) 1997-1999 Russell King */ #include +#include #include #include "adfs.h" @@ -33,9 +34,10 @@ adfs_get_block(struct inode *inode, sector_t block, struct buffer_head *bh, return 0; } -static int adfs_writepage(struct page *page, struct writeback_control *wbc) +static int adfs_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page, adfs_get_block, wbc); + return mpage_writepages(mapping, wbc, adfs_get_block); } static int adfs_read_folio(struct file *file, struct folio *folio) @@ -76,10 +78,11 @@ static const struct address_space_operations adfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = adfs_read_folio, - .writepage = adfs_writepage, + .writepages = adfs_writepages, .write_begin = adfs_write_begin, .write_end = generic_write_end, - .bmap = _adfs_bmap + .migrate_folio = buffer_migrate_folio, + .bmap = _adfs_bmap, }; /* From patchwork Fri Dec 15 20:02:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494858 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 240AC563A7; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="o+JIP3Od" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=iWuo/17AnVV2eL2F0kRsj8EINE9372h1+vI87BJPV7s=; b=o+JIP3Odm8bEnfQ/co0ItO1vUz FRCFZFMMWJh39c2OJ4pHnnRSXol2eGTQzvKBUoVSfB20fwKFrZA3wKsqZsr2YHyafQXYFUXMn1rHH cOBIOb2YY6Hj+KAdXak5LJb73NiqYrROPbU2EToKFYA674ZKv5LZ6kwO7ouX7viF9OkbnbswbwXrC TBz9oRgmsR+zHSkSbjXKk0DrWRe1fpVRuNRq5Xko8CZ/QKM1AnM7qXh2R85fsPkqeqmLjEAMlH9yx LyW4Nkj0nwO+PgXBVY42quKEd4SYEgBXDxoM1aAg713iRRDworTk/HNRX5qmCjzEQVeuu7roWSeKk +wect5ww==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038im-ER; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 06/14] bfs: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:37 +0000 Message-Id: <20231215200245.748418-7-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements migrate_folio and writepages, 
there is no need for a writepage implementation. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/bfs/file.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/fs/bfs/file.c b/fs/bfs/file.c index adc2230079c6..a778411574a9 100644 --- a/fs/bfs/file.c +++ b/fs/bfs/file.c @@ -11,6 +11,7 @@ */ #include +#include #include #include "bfs.h" @@ -150,9 +151,10 @@ static int bfs_get_block(struct inode *inode, sector_t block, return err; } -static int bfs_writepage(struct page *page, struct writeback_control *wbc) +static int bfs_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page, bfs_get_block, wbc); + return mpage_writepages(mapping, wbc, bfs_get_block); } static int bfs_read_folio(struct file *file, struct folio *folio) @@ -190,9 +192,10 @@ const struct address_space_operations bfs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = bfs_read_folio, - .writepage = bfs_writepage, + .writepages = bfs_writepages, .write_begin = bfs_write_begin, .write_end = generic_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = bfs_bmap, }; From patchwork Fri Dec 15 20:02:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494864 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 240F2563A9; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="Saw/rk8l" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=GmcNLU65oEAUUTl/mxQRoWVPOO4x8pF5TdUbIDSNX5c=; b=Saw/rk8lkRD8nFo7Bh7EUFPxjw WO26ofxnYr81jYZybxOB70gglbMpOnZFsJVnFizakWiQcRw/HEgJFNEmuPzSWzH/8Dy3IKd9Ah3Qq DexWB7ilEHGsFh3uZGQYPH2j68QaBalphRoeG1Vrj667xIeY8ELFPlv1JxQwYlHfcs1Z87QmOeQ+R E9CW7vPxXkPnS5bruR45c8QiDwofHAWp/I/TJO3quc+yAqBhvx+5hnN9RkKYNWOKpoap19iGNZrv2 VlpZMTFkrk29dDtnxhzENmg1QIlQG29VPtq6Lwhz3E0vB3QqkGzax1mLCweWyzmJNxgf4n+LfSDOB kEOisUaA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038it-IW; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 07/14] hfs: Really remove hfs_writepage Date: Fri, 15 Dec 2023 20:02:38 +0000 Message-Id: <20231215200245.748418-8-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The earlier commit to remove hfs_writepage only removed it from one of the aops. 
Remove it from the btree_aops as well. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/hfs/inode.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c index a7bc4690a780..8c34798a0715 100644 --- a/fs/hfs/inode.c +++ b/fs/hfs/inode.c @@ -29,11 +29,6 @@ static const struct inode_operations hfs_file_inode_operations; #define HFS_VALID_MODE_BITS (S_IFREG | S_IFDIR | S_IRWXUGO) -static int hfs_writepage(struct page *page, struct writeback_control *wbc) -{ - return block_write_full_page(page, hfs_get_block, wbc); -} - static int hfs_read_folio(struct file *file, struct folio *folio) { return block_read_full_folio(folio, hfs_get_block); @@ -162,9 +157,10 @@ const struct address_space_operations hfs_btree_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = hfs_read_folio, - .writepage = hfs_writepage, + .writepages = hfs_writepages, .write_begin = hfs_write_begin, .write_end = generic_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = hfs_bmap, .release_folio = hfs_release_folio, }; From patchwork Fri Dec 15 20:02:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494854 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 24189563AD; Fri, 15 Dec 2023 20:02:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="LmH9PtVl" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=fdiMOBZgvd1GbkJ8y1Kh/20f5RjjhoyfjJXzJkLF3z0=; b=LmH9PtVld+Jyw8zf/OkZOwV20R Krs0C2KZ64p7dlfZXWTA5P47NTFZTPuk4yjXczjRSkJUsplZo2jWReAeL5kKUpSBm/82CVq7hJXW4 XDNtLl40ZFN8Yw4LLUkFI05XkFbT1TsMMS22iQ2nqb/JQW+k5LfSomw2d7AvtvSKTSxL7uSjWEspX J17a2T/FZ3bQH016FEPcByuSOzt97iTvr1Du8AOQZ96Lrb0Ym2bCW+ty2IIDJuoKRguXd9qTK9FeP mIfzsr999v4Zef4Aow1QSERqvgquTKna63QD0pcjPZIijjbkFdkws4ZGJeLZI8+QqcgdkwVIZET+n nFs1wwog==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038j4-Mu; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 08/14] hfsplus: Really remove hfsplus_writepage Date: Fri, 15 Dec 2023 20:02:39 +0000 Message-Id: <20231215200245.748418-9-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The earlier commit to remove hfsplus_writepage only removed it from one of the aops. Remove it from the btree_aops as well. 
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/hfsplus/inode.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c index 702a0663b1d8..3d326926c195 100644 --- a/fs/hfsplus/inode.c +++ b/fs/hfsplus/inode.c @@ -28,11 +28,6 @@ static int hfsplus_read_folio(struct file *file, struct folio *folio) return block_read_full_folio(folio, hfsplus_get_block); } -static int hfsplus_writepage(struct page *page, struct writeback_control *wbc) -{ - return block_write_full_page(page, hfsplus_get_block, wbc); -} - static void hfsplus_write_failed(struct address_space *mapping, loff_t to) { struct inode *inode = mapping->host; @@ -159,9 +154,10 @@ const struct address_space_operations hfsplus_btree_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = hfsplus_read_folio, - .writepage = hfsplus_writepage, + .writepages = hfsplus_writepages, .write_begin = hfsplus_write_begin, .write_end = generic_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = hfsplus_bmap, .release_folio = hfsplus_release_folio, }; From patchwork Fri Dec 15 20:02:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494855 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C28C563AE; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="NpTWOJG3" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=v+w3jrrOuwEW6c9b12xy+g71WBvRB50BeRcbxotV8AU=; b=NpTWOJG3eqMAQI1Owoek8oh1L6 jxonRna6pH2XUTTsHx+c91AdAMBru33Ua87kVGfze5tDyfb0tZ2xlIuIWTtyvix2ob7hhWSjHMJs5 ICHFJo3qav0HVeNEKOmAmcNkdxD/HQBHEKO6LzIHI0P3BJGb9bHKI+XnsTKJpDNhYZChGlbpvFy5D LkTfyZjCeyGg2Ps5uJynYPRzfpIEptucW5gkHKTUZ74MUzCBtKYaG19+2TFqj2iqGpyWtU/sMFNX5 xnvYp3hAl7brxshEuDZL5d8cjNsLtuZAPGZkUYhMJKgC7ZoKkhcxssOsvN50VRJ+RDR8Bnsd/Fc7e 1H6KDUmw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038jA-RE; Fri, 15 Dec 2023 20:02:47 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 09/14] minix: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:40 +0000 Message-Id: <20231215200245.748418-10-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements migrate_folio and writepages, there is no need for a writepage implementation. 
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/minix/inode.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/fs/minix/inode.c b/fs/minix/inode.c index f8af6c3ae336..73f37f298087 100644 --- a/fs/minix/inode.c +++ b/fs/minix/inode.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -397,9 +398,10 @@ static int minix_get_block(struct inode *inode, sector_t block, return V2_minix_get_block(inode, block, bh_result, create); } -static int minix_writepage(struct page *page, struct writeback_control *wbc) +static int minix_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page, minix_get_block, wbc); + return mpage_writepages(mapping, wbc, minix_get_block); } static int minix_read_folio(struct file *file, struct folio *folio) @@ -444,9 +446,10 @@ static const struct address_space_operations minix_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = minix_read_folio, - .writepage = minix_writepage, + .writepages = minix_writepages, .write_begin = minix_write_begin, .write_end = generic_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = minix_bmap, .direct_IO = noop_direct_IO }; From patchwork Fri Dec 15 20:02:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494859 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 72A01563B5; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="eCGDYeSt" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=SCryoIsThT1F2D/n8Lhi4Jv7COeVsife5aCkou7x9Bc=; b=eCGDYeSt9PCLn0V6USNBSuUOiJ 9tz1Ngb/YuMDZJR/S7oHDoEQrlMShjf4X5PlnYMHT5HcK4ItcnpGMiAPGEV4WgYvdI8Bpti/NRbq8 7wPzUAwjW4ddyf+o2Vm5X3E5mp13Jh2td2CMVnN+0VgHizYSYCchr1GOB2zNxTWCsGjSgILGUAGs/ /QWg32SkO8XGYsYotHsjxH9zfe/I33oXv9Dpym9mAqYZ2wdLZqLvy1a19zcWIXmhFrw9DHtncKD/2 q/v8t0mby8FoaMV10K0hzLdxXPl1YtOvbJncVj6PrZYZW9NNpJ9myN7pEwQsnHnM8LeD6+wVFns/a ulHDJMeQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOV-0038jK-Va; Fri, 15 Dec 2023 20:02:48 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 10/14] ocfs2: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:41 +0000 Message-Id: <20231215200245.748418-11-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements 
migrate_folio and writepages, there is no need for a writepage implementation. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/ocfs2/aops.c | 15 ++++++--------- fs/ocfs2/ocfs2_trace.h | 2 -- 2 files changed, 6 insertions(+), 11 deletions(-) diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 795997806326..b82185075de7 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -389,21 +389,18 @@ static void ocfs2_readahead(struct readahead_control *rac) /* Note: Because we don't support holes, our allocation has * already happened (allocation writes zeros to the file data) * so we don't have to worry about ordered writes in - * ocfs2_writepage. + * ocfs2_writepages. * - * ->writepage is called during the process of invalidating the page cache + * ->writepages is called during the process of invalidating the page cache * during blocked lock processing. It can't block on any cluster locks * to during block mapping. It's relying on the fact that the block * mapping can't have disappeared under the dirty pages that it is * being asked to write back. */ -static int ocfs2_writepage(struct page *page, struct writeback_control *wbc) +static int ocfs2_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - trace_ocfs2_writepage( - (unsigned long long)OCFS2_I(page->mapping->host)->ip_blkno, - page->index); - - return block_write_full_page(page, ocfs2_get_block, wbc); + return mpage_writepages(mapping, wbc, ocfs2_get_block); } /* Taken from ext3. We don't necessarily need the full blown @@ -2471,7 +2468,7 @@ const struct address_space_operations ocfs2_aops = { .dirty_folio = block_dirty_folio, .read_folio = ocfs2_read_folio, .readahead = ocfs2_readahead, - .writepage = ocfs2_writepage, + .writepages = ocfs2_writepages, .write_begin = ocfs2_write_begin, .write_end = ocfs2_write_end, .bmap = ocfs2_bmap, diff --git a/fs/ocfs2/ocfs2_trace.h b/fs/ocfs2/ocfs2_trace.h index ac4fd1d5b128..9898c11bdfa1 100644 --- a/fs/ocfs2/ocfs2_trace.h +++ b/fs/ocfs2/ocfs2_trace.h @@ -1157,8 +1157,6 @@ DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_get_block_end); DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_readpage); -DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_writepage); - DEFINE_OCFS2_ULL_ULL_EVENT(ocfs2_bmap); TRACE_EVENT(ocfs2_try_to_write_inline_data, From patchwork Fri Dec 15 20:02:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494865 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 949605644D; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="gFuvhDYv" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=jtQJbz4fdyRVv5uL17Tn1fQYq5bZueXqglZgma3TBl8=; b=gFuvhDYvigMOyd9cgE5NRHYGMP JXw9GfbWWxR/6/N7DMulUTX+25ifhwz3zoyE0lzX7H9OfJYCNg/MFDdY2MaFng+trp0U3o+gQBSx3 
8C5KaPgv1m/g3oZI+7UCdOI97RLBceH7asFFwM1ltU+xJV/gQlbCZuCKhTnKmdVzUNHtkvoZs1egf mpT5f56FHW51GIvbnhKxjwc9jnQVUWJE91EHvZ66wUeoxcvcgINVM3y85oKEZegnrtAzP3d07jUBO kiQ53iw4H/NDc5AGQWsFnkidijsaDL1EIBmVDdIUIGGK9wbBfziTFxK9L+YOU7apsGq6OiRL9mwts wddReUhw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOW-0038jQ-38; Fri, 15 Dec 2023 20:02:48 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 11/14] sysv: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:42 +0000 Message-Id: <20231215200245.748418-12-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements migrate_folio and writepages, there is no need for a writepage implementation. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/sysv/itree.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c index 725981474e5f..410ab2a44d2f 100644 --- a/fs/sysv/itree.c +++ b/fs/sysv/itree.c @@ -8,6 +8,7 @@ #include #include +#include #include #include "sysv.h" @@ -456,9 +457,10 @@ int sysv_getattr(struct mnt_idmap *idmap, const struct path *path, return 0; } -static int sysv_writepage(struct page *page, struct writeback_control *wbc) +static int sysv_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page,get_block,wbc); + return mpage_writepages(mapping, wbc, get_block); } static int sysv_read_folio(struct file *file, struct folio *folio) @@ -503,8 +505,9 @@ const struct address_space_operations sysv_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = sysv_read_folio, - .writepage = sysv_writepage, + .writepages = sysv_writepages, .write_begin = sysv_write_begin, .write_end = generic_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = sysv_bmap }; From patchwork Fri Dec 15 20:02:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494856 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ADC8356770; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="ZhE/bzSE" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=/yLVY3qvXTW91YFKY50YK8NtYUoq7IQ3U6zezP7o3zI=; b=ZhE/bzSErGAKIq0i4CqeDdqDHQ KHmSkiLt3RL8zX4UJvxKXRLUyQkcSqxbaWnwW7Pir5OAMr9lY7X42cCSJbUdMaIhy/8tiECDoowfP 
jPku2JHA+XmaR8ISU758lnfDj5NCds1wwmjb78TqcGFsucJmlBoQaAt4KzveAP/MqrGBV2gv4Z3QL gJsezO8xGGnNXQscv4wG+S8pPN6NEbzALZaEfvsU50ZnpsKClX2SM/suJ2Ytpu7/maynpftI+Nadh bcDwVIZMwbq0JAbXowUVwbBWmDkVMElOdoe5QV/ZH4xV6Bdzhw+Us6y09sU6x0dNct69n+qVu99Gl xRjaacYw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOW-0038jc-AU; Fri, 15 Dec 2023 20:02:48 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 12/14] ufs: Remove writepage implementation Date: Fri, 15 Dec 2023 20:02:43 +0000 Message-Id: <20231215200245.748418-13-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 If the filesystem implements migrate_folio and writepages, there is no need for a writepage implementation. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/ufs/inode.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/fs/ufs/inode.c b/fs/ufs/inode.c index ebce93b08281..a7bb2e63cdde 100644 --- a/fs/ufs/inode.c +++ b/fs/ufs/inode.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -390,7 +391,7 @@ ufs_inode_getblock(struct inode *inode, u64 ind_block, /** * ufs_getfrag_block() - `get_block_t' function, interface between UFS and - * read_folio, writepage and so on + * read_folio, writepages and so on */ static int ufs_getfrag_block(struct inode *inode, sector_t fragment, struct buffer_head *bh_result, int create) @@ -467,9 +468,10 @@ static int ufs_getfrag_block(struct inode *inode, sector_t fragment, struct buff return 0; } -static int ufs_writepage(struct page *page, struct writeback_control *wbc) +static int ufs_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page,ufs_getfrag_block,wbc); + return mpage_writepages(mapping, wbc, ufs_getfrag_block); } static int ufs_read_folio(struct file *file, struct folio *folio) @@ -528,9 +530,10 @@ const struct address_space_operations ufs_aops = { .dirty_folio = block_dirty_folio, .invalidate_folio = block_invalidate_folio, .read_folio = ufs_read_folio, - .writepage = ufs_writepage, + .writepages = ufs_writepages, .write_begin = ufs_write_begin, .write_end = ufs_write_end, + .migrate_folio = buffer_migrate_folio, .bmap = ufs_bmap }; From patchwork Fri Dec 15 20:02:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13494860 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E68B15787C; Fri, 15 Dec 2023 20:02:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="G1pdApiH" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; 
s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=OF9/KtMCSO7iRo1gm6o2VJ23Gj1ecgTlZXtN74iujvU=; b=G1pdApiHUfoUdJNkTK1vUttiWN 5GdIz83V2P34Nj9ohyg7GY9FdpXzYS7NN0p+ygxkvfbj2cu9EGts4bRrr0I0ZHuzu+vXwcmWksCYm sdDXdWgM4HL74p5gH4qf+0IvyNeQ7Y7GnrRyujggMgcfSMFptQGaFRQbD/RD3HYJrg7cYok3S5dj+ 0huWg+6wL24mkkQDGuJ50xF7XsaAh3LfZAyKoGYMAXqp3Zbt3DuEyF5VlSTc8KcBvFAHKkR6MwM3S kFhvnrSLddUaRtnYfXtap/lCWnMfjhhG+Py+Tl0XrcIeaZESdm+nPKmp5T+iUW44AZ5FqJVZ6TK5n Bvx8gV1w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1rEEOW-0038jk-FL; Fri, 15 Dec 2023 20:02:48 +0000 From: "Matthew Wilcox (Oracle)" To: Andrew Morton Cc: "Matthew Wilcox (Oracle)" , Christoph Hellwig , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org Subject: [PATCH 13/14] fs: Convert block_write_full_page to block_write_full_folio Date: Fri, 15 Dec 2023 20:02:44 +0000 Message-Id: <20231215200245.748418-14-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20231215200245.748418-1-willy@infradead.org> References: <20231215200245.748418-1-willy@infradead.org> Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Convert the function to be compatible with writepage_t so that it can be passed to write_cache_pages() by blkdev. This removes a call to compound_head(). We can also remove the function export as both callers are built-in. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- block/fops.c | 21 ++++++++++++++++++--- fs/buffer.c | 16 +++++++--------- fs/ext4/page-io.c | 2 +- fs/gfs2/aops.c | 4 ++-- fs/mpage.c | 2 +- fs/ntfs/aops.c | 4 ++-- fs/ocfs2/alloc.c | 2 +- fs/ocfs2/file.c | 2 +- include/linux/buffer_head.h | 4 ++-- 9 files changed, 35 insertions(+), 22 deletions(-) diff --git a/block/fops.c b/block/fops.c index 0bdad1e8d514..0cf8cf72cdfa 100644 --- a/block/fops.c +++ b/block/fops.c @@ -410,9 +410,24 @@ static int blkdev_get_block(struct inode *inode, sector_t iblock, return 0; } -static int blkdev_writepage(struct page *page, struct writeback_control *wbc) +/* + * We cannot call mpage_writepages() as it does not take the buffer lock. + * We must use block_write_full_folio() directly which holds the buffer + * lock. The buffer lock provides the synchronisation with writeback + * that filesystems rely on when they use the blockdev's mapping. 
+ */ +static int blkdev_writepages(struct address_space *mapping, + struct writeback_control *wbc) { - return block_write_full_page(page, blkdev_get_block, wbc); + struct blk_plug plug; + int err; + + blk_start_plug(&plug); + err = write_cache_pages(mapping, wbc, block_write_full_folio, + blkdev_get_block); + blk_finish_plug(&plug); + + return err; } static int blkdev_read_folio(struct file *file, struct folio *folio) @@ -449,7 +464,7 @@ const struct address_space_operations def_blk_aops = { .invalidate_folio = block_invalidate_folio, .read_folio = blkdev_read_folio, .readahead = blkdev_readahead, - .writepage = blkdev_writepage, + .writepages = blkdev_writepages, .write_begin = blkdev_write_begin, .write_end = blkdev_write_end, .migrate_folio = buffer_migrate_folio_norefs, diff --git a/fs/buffer.c b/fs/buffer.c index 9f41d2b38902..2e69f0ddca37 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -372,7 +372,7 @@ static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate) } /* - * Completion handler for block_write_full_page() - pages which are unlocked + * Completion handler for block_write_full_folio() - pages which are unlocked * during I/O, and which have PageWriteback cleared upon I/O completion. */ void end_buffer_async_write(struct buffer_head *bh, int uptodate) @@ -1771,18 +1771,18 @@ static struct buffer_head *folio_create_buffers(struct folio *folio, */ /* - * While block_write_full_page is writing back the dirty buffers under + * While block_write_full_folio is writing back the dirty buffers under * the page lock, whoever dirtied the buffers may decide to clean them * again at any time. We handle that by only looking at the buffer * state inside lock_buffer(). * - * If block_write_full_page() is called for regular writeback + * If block_write_full_folio() is called for regular writeback * (wbc->sync_mode == WB_SYNC_NONE) then it will redirty a page which has a * locked buffer. This only can happen if someone has written the buffer * directly, with submit_bh(). At the address_space level PageWriteback * prevents this contention from occurring. * - * If block_write_full_page() is called with wbc->sync_mode == + * If block_write_full_folio() is called with wbc->sync_mode == * WB_SYNC_ALL, the writes are posted using REQ_SYNC; this * causes the writes to be flagged as synchronous writes. */ @@ -1829,7 +1829,7 @@ int __block_write_full_folio(struct inode *inode, struct folio *folio, * truncate in progress. 
*/ /* - * The buffer was zeroed by block_write_full_page() + * The buffer was zeroed by block_write_full_folio() */ clear_buffer_dirty(bh); set_buffer_uptodate(bh); @@ -2696,10 +2696,9 @@ EXPORT_SYMBOL(block_truncate_page); /* * The generic ->writepage function for buffer-backed address_spaces */ -int block_write_full_page(struct page *page, get_block_t *get_block, - struct writeback_control *wbc) +int block_write_full_folio(struct folio *folio, struct writeback_control *wbc, + void *get_block) { - struct folio *folio = page_folio(page); struct inode * const inode = folio->mapping->host; loff_t i_size = i_size_read(inode); @@ -2726,7 +2725,6 @@ int block_write_full_page(struct page *page, get_block_t *get_block, return __block_write_full_folio(inode, folio, get_block, wbc, end_buffer_async_write); } -EXPORT_SYMBOL(block_write_full_page); sector_t generic_block_bmap(struct address_space *mapping, sector_t block, get_block_t *get_block) diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index dfdd7e5cf038..312bc6813357 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -444,7 +444,7 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio, folio_clear_error(folio); /* - * Comments copied from block_write_full_page: + * Comments copied from block_write_full_folio: * * The folio straddles i_size. It must be zeroed out on each and every * writepage invocation because it may be mmapped. "A file is mapped diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index 5cffb079b87c..f986cd032b76 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -82,11 +82,11 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock, } /** - * gfs2_write_jdata_folio - gfs2 jdata-specific version of block_write_full_page + * gfs2_write_jdata_folio - gfs2 jdata-specific version of block_write_full_folio * @folio: The folio to write * @wbc: The writeback control * - * This is the same as calling block_write_full_page, but it also + * This is the same as calling block_write_full_folio, but it also * writes pages outside of i_size */ static int gfs2_write_jdata_folio(struct folio *folio, diff --git a/fs/mpage.c b/fs/mpage.c index d4963f3d8051..738882e0766d 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -642,7 +642,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc, /* * The caller has a ref on the inode, so *mapping is stable */ - ret = block_write_full_page(&folio->page, mpd->get_block, wbc); + ret = block_write_full_folio(folio, wbc, mpd->get_block); mapping_set_error(mapping, ret); out: mpd->bio = bio; diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c index 1c747a3baa3e..2d01517a2d59 100644 --- a/fs/ntfs/aops.c +++ b/fs/ntfs/aops.c @@ -1304,7 +1304,7 @@ static int ntfs_write_mst_block(struct page *page, * page cleaned. The VM has already locked the page and marked it clean. * * For non-resident attributes, ntfs_writepage() writes the @page by calling - * the ntfs version of the generic block_write_full_page() function, + * the ntfs version of the generic block_write_full_folio() function, * ntfs_write_block(), which in turn if necessary creates and writes the * buffers associated with the page asynchronously. * @@ -1314,7 +1314,7 @@ static int ntfs_write_mst_block(struct page *page, * vfs inode dirty code path for the inode the mft record belongs to or via the * vm page dirty code path for the page the mft record is in. * - * Based on ntfs_read_folio() and fs/buffer.c::block_write_full_page(). 
+ * Based on ntfs_read_folio() and fs/buffer.c::block_write_full_folio().
  *
  * Return 0 on success and -errno on error.
  */
diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
index 91b32b2377ac..ea9127ba3208 100644
--- a/fs/ocfs2/alloc.c
+++ b/fs/ocfs2/alloc.c
@@ -6934,7 +6934,7 @@ static int ocfs2_grab_eof_pages(struct inode *inode, loff_t start, loff_t end,
  * nonzero data on subsequent file extends.
  *
  * We need to call this before i_size is updated on the inode because
- * otherwise block_write_full_page() will skip writeout of pages past
+ * otherwise block_write_full_folio() will skip writeout of pages past
  * i_size.
  */
 int ocfs2_zero_range_for_truncate(struct inode *inode, handle_t *handle,
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 94e2a1244442..8b6d15010703 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -818,7 +818,7 @@ static int ocfs2_write_zero_page(struct inode *inode, u64 abs_from,
 	/*
 	 * fs-writeback will release the dirty pages without page lock
 	 * whose offset are over inode size, the release happens at
-	 * block_write_full_page().
+	 * block_write_full_folio().
 	 */
 	i_size_write(inode, abs_to);
 	inode->i_blocks = ocfs2_inode_sector_count(inode);
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 94f6161eb45e..396b2adf24bf 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -252,8 +252,8 @@ void __bh_read_batch(int nr, struct buffer_head *bhs[],
  * address_spaces.
  */
 void block_invalidate_folio(struct folio *folio, size_t offset, size_t length);
-int block_write_full_page(struct page *page, get_block_t *get_block,
-		struct writeback_control *wbc);
+int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
+		void *get_block);
 int __block_write_full_folio(struct inode *inode, struct folio *folio,
 		get_block_t *get_block, struct writeback_control *wbc,
 		bh_end_io_t *handler);

From patchwork Fri Dec 15 20:02:45 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13494863
Received: from casper.infradead.org (casper.infradead.org [90.155.50.34])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id E685D56778;
	Fri, 15 Dec 2023 20:02:50 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org
Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=infradead.org
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key)
	header.d=infradead.org header.i=@infradead.org header.b="KWXxx/CL"
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org;
	s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=yOUu3Oa9oop0gFCwHzPJjrXW9AuzoUvkVlbRlf+EeEk=; b=KWXxx/CL90Q2i3AE4nF0/mc+Ao
	yCvtsrMjiAytmy+AXKn93CDYgqxfFKBdhap8lWBk+1BZ/PAX+jE6QGRo7HLonPxjSSKnqLeDR1M8R
	tmxLyCmpYj7r/Y4DqQAFj2eekjVysrtUiydLRxXpNh5jv/FS1M0Xfi+jgvBRLcU5NUUvqmP48AP+O
	LunUCVkXMfq9oHMZ7oCXnribKfi5dluuUKM/LSqY8E75FU5ODSWTwGVluoj4Lz0BXBAHTVrn+Sr8A
	kkmLXpZJcaO8FiS4j82bUdzrpcYdJftLdmG8fz2Lfq9BaJUMnYuPcpx8KA3BAdVxm9b4qLhVhxUjN
	yfHnqccQ==;
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux))
	id 1rEEOW-0038jt-Jx;
	Fri, 15 Dec 2023 20:02:48 +0000
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Christoph Hellwig,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 14/14] fs: Remove the bh_end_io argument from __block_write_full_folio
Date: Fri, 15 Dec 2023 20:02:45 +0000
Message-Id: <20231215200245.748418-15-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20231215200245.748418-1-willy@infradead.org>
References: <20231215200245.748418-1-willy@infradead.org>
Precedence: bulk
X-Mailing-List: linux-block@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0

All callers are passing end_buffer_async_write as this argument, so we
can hardcode references to it within __block_write_full_folio().
That lets us make end_buffer_async_write() static.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/buffer.c                 | 22 ++++++++++------------
 fs/gfs2/aops.c              |  2 +-
 include/linux/buffer_head.h |  4 +---
 3 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 2e69f0ddca37..d5ce6b29c893 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -372,10 +372,10 @@ static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
 }
 
 /*
- * Completion handler for block_write_full_folio() - pages which are unlocked
- * during I/O, and which have PageWriteback cleared upon I/O completion.
+ * Completion handler for block_write_full_folio() - folios which are unlocked
+ * during I/O, and which have the writeback flag cleared upon I/O completion.
  */
-void end_buffer_async_write(struct buffer_head *bh, int uptodate)
+static void end_buffer_async_write(struct buffer_head *bh, int uptodate)
 {
 	unsigned long flags;
 	struct buffer_head *first;
@@ -415,7 +415,6 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
 	spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
 	return;
 }
-EXPORT_SYMBOL(end_buffer_async_write);
 
 /*
  * If a page's buffers are under async readin (end_buffer_async_read
@@ -1787,8 +1786,7 @@ static struct buffer_head *folio_create_buffers(struct folio *folio,
  * causes the writes to be flagged as synchronous writes.
  */
 int __block_write_full_folio(struct inode *inode, struct folio *folio,
-		get_block_t *get_block, struct writeback_control *wbc,
-		bh_end_io_t *handler)
+		get_block_t *get_block, struct writeback_control *wbc)
 {
 	int err;
 	sector_t block;
@@ -1867,7 +1865,8 @@ int __block_write_full_folio(struct inode *inode, struct folio *folio,
 			continue;
 		}
 		if (test_clear_buffer_dirty(bh)) {
-			mark_buffer_async_write_endio(bh, handler);
+			mark_buffer_async_write_endio(bh,
+					end_buffer_async_write);
 		} else {
 			unlock_buffer(bh);
 		}
@@ -1920,7 +1919,8 @@ int __block_write_full_folio(struct inode *inode, struct folio *folio,
 		if (buffer_mapped(bh) && buffer_dirty(bh) &&
 		    !buffer_delay(bh)) {
 			lock_buffer(bh);
-			mark_buffer_async_write_endio(bh, handler);
+			mark_buffer_async_write_endio(bh,
+					end_buffer_async_write);
 		} else {
 			/*
 			 * The buffer may have been set dirty during
@@ -2704,8 +2704,7 @@ int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
 	/* Is the folio fully inside i_size? */
 	if (folio_pos(folio) + folio_size(folio) <= i_size)
-		return __block_write_full_folio(inode, folio, get_block, wbc,
-				end_buffer_async_write);
+		return __block_write_full_folio(inode, folio, get_block, wbc);
 
 	/* Is the folio fully outside i_size? (truncate in progress) */
 	if (folio_pos(folio) >= i_size) {
@@ -2722,8 +2721,7 @@ int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
 	 */
 	folio_zero_segment(folio, offset_in_folio(folio, i_size),
 			folio_size(folio));
-	return __block_write_full_folio(inode, folio, get_block, wbc,
-			end_buffer_async_write);
+	return __block_write_full_folio(inode, folio, get_block, wbc);
 }
 
 sector_t generic_block_bmap(struct address_space *mapping, sector_t block,
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index f986cd032b76..9914d7f54f7d 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -108,7 +108,7 @@ static int gfs2_write_jdata_folio(struct folio *folio,
 			folio_size(folio));
 
 	return __block_write_full_folio(inode, folio, gfs2_get_block_noalloc,
-			wbc, end_buffer_async_write);
+			wbc);
 }
 
 /**
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 396b2adf24bf..d78454a4dd1f 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -205,7 +205,6 @@ struct buffer_head *create_empty_buffers(struct folio *folio,
 		unsigned long blocksize, unsigned long b_state);
 void end_buffer_read_sync(struct buffer_head *bh, int uptodate);
 void end_buffer_write_sync(struct buffer_head *bh, int uptodate);
-void end_buffer_async_write(struct buffer_head *bh, int uptodate);
 
 /* Things to do with buffers at mapping->private_list */
 void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode);
@@ -255,8 +254,7 @@ void block_invalidate_folio(struct folio *folio, size_t offset, size_t length);
 int block_write_full_folio(struct folio *folio, struct writeback_control *wbc,
 		void *get_block);
 int __block_write_full_folio(struct inode *inode, struct folio *folio,
-		get_block_t *get_block, struct writeback_control *wbc,
-		bh_end_io_t *handler);
+		get_block_t *get_block, struct writeback_control *wbc);
 int block_read_full_folio(struct folio *, get_block_t *);
 bool block_is_partially_uptodate(struct folio *, size_t from, size_t count);
 int block_write_begin(struct address_space *mapping, loff_t pos, unsigned len,