
[v15,16/17] iomap: Convert iomap_add_to_ioend to take a folio

Message ID: 20210719184001.1750630-17-willy@infradead.org
Series: Folio support in block + iomap layers

Commit Message

Matthew Wilcox July 19, 2021, 6:40 p.m. UTC
We still iterate one block at a time, but now we call compound_head()
less often.  Rename file_offset to pos to fit the rest of the file.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 109 ++++++++++++++++++-----------------------
 1 file changed, 48 insertions(+), 61 deletions(-)
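
A note on the compound_head() claim in the commit message: the
page-flag helpers must resolve a possible tail page to its head page
on every call, while a folio is by definition never a tail page, so
the folio helpers can test the flag word directly.  A simplified
sketch of the two patterns (illustrative only, with hypothetical
names; the real helpers such as PageLocked() and folio_test_locked()
are generated by macros in include/linux/page-flags.h):

	/* Page-based helper: every call hides a compound_head() lookup. */
	static inline bool page_is_locked(struct page *page)
	{
		return test_bit(PG_locked, &compound_head(page)->flags);
	}

	/* Folio-based helper: a folio is never a tail page, so the
	 * flag word can be tested directly, with no lookup.
	 */
	static inline bool folio_is_locked(struct folio *folio)
	{
		return test_bit(PG_locked, &folio->flags);
	}

Iterating a folio block by block therefore pays the head-page lookup
once, at page_folio() in the caller, instead of once per helper call.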

Comments

Christoph Hellwig July 20, 2021, 7:20 a.m. UTC | #1
On Mon, Jul 19, 2021 at 07:40:00PM +0100, Matthew Wilcox (Oracle) wrote:
> -	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
> -			&same_page);
>  	if (iop)
>  		atomic_add(len, &iop->write_bytes_pending);
> -
> -	if (!merged) {
> -		if (bio_full(wpc->ioend->io_bio, len)) {
> -			wpc->ioend->io_bio =
> -				iomap_chain_bio(wpc->ioend->io_bio);
> -		}
> -		bio_add_page(wpc->ioend->io_bio, page, len, poff);
> +	if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
> +		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
> +		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
>  	}

I actually have pretty similar changes for the read and write paths to
avoid __bio_try_merge_page in my queue.  I'll send them out ASAP, as
I think such a change should be done separately from the folio
switch.
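
For context on why the open-coded merge is redundant: bio_add_page()
already attempts the same merge internally before taking a new bvec
slot.  Roughly, from block/bio.c of that era (paraphrased, not part of
this patch):

	int bio_add_page(struct bio *bio, struct page *page,
			 unsigned int len, unsigned int offset)
	{
		bool same_page = false;

		/* First try to append to the current last bvec ... */
		if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
			/* ... otherwise take a new bvec slot, if one is left. */
			if (bio_full(bio, len))
				return 0;
			__bio_add_page(bio, page, len, offset);
		}
		return len;
	}

bio_add_folio(), introduced earlier in this series, feeds &folio->page
through the same path and returns a bool, so the caller only has to
handle the full-bio case by chaining.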

>  	/*
> -	 * Walk through the page to find areas to write back. If we run off the
> -	 * end of the current map or find the current map invalid, grab a new
> -	 * one.
> +	 * Walk through the folio to find areas to write back. If we
> +	 * run off the end of the current map or find the current map
> +	 * invalid, grab a new one.

Why the reformatting?

Otherwise this looks sane to me.
Matthew Wilcox July 20, 2021, 11:45 a.m. UTC | #2
On Tue, Jul 20, 2021 at 09:20:43AM +0200, Christoph Hellwig wrote:
> >  	/*
> > -	 * Walk through the page to find areas to write back. If we run off the
> > -	 * end of the current map or find the current map invalid, grab a new
> > -	 * one.
> > +	 * Walk through the folio to find areas to write back. If we
> > +	 * run off the end of the current map or find the current map
> > +	 * invalid, grab a new one.
> 
> Why the reformatting?

s/page/folio/ takes the first line over 80 columns, so pass the whole
comment to 'fmt -p \*'
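
For anyone unfamiliar with the trick: GNU fmt's -p/--prefix option
reformats only lines that begin with the given prefix (optionally
preceded by whitespace) and reattaches the prefix to each rewrapped
line, which is convenient for kernel block comments.  A quick
illustration (output approximate):

	$ printf '%s\n' ' * Walk through the folio to find areas to write back. If we run off the end of the current map or find the current map invalid, grab a new one.' |
		fmt -w 68 -p '*'
	 * Walk through the folio to find areas to write back. If we run
	 * off the end of the current map or find the current map invalid,
	 * grab a new one.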
Darrick J. Wong July 21, 2021, 12:12 a.m. UTC | #3
On Mon, Jul 19, 2021 at 07:40:00PM +0100, Matthew Wilcox (Oracle) wrote:
> We still iterate one block at a time, but now we call compound_head()
> less often.  Rename file_offset to pos to fit the rest of the file.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Everything in this patch looks ok to me, though I gather there will be
further changes to bio_add_folio, so I'll leave this off for now.

I /am/ beginning to wonder, though -- seeing as Christoph and Matthew
both have very large patchsets changing things in fs/iomap/, how would
you like those landed?  Christoph's iterator refactoring looks like it
could be ready to go for 5.15.  Matthew's folio series looks like a
mostly straightforward conversion for iomap, except that it has 91
patches as a hard dependency.

Since most of the iomap changes for 5.15 aren't directly related to
folios, I think I prefer iomap-for-next to be based directly off -rcX
like usual, though I don't know where that leaves the iomap folio
conversion.  I suppose one could add them to a branch that itself is a
result of the folio and iomap branches, or leave them off for 5.16?

Other ideas?

--D
Christoph Hellwig July 21, 2021, 4:27 a.m. UTC | #4
On Tue, Jul 20, 2021 at 05:12:19PM -0700, Darrick J. Wong wrote:
> I /am/ beginning to wonder, though -- seeing as Christoph and Matthew
> both have very large patchsets changing things in fs/iomap/, how would
> you like those landed?  Christoph's iterator refactoring looks like it
> could be ready to go for 5.15.  Matthew's folio series looks like a
> mostly straightforward conversion for iomap, except that it has 91
> patches as a hard dependency.
> 
> Since most of the iomap changes for 5.15 aren't directly related to
> folios, I think I prefer iomap-for-next to be based directly off -rcX
> like usual, though I don't know where that leaves the iomap folio
> conversion.  I suppose one could add them to a branch that itself is a
> result of the folio and iomap branches, or leave them off for 5.16?

Maybe willy has a different opinion, but I thought the plan was to have
the base folio enablement in 5.15, and then do things like the iomap
conversion in the next merge window.  If we have everything ready
this window we could still add a branch that builds on top of both
the iomap and folio trees, though.
Matthew Wilcox July 21, 2021, 4:31 a.m. UTC | #5
On Wed, Jul 21, 2021 at 05:27:49AM +0100, Christoph Hellwig wrote:
> On Tue, Jul 20, 2021 at 05:12:19PM -0700, Darrick J. Wong wrote:
> > I /am/ beginning to wonder, though -- seeing as Christoph and Matthew
> > both have very large patchsets changing things in fs/iomap/, how would
> > you like those landed?  Christoph's iterator refactoring looks like it
> > could be ready to go for 5.15.  Matthew's folio series looks like a
> > mostly straightforward conversion for iomap, except that it has 91
> > patches as a hard dependency.
> > 
> > Since most of the iomap changes for 5.15 aren't directly related to
> > folios, I think I prefer iomap-for-next to be based directly off -rcX
> > like usual, though I don't know where that leaves the iomap folio
> > conversion.  I suppose one could add them to a branch that itself is a
> > result of the folio and iomap branches, or leave them off for 5.16?
> 
> Maybe willy has a different opinion, but I thought the plan was to have
> the base folio enablement in 5.15, and then do things like the iomap
> conversion in the next merge window.  If we have everything ready
> this window we could still add a branch that builds on top of both
> the iomap and folio trees, though.

Yes, my plan was to have the iomap conversion and the second half of the
page cache work hit 5.16.  If we're ready earlier, that's great!  Both
you and I want to see both the folio work and the iomap_iter work
get merged, so I don't anticipate any lack of will to get the work done.
Darrick J. Wong July 21, 2021, 3:28 p.m. UTC | #6
On Wed, Jul 21, 2021 at 05:31:16AM +0100, Matthew Wilcox wrote:
> On Wed, Jul 21, 2021 at 05:27:49AM +0100, Christoph Hellwig wrote:
> > On Tue, Jul 20, 2021 at 05:12:19PM -0700, Darrick J. Wong wrote:
> > > I /am/ beginning to wonder, though -- seeing as Christoph and Matthew
> > > both have very large patchsets changing things in fs/iomap/, how would
> > > you like those landed?  Christoph's iterator refactoring looks like it
> > > could be ready to go for 5.15.  Matthew's folio series looks like a
> > > mostly straightforward conversion for iomap, except that it has 91
> > > patches as a hard dependency.
> > > 
> > > Since most of the iomap changes for 5.15 aren't directly related to
> > > folios, I think I prefer iomap-for-next to be based directly off -rcX
> > > like usual, though I don't know where that leaves the iomap folio
> > > conversion.  I suppose one could add them to a branch that itself is a
> > > result of the folio and iomap branches, or leave them off for 5.16?
> > 
> > Maybe willy has a different opinion, but I thought the plan was to have
> > the base folio enablement in 5.15, and then do things like the iomap
> > conversion in the next merge window.  If we have everything ready
> > this window we could still add a branch that builds on top of both
> > the iomap and folio trees, though.
> 
> Yes, my plan was to have the iomap conversion and the second half of the
> page cache work hit 5.16.  If we're ready earlier, that's great!  Both
> you and I want to see both the folio work and the iomap_iter work
> get merged, so I don't anticipate any lack of will to get the work done.

Ok, good.  I'll await a non-RFC version of the iterator rework for 5.15,
and folio conversions for 5.16.

--D

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index f8ca307270e7..60d3b7af61d1 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1253,36 +1253,29 @@  iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
  * first, otherwise finish off the current ioend and start another.
  */
 static void
-iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
+iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio,
 		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct list_head *iolist)
 {
-	sector_t sector = iomap_sector(&wpc->iomap, offset);
+	sector_t sector = iomap_sector(&wpc->iomap, pos);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
-	bool merged, same_page = false;
+	size_t poff = offset_in_folio(folio, pos);
 
-	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
+	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
+		wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc);
 	}
 
-	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
-			&same_page);
 	if (iop)
 		atomic_add(len, &iop->write_bytes_pending);
-
-	if (!merged) {
-		if (bio_full(wpc->ioend->io_bio, len)) {
-			wpc->ioend->io_bio =
-				iomap_chain_bio(wpc->ioend->io_bio);
-		}
-		bio_add_page(wpc->ioend->io_bio, page, len, poff);
+	if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) {
+		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
+		bio_add_folio(wpc->ioend->io_bio, folio, len, poff);
 	}
 
 	wpc->ioend->io_size += len;
-	wbc_account_cgroup_owner(wbc, page, len);
+	wbc_account_cgroup_owner(wbc, &folio->page, len);
 }
 
 /*
@@ -1304,45 +1297,43 @@  iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 static int
 iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
-		struct page *page, u64 end_offset)
+		struct folio *folio, loff_t end_pos)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
-	u64 file_offset; /* file offset of page */
+	unsigned nblocks = i_blocks_per_folio(inode, folio);
+	loff_t pos = folio_pos(folio);
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0);
 
 	/*
-	 * Walk through the page to find areas to write back. If we run off the
-	 * end of the current map or find the current map invalid, grab a new
-	 * one.
+	 * Walk through the folio to find areas to write back. If we
+	 * run off the end of the current map or find the current map
+	 * invalid, grab a new one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
-	     i++, file_offset += len) {
+	for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) {
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
 
-		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		error = wpc->ops->map_blocks(wpc, inode, pos);
 		if (error)
 			break;
 		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
 				 &submit_list);
 		count++;
 	}
 
 	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
-	WARN_ON_ONCE(!PageLocked(page));
-	WARN_ON_ONCE(PageWriteback(page));
-	WARN_ON_ONCE(PageDirty(page));
+	WARN_ON_ONCE(!folio_test_locked(folio));
+	WARN_ON_ONCE(folio_test_writeback(folio));
+	WARN_ON_ONCE(folio_test_dirty(folio));
 
 	/*
 	 * We cannot cancel the ioend directly here on error.  We may have
@@ -1358,16 +1349,16 @@  iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		 * now.
 		 */
 		if (wpc->ops->discard_page)
-			wpc->ops->discard_page(page, file_offset);
+			wpc->ops->discard_page(&folio->page, pos);
 		if (!count) {
-			ClearPageUptodate(page);
-			unlock_page(page);
+			folio_clear_uptodate(folio);
+			folio_unlock(folio);
 			goto done;
 		}
 	}
 
-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);
 
 	/*
 	 * Preserve the original error if there was one, otherwise catch
@@ -1388,9 +1379,9 @@  iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * with a partial page truncate on a sub-page block sized filesystem.
 	 */
 	if (!count)
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 done:
-	mapping_set_error(page->mapping, error);
+	mapping_set_error(folio->mapping, error);
 	return error;
 }
 
@@ -1404,16 +1395,15 @@  iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 static int
 iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
+	struct folio *folio = page_folio(page);
 	struct iomap_writepage_ctx *wpc = data;
-	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
-	u64 end_offset;
-	loff_t offset;
+	struct inode *inode = folio->mapping->host;
+	loff_t end_pos, isize;
 
-	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);
+	trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio));
 
 	/*
-	 * Refuse to write the page out if we are called from reclaim context.
+	 * Refuse to write the folio out if we are called from reclaim context.
 	 *
 	 * This avoids stack overflows when called from deeply used stacks in
 	 * random callers for direct reclaim or memcg reclaim.  We explicitly
@@ -1427,10 +1417,10 @@  iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		goto redirty;
 
 	/*
-	 * Is this page beyond the end of the file?
+	 * Is this folio beyond the end of the file?
 	 *
-	 * The page index is less than the end_index, adjust the end_offset
-	 * to the highest offset that this page should represent.
+	 * The folio index is less than the end_index, adjust the end_pos
+	 * to the highest offset that this folio should represent.
 	 * -----------------------------------------------------
 	 * |			file mapping	       | <EOF> |
 	 * -----------------------------------------------------
@@ -1439,11 +1429,9 @@  iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * |     desired writeback range    |      see else    |
 	 * ---------------------------------^------------------|
 	 */
-	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	isize = i_size_read(inode);
+	end_pos = folio_pos(folio) + folio_size(folio);
+	if (end_pos - 1 >= isize) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1455,7 +1443,8 @@  iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |				    |      Straddles     |
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		size_t poff = offset_in_folio(folio, isize);
+		pgoff_t end_index = isize >> PAGE_SHIFT;
 
 		/*
 		 * Skip the page if it is fully outside i_size, e.g. due to a
@@ -1474,8 +1463,8 @@  iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * if the page to write is totally beyond the i_size or if it's
 		 * offset is just equal to the EOF.
 		 */
-		if (page->index > end_index ||
-		    (page->index == end_index && offset_into_page == 0))
+		if (folio->index > end_index ||
+		    (folio->index == end_index && poff == 0))
 			goto redirty;
 
 		/*
@@ -1486,17 +1475,15 @@  iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
-
-		/* Adjust the end_offset to the end of file */
-		end_offset = offset;
+		zero_user_segment(&folio->page, poff, folio_size(folio));
+		end_pos = isize;
 	}
 
-	return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
+	return iomap_writepage_map(wpc, wbc, inode, folio, end_pos);
 
 redirty:
-	redirty_page_for_writepage(wbc, page);
-	unlock_page(page);
+	folio_redirty_for_writepage(wbc, folio);
+	folio_unlock(folio);
 	return 0;
 }
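
A closing note on the EOF rework in iomap_do_writepage() above:
instead of deriving end_offset from the page index, the new code
computes end_pos directly from the folio and compares it against
i_size.  A standalone sketch of the arithmetic with invented numbers
(plain C for illustration, not kernel code; the comments name the
kernel helpers each value stands in for):

	#include <stdio.h>

	int main(void)
	{
		long long isize = 70000;		/* i_size_read(inode) */
		long long pos = 61440;			/* folio_pos(folio): 16KiB folio at 60KiB */
		long long fsize = 16384;		/* folio_size(folio) */
		long long end_pos = pos + fsize;	/* 77824 */

		if (end_pos - 1 >= isize) {		/* folio straddles or passes EOF */
			long long poff = isize - pos;	/* offset_in_folio(folio, isize) == 8560 */

			/* The patch zeroes the folio tail from poff and
			 * clamps the writeback range to i_size.
			 */
			printf("straddles EOF: zero from %lld, clamp end_pos to %lld\n",
			       poff, isize);
			end_pos = isize;
		}
		printf("writeback range: [%lld, %lld)\n", pos, end_pos);
		return 0;
	}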