
[4/9] btrfs: move the cow_fixup earlier in writepages handling

Message ID 20230724132701.816771-5-hch@lst.de (mailing list archive)
State New, archived
Series [1/9] btrfs: don't stop integrity writeback too early

Commit Message

Christoph Hellwig July 24, 2023, 1:26 p.m. UTC
btrfs has a special fixup for pages that are marked dirty without having
space reserved for them.  But the place where it is run means it can't
work for I/O that isn't kicked off inline from __extent_writepage, most
notably compressed I/O and I/O to zoned file systems.

Move the fixup earlier, based on not finding any delalloc range in the
I/O tree, to cover this case as well instead of relying on the fairly
obscure fallthrough behavior that calls __extent_writepage_io even when
no delalloc space was found.

Fixes: c8b978188c9a ("Btrfs: Add zlib compression support")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/extent_io.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)
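
For orientation, a condensed sketch of the pre-patch call flow in
__extent_writepage() around the fallthrough mentioned above (illustrative
pseudocode distilled from the code this patch touches, not an exact copy;
locals, arguments and error handling are elided):

/* pre-patch flow, condensed for illustration only */
static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
{
	int ret;

	/* runs btrfs_run_delalloc_range() on every delalloc range of the page */
	ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc);
	if (ret == 1)
		return 0;	/* the I/O was already handed off elsewhere */
	if (ret)
		goto done;

	/*
	 * Fallthrough: called even when no delalloc range was found, and its
	 * first action, btrfs_writepage_cow_fixup(), is what catches pages
	 * dirtied without a reservation.  Compressed and zoned writeback is
	 * kicked off from btrfs_run_delalloc_range() (ending up in
	 * extent_write_locked_range()) and never takes this inline path.
	 */
	ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr);
done:
	/* remainder omitted */
	return ret;
}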

Comments

Boris Burkov Aug. 3, 2023, 12:21 a.m. UTC | #1
On Mon, Jul 24, 2023 at 06:26:56AM -0700, Christoph Hellwig wrote:
> btrfs has a special fixup for pages that are marked dirty without having
> space reserved for them.  But the place where it is run means it can't
> work for I/O that isn't kicked off inline from __extent_writepage, most
> notably compressed I/O and I/O to zoned file systems.
> 
> Move the fixup earlier, based on not finding any delalloc range in the
> I/O tree, to cover this case as well instead of relying on the fairly
> obscure fallthrough behavior that calls __extent_writepage_io even when
> no delalloc space was found.

This almost makes sense to me, but not quite. As far as I can tell, the
zoned and compressed cases you are describing are the cases in
btrfs_run_delalloc_range which end up calling extent_write_locked_range.
And indeed, if that happens, it appears we return 1, don't call
__extent_writepage_io, and don't do the redirty check. However, if that
happens, then your new code won't run either, because it will set
found_delalloc after btrfs_run_delalloc_range returns 1 (>= 0).

Therefore, it must be the case that your new check uses the assumption
that in any case where the fixup would trip, the find_delalloc must have
failed as well. Let's assume that's true, because we always set the
delalloc bit for any page we properly dirty. Even then, this feels like
it strictly reduces the cases in which we do the fixup.

To me it seems like, best case, this is a no-op change, and worst case it
reduces the cases where we catch wrongly dirtied pages.

Put another way, I don't see a codepath which hits this logic, but
doesn't hit the old logic.
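
Spelling out my reading of the patched writepage_delalloc() loop (condensed
from the hunk above, not the exact code):

	while (delalloc_start < page_end) {
		if (!find_lock_delalloc_range(&inode->vfs_inode, page,
					      &delalloc_start, &delalloc_end)) {
			delalloc_start = delalloc_end + 1;
			continue;	/* nothing here, found_delalloc stays false */
		}

		ret = btrfs_run_delalloc_range(inode, page, delalloc_start,
					       delalloc_end, wbc);
		if (ret < 0)
			return ret;

		delalloc_start = delalloc_end + 1;
		found_delalloc = true;	/* set even when ret == 1 (compressed/zoned) */
	}

	/* only reachable when no delalloc range was found on the page at all */
	if (!found_delalloc && btrfs_writepage_cow_fixup(page)) {
		redirty_page_for_writepage(wbc, page);
		unlock_page(page);
		return 1;
	}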

Do you have a reproducer for what this is fixing?

Thanks,
Boris

> 
> Fixes: c8b978188c9a ("Btrfs: Add zlib compression support")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/extent_io.c | 31 +++++++++++++++++--------------
>  1 file changed, 17 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 1cc46bbbd888cd..cc258bddd88eab 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1153,6 +1153,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
>  	u64 delalloc_start = page_start;
>  	u64 delalloc_end = page_end;
>  	u64 delalloc_to_write = 0;
> +	bool found_delalloc = false;
>  	int ret = 0;
>  
>  	while (delalloc_start < page_end) {
> @@ -1169,6 +1170,22 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
>  			return ret;
>  
>  		delalloc_start = delalloc_end + 1;
> +		found_delalloc = true;
> +	}
> +
> +	/*
> +	 * If we did not find any delalloc range in the io_tree, this must be
> +	 * the rare case of dirtying pages through get_user_pages without
> +	 * calling into ->page_mkwrite.
> +	 * While these are in the process of being fixed by switching to
> +	 * pin_user_pages, some are still around and need to be worked around
> +	 * by creating a delalloc reservation in a fixup worker, and waiting
> +	 * for us to be called again with that reservation.
> +	 */
> +	if (!found_delalloc && btrfs_writepage_cow_fixup(page)) {
> +		redirty_page_for_writepage(wbc, page);
> +		unlock_page(page);
> +		return 1;
>  	}
>  
>  	/*
> @@ -1274,14 +1291,6 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
>  	int ret = 0;
>  	int nr = 0;
>  
> -	ret = btrfs_writepage_cow_fixup(page);
> -	if (ret) {
> -		/* Fixup worker will requeue */
> -		redirty_page_for_writepage(bio_ctrl->wbc, page);
> -		unlock_page(page);
> -		return 1;
> -	}
> -
>  	bio_ctrl->end_io_func = end_bio_extent_writepage;
>  	while (cur <= end) {
>  		u32 len = end - cur + 1;
> @@ -1421,9 +1430,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
>  		goto done;
>  
>  	ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr);
> -	if (ret == 1)
> -		return 0;
> -
>  	bio_ctrl->wbc->nr_to_write--;
>  
>  done:
> @@ -2176,8 +2182,6 @@ void extent_write_locked_range(struct inode *inode, struct page *locked_page,
>  
>  		ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl,
>  					    i_size, &nr);
> -		if (ret == 1)
> -			goto next_page;
>  
>  		/* Make sure the mapping tag for page dirty gets cleared. */
>  		if (nr == 0) {
> @@ -2193,7 +2197,6 @@ void extent_write_locked_range(struct inode *inode, struct page *locked_page,
>  		btrfs_page_unlock_writer(fs_info, page, cur, cur_len);
>  		if (ret < 0)
>  			found_error = true;
> -next_page:
>  		put_page(page);
>  		cur = cur_end + 1;
>  	}
> -- 
> 2.39.2
>

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1cc46bbbd888cd..cc258bddd88eab 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1153,6 +1153,7 @@  static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 	u64 delalloc_start = page_start;
 	u64 delalloc_end = page_end;
 	u64 delalloc_to_write = 0;
+	bool found_delalloc = false;
 	int ret = 0;
 
 	while (delalloc_start < page_end) {
@@ -1169,6 +1170,22 @@  static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
 			return ret;
 
 		delalloc_start = delalloc_end + 1;
+		found_delalloc = true;
+	}
+
+	/*
+	 * If we did not find any delalloc range in the io_tree, this must be
+	 * the rare case of dirtying pages through get_user_pages without
+	 * calling into ->page_mkwrite.
+	 * While these are in the process of being fixed by switching to
+	 * pin_user_pages, some are still around and need to be worked around
+	 * by creating a delalloc reservation in a fixup worker, and waiting
+	 * for us to be called again with that reservation.
+	 */
+	if (!found_delalloc && btrfs_writepage_cow_fixup(page)) {
+		redirty_page_for_writepage(wbc, page);
+		unlock_page(page);
+		return 1;
 	}
 
 	/*
@@ -1274,14 +1291,6 @@  static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
 	int ret = 0;
 	int nr = 0;
 
-	ret = btrfs_writepage_cow_fixup(page);
-	if (ret) {
-		/* Fixup worker will requeue */
-		redirty_page_for_writepage(bio_ctrl->wbc, page);
-		unlock_page(page);
-		return 1;
-	}
-
 	bio_ctrl->end_io_func = end_bio_extent_writepage;
 	while (cur <= end) {
 		u32 len = end - cur + 1;
@@ -1421,9 +1430,6 @@  static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
 		goto done;
 
 	ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr);
-	if (ret == 1)
-		return 0;
-
 	bio_ctrl->wbc->nr_to_write--;
 
 done:
@@ -2176,8 +2182,6 @@  void extent_write_locked_range(struct inode *inode, struct page *locked_page,
 
 		ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl,
 					    i_size, &nr);
-		if (ret == 1)
-			goto next_page;
 
 		/* Make sure the mapping tag for page dirty gets cleared. */
 		if (nr == 0) {
@@ -2193,7 +2197,6 @@  void extent_write_locked_range(struct inode *inode, struct page *locked_page,
 		btrfs_page_unlock_writer(fs_info, page, cur, cur_len);
 		if (ret < 0)
 			found_error = true;
-next_page:
 		put_page(page);
 		cur = cur_end + 1;
 	}