
[3/3] xfs, iomap: ->discard_folio() is broken so remove it

Message ID 20230214055114.4141947-4-david@fromorbit.com (mailing list archive)
State New, archived
Series xfs, iomap: fix writeback failure handling

Commit Message

Dave Chinner Feb. 14, 2023, 5:51 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
after writeback errors") XFS and iomap have been retaining dirty
folios in memory after a writeback error. XFS no longer invalidates
the folio, and iomap no longer clears the folio uptodate state.

However, iomap is still calling ->discard_folio() on error, and
XFS is still punching the delayed allocation range backing the dirty
folio.

This is incorrect behaviour. The folio remains dirty and up to date,
meaning that another writeback will be attempted in the near future.
This means that XFS is still going to have to allocate space for it
during writeback, and that means it still needs to have a delayed
allocation reservation and extent backing the dirty folio.

Failure to retain the delalloc extent (because xfs_discard_folio()
punched it out) means that the next writeback attempt does not find
an extent over the range of the write in ->map_blocks(), and
xfs_map_blocks() triggers a WARN_ON() because it should never land
in a hole for a data fork writeback request. This looks like:

[  647.356969] ------------[ cut here ]------------
[  647.359277] WARNING: CPU: 14 PID: 21913 at fs/xfs/libxfs/xfs_bmap.c:4510 xfs_bmapi_convert_delalloc+0x221/0x4e0
[  647.364551] Modules linked in:
[  647.366294] CPU: 14 PID: 21913 Comm: test_delalloc_c Not tainted 6.2.0-rc7-dgc+ #1754
[  647.370356] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-5 04/01/2014
[  647.374781] RIP: 0010:xfs_bmapi_convert_delalloc+0x221/0x4e0
[  647.377807] Code: e9 7d fe ff ff 80 bf 54 01 00 00 00 0f 84 68 fe ff ff 48 8d 47 70 48 89 04 24 e9 63 fe ff ff 83 fd 02 41 be f5 ff ff ff 74 a5 <0f> 0b eb a0
[  647.387242] RSP: 0018:ffffc9000aa677a8 EFLAGS: 00010293
[  647.389837] RAX: 0000000000000000 RBX: ffff88825bc4da00 RCX: 0000000000000000
[  647.393371] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88825bc4da40
[  647.396546] RBP: 0000000000000000 R08: ffffc9000aa67810 R09: ffffc9000aa67850
[  647.400186] R10: ffff88825bc4da00 R11: ffff888800a9aaac R12: ffff888101707000
[  647.403484] R13: ffffc9000aa677e0 R14: 00000000fffffff5 R15: 0000000000000004
[  647.406251] FS:  00007ff35ec24640(0000) GS:ffff88883ed00000(0000) knlGS:0000000000000000
[  647.410089] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  647.413225] CR2: 00007f7292cbc5d0 CR3: 0000000807d0e004 CR4: 0000000000060ee0
[  647.416917] Call Trace:
[  647.418080]  <TASK>
[  647.419291]  ? _raw_spin_unlock_irqrestore+0xe/0x30
[  647.421400]  xfs_map_blocks+0x1b7/0x590
[  647.422951]  iomap_do_writepage+0x1f1/0x7d0
[  647.424607]  ? __mod_lruvec_page_state+0x93/0x140
[  647.426419]  write_cache_pages+0x17b/0x4f0
[  647.428079]  ? iomap_read_end_io+0x2c0/0x2c0
[  647.429839]  iomap_writepages+0x1c/0x40
[  647.431377]  xfs_vm_writepages+0x79/0xb0
[  647.432826]  do_writepages+0xbd/0x1a0
[  647.434207]  ? obj_cgroup_release+0x73/0xb0
[  647.435769]  ? drain_obj_stock+0x130/0x290
[  647.437273]  ? avc_has_perm+0x8a/0x1a0
[  647.438746]  ? avc_has_perm_noaudit+0x8c/0x100
[  647.440223]  __filemap_fdatawrite_range+0x8e/0xa0
[  647.441960]  filemap_write_and_wait_range+0x3d/0xa0
[  647.444258]  __iomap_dio_rw+0x181/0x790
[  647.445960]  ? __schedule+0x385/0xa20
[  647.447829]  iomap_dio_rw+0xe/0x30
[  647.449284]  xfs_file_dio_write_aligned+0x97/0x150
[  647.451332]  ? selinux_file_permission+0x107/0x150
[  647.453299]  xfs_file_write_iter+0xd2/0x120
[  647.455238]  vfs_write+0x20d/0x3d0
[  647.456768]  ksys_write+0x69/0xf0
[  647.458067]  do_syscall_64+0x34/0x80
[  647.459488]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[  647.461529] RIP: 0033:0x7ff3651406e9
[  647.463119] Code: 48 8d 3d 2a a1 0c 00 0f 05 eb a5 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f8
[  647.470563] RSP: 002b:00007ff35ec23df8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[  647.473465] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff3651406e9
[  647.476278] RDX: 0000000000001400 RSI: 0000000020000000 RDI: 0000000000000005
[  647.478895] RBP: 00007ff35ec23e20 R08: 0000000000000005 R09: 0000000000000000
[  647.481568] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe533d8d4e
[  647.483751] R13: 00007ffe533d8d4f R14: 0000000000000000 R15: 00007ff35ec24640
[  647.486168]  </TASK>
[  647.487142] ---[ end trace 0000000000000000 ]---

Punching delalloc extents out from under dirty cached pages is wrong
and broken. We can't remove the delalloc extent until the page is
either removed from memory (i.e. invalidated) or writeback succeeds
in converting the delalloc extent to a real extent and writeback can
clean the page.

Hence we remove xfs_discard_folio() because it is only punching
delalloc blocks from under dirty pages now. With that removal,
nothing else uses ->discard_folio(), so we remove that from the
iomap infrastructure as well.

Reported-by: pengfei.xu@intel.com
Fixes: e9c3a8e820ed ("iomap: don't invalidate folios after writeback errors")
Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/iomap/buffered-io.c | 16 +++-------------
 fs/xfs/xfs_aops.c      | 35 -----------------------------------
 include/linux/iomap.h  |  6 ------
 3 files changed, 3 insertions(+), 54 deletions(-)

Comments

Christoph Hellwig Feb. 14, 2023, 8:14 a.m. UTC | #1
Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Brian Foster Feb. 14, 2023, 6:10 p.m. UTC | #2
On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> after writeback errors") XFS and iomap have been retaining dirty
> folios in memory after a writeback error. XFS no longer invalidates
> the folio, and iomap no longer clears the folio uptodate state.
> 
> However, iomap is still been calling ->discard_folio on error, and
> XFS is still punching the delayed allocation range backing the dirty
> folio.
> 
> This is incorrect behaviour. The folio remains dirty and up to date,
> meaning that another writeback will be attempted in the near future.
> THis means that XFS is still going to have to allocate space for it
> during writeback, and that means it still needs to have a delayed
> allocation reservation and extent backing the dirty folio.
> 

Hmm.. I don't think that is correct. It looks like the previous patch
removes the invalidation, but writeback clears the dirty bit before
calling into the fs and we're not doing anything to redirty the folio,
so there's no guarantee of subsequent writeback. As of that patch we
presumably leave around a !dirty,uptodate folio without backing storage
(due to the discard call as you've pointed out). I would hope/think the
!dirty state would mean a redirty reallocates delalloc for the folio,
but that's not immediately clear to me.
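
For reference, the ordering is roughly this (a simplified sketch, not
verbatim kernel code; next_dirty_folio() is a made-up stand-in for the
real folio batch lookup):

	while ((folio = next_dirty_folio(mapping, wbc)) != NULL) {
		folio_lock(folio);
		/* the dirty bit is cleared here, before the fs sees the folio */
		if (!folio_clear_dirty_for_io(folio)) {
			folio_unlock(folio);
			continue;
		}
		/*
		 * The writepage callback (iomap_do_writepage() in this path)
		 * runs with the folio already clean.  If mapping fails and
		 * nothing redirties the folio, it is left clean and uptodate
		 * with no further writeback scheduled for it.
		 */
		error = writepage(folio, wbc, data);
	}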

Regardless, I can see how this prevents this sort of error in the
scenario where writeback fails due to corruption, but I don't see how it
doesn't just break error handling of writeback failures not associated
with corruption. I.e., a delalloc folio is allocated/dirtied, writeback
fails due to some random/transient error, delalloc is left around on a
!dirty page (i.e. stale), and reclaim eventually comes around and
results in the usual block accounting corruption associated with stale
delalloc blocks. This is easy enough to test/reproduce (just tried it
via error injection to delalloc conversion) that I'm kind of surprised
fstests doesn't uncover it. :/

> Failure to retain the delalloc extent (because xfs_discard_folio()
> punched it out) means that the next writeback attempt does not find
> an extent over the range of the write in ->map_blocks(), and
> xfs_map_blocks() triggers a WARN_ON() because it should never land
> in a hole for a data fork writeback request. This looks like:
> 

I'm not sure this warning makes a lot of sense either given most of this
should occur around the folio lock. Looking back at the code and the
error report for this, the same error injection used above on a 5k write
to a bsize=1k fs actually shows the punch removing fsb offsets 0-5 on a
writeback failure, so it does appear to be punching too much out. The
cause appears to be that the end offset is calculated in
xfs_discard_folio() by rounding up the start offset to 4k (folio size).
If pos == 0, this results in passing end_fsb == 0 to the punch code,
which xfs_iext_lookup_extent_before() then changes to fsb == 5 because
that's the last block of the delalloc extent that covers fsb 0.
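
To make the arithmetic concrete (a simplified sketch of the
calculation, not the actual xfs_discard_folio() code):

	loff_t	pos = 0;			/* dirty folio at file offset 0 */
	loff_t	end = round_up(pos, folio_size(folio));	/* == 0, not 4096 */

	/*
	 * The punch request is the empty byte range [0, 0), so end_fsb == 0
	 * reaches the punch code; xfs_iext_lookup_extent_before() then moves
	 * that to the last block of the delalloc extent covering fsb 0 --
	 * fsb 5 for the 5k write -- so fsb 0-5 get punched out.
	 */
	xfs_bmap_punch_delalloc_range(ip, pos, end);

	/*
	 * A plausible fix (an assumption here, not necessarily what gets
	 * committed) is to always cover the whole folio:
	 */
	end = pos + folio_size(folio);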

I've not reproduced the warning shown below, but I do see the side
effect of losing data at fsb 5 if the first page conversion fails. This
is silent because iomap now sees a hole and just skips the page. I
suspect the warning results from a combination of this problem and
racing writeback contexts as you've described in the commit log.

Brian

> [  647.356969] ------------[ cut here ]------------
> [  647.359277] WARNING: CPU: 14 PID: 21913 at fs/xfs/libxfs/xfs_bmap.c:4510 xfs_bmapi_convert_delalloc+0x221/0x4e0
> [  647.364551] Modules linked in:
> [  647.366294] CPU: 14 PID: 21913 Comm: test_delalloc_c Not tainted 6.2.0-rc7-dgc+ #1754
> [  647.370356] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-5 04/01/2014
> [  647.374781] RIP: 0010:xfs_bmapi_convert_delalloc+0x221/0x4e0
> [  647.377807] Code: e9 7d fe ff ff 80 bf 54 01 00 00 00 0f 84 68 fe ff ff 48 8d 47 70 48 89 04 24 e9 63 fe ff ff 83 fd 02 41 be f5 ff ff ff 74 a5 <0f> 0b eb a0
> [  647.387242] RSP: 0018:ffffc9000aa677a8 EFLAGS: 00010293
> [  647.389837] RAX: 0000000000000000 RBX: ffff88825bc4da00 RCX: 0000000000000000
> [  647.393371] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88825bc4da40
> [  647.396546] RBP: 0000000000000000 R08: ffffc9000aa67810 R09: ffffc9000aa67850
> [  647.400186] R10: ffff88825bc4da00 R11: ffff888800a9aaac R12: ffff888101707000
> [  647.403484] R13: ffffc9000aa677e0 R14: 00000000fffffff5 R15: 0000000000000004
> [  647.406251] FS:  00007ff35ec24640(0000) GS:ffff88883ed00000(0000) knlGS:0000000000000000
> [  647.410089] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  647.413225] CR2: 00007f7292cbc5d0 CR3: 0000000807d0e004 CR4: 0000000000060ee0
> [  647.416917] Call Trace:
> [  647.418080]  <TASK>
> [  647.419291]  ? _raw_spin_unlock_irqrestore+0xe/0x30
> [  647.421400]  xfs_map_blocks+0x1b7/0x590
> [  647.422951]  iomap_do_writepage+0x1f1/0x7d0
> [  647.424607]  ? __mod_lruvec_page_state+0x93/0x140
> [  647.426419]  write_cache_pages+0x17b/0x4f0
> [  647.428079]  ? iomap_read_end_io+0x2c0/0x2c0
> [  647.429839]  iomap_writepages+0x1c/0x40
> [  647.431377]  xfs_vm_writepages+0x79/0xb0
> [  647.432826]  do_writepages+0xbd/0x1a0
> [  647.434207]  ? obj_cgroup_release+0x73/0xb0
> [  647.435769]  ? drain_obj_stock+0x130/0x290
> [  647.437273]  ? avc_has_perm+0x8a/0x1a0
> [  647.438746]  ? avc_has_perm_noaudit+0x8c/0x100
> [  647.440223]  __filemap_fdatawrite_range+0x8e/0xa0
> [  647.441960]  filemap_write_and_wait_range+0x3d/0xa0
> [  647.444258]  __iomap_dio_rw+0x181/0x790
> [  647.445960]  ? __schedule+0x385/0xa20
> [  647.447829]  iomap_dio_rw+0xe/0x30
> [  647.449284]  xfs_file_dio_write_aligned+0x97/0x150
> [  647.451332]  ? selinux_file_permission+0x107/0x150
> [  647.453299]  xfs_file_write_iter+0xd2/0x120
> [  647.455238]  vfs_write+0x20d/0x3d0
> [  647.456768]  ksys_write+0x69/0xf0
> [  647.458067]  do_syscall_64+0x34/0x80
> [  647.459488]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> [  647.461529] RIP: 0033:0x7ff3651406e9
> [  647.463119] Code: 48 8d 3d 2a a1 0c 00 0f 05 eb a5 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f8
> [  647.470563] RSP: 002b:00007ff35ec23df8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> [  647.473465] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff3651406e9
> [  647.476278] RDX: 0000000000001400 RSI: 0000000020000000 RDI: 0000000000000005
> [  647.478895] RBP: 00007ff35ec23e20 R08: 0000000000000005 R09: 0000000000000000
> [  647.481568] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe533d8d4e
> [  647.483751] R13: 00007ffe533d8d4f R14: 0000000000000000 R15: 00007ff35ec24640
> [  647.486168]  </TASK>
> [  647.487142] ---[ end trace 0000000000000000 ]---
> 
> Punching delalloc extents out from under dirty cached pages is wrong
> and broken. We can't remove the delalloc extent until the page is
> either removed from memory (i.e. invaliated) or writeback succeeds
> in converting the delalloc extent to a real extent and writeback can
> clean the page.
> 
> Hence we remove xfs_discard_folio() because it is only punching
> delalloc blocks from under dirty pages now. With that removal,
> nothing else uses ->discard_folio(), so we remove that from the
> iomap infrastructure as well.
> 
> Reported-by: pengfei.xu@intel.com
> Fixes: e9c3a8e820ed ("iomap: don't invalidate folios after writeback errors")
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/iomap/buffered-io.c | 16 +++-------------
>  fs/xfs/xfs_aops.c      | 35 -----------------------------------
>  include/linux/iomap.h  |  6 ------
>  3 files changed, 3 insertions(+), 54 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 356193e44cf0..502fa2d41097 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1635,19 +1635,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  	 * completion to mark the error state of the pages under writeback
>  	 * appropriately.
>  	 */
> -	if (unlikely(error)) {
> -		/*
> -		 * Let the filesystem know what portion of the current page
> -		 * failed to map. If the page hasn't been added to ioend, it
> -		 * won't be affected by I/O completion and we must unlock it
> -		 * now.
> -		 */
> -		if (wpc->ops->discard_folio)
> -			wpc->ops->discard_folio(folio, pos);
> -		if (!count) {
> -			folio_unlock(folio);
> -			goto done;
> -		}
> +	if (unlikely(error && !count)) {
> +		folio_unlock(folio);
> +		goto done;
>  	}
>  
>  	folio_start_writeback(folio);
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 41734202796f..3f0dae5ca9c2 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -448,44 +448,9 @@ xfs_prepare_ioend(
>  	return status;
>  }
>  
> -/*
> - * If the page has delalloc blocks on it, we need to punch them out before we
> - * invalidate the page.  If we don't, we leave a stale delalloc mapping on the
> - * inode that can trip up a later direct I/O read operation on the same region.
> - *
> - * We prevent this by truncating away the delalloc regions on the page.  Because
> - * they are delalloc, we can do this without needing a transaction. Indeed - if
> - * we get ENOSPC errors, we have to be able to do this truncation without a
> - * transaction as there is no space left for block reservation (typically why we
> - * see a ENOSPC in writeback).
> - */
> -static void
> -xfs_discard_folio(
> -	struct folio		*folio,
> -	loff_t			pos)
> -{
> -	struct xfs_inode	*ip = XFS_I(folio->mapping->host);
> -	struct xfs_mount	*mp = ip->i_mount;
> -	int			error;
> -
> -	if (xfs_is_shutdown(mp))
> -		return;
> -
> -	xfs_alert_ratelimited(mp,
> -		"page discard on page "PTR_FMT", inode 0x%llx, pos %llu.",
> -			folio, ip->i_ino, pos);
> -
> -	error = xfs_bmap_punch_delalloc_range(ip, pos,
> -			round_up(pos, folio_size(folio)));
> -
> -	if (error && !xfs_is_shutdown(mp))
> -		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
> -}
> -
>  static const struct iomap_writeback_ops xfs_writeback_ops = {
>  	.map_blocks		= xfs_map_blocks,
>  	.prepare_ioend		= xfs_prepare_ioend,
> -	.discard_folio		= xfs_discard_folio,
>  };
>  
>  STATIC int
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 0983dfc9a203..681e26a86791 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -310,12 +310,6 @@ struct iomap_writeback_ops {
>  	 * conversions.
>  	 */
>  	int (*prepare_ioend)(struct iomap_ioend *ioend, int status);
> -
> -	/*
> -	 * Optional, allows the file system to discard state on a page where
> -	 * we failed to submit any I/O.
> -	 */
> -	void (*discard_folio)(struct folio *folio, loff_t pos);
>  };
>  
>  struct iomap_writepage_ctx {
> -- 
> 2.39.0
>
Dave Chinner Feb. 14, 2023, 10:20 p.m. UTC | #3
On Tue, Feb 14, 2023 at 01:10:05PM -0500, Brian Foster wrote:
> On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> > after writeback errors") XFS and iomap have been retaining dirty
> > folios in memory after a writeback error. XFS no longer invalidates
> > the folio, and iomap no longer clears the folio uptodate state.
> > 
> > However, iomap is still been calling ->discard_folio on error, and
> > XFS is still punching the delayed allocation range backing the dirty
> > folio.
> > 
> > This is incorrect behaviour. The folio remains dirty and up to date,
> > meaning that another writeback will be attempted in the near future.
> > THis means that XFS is still going to have to allocate space for it
> > during writeback, and that means it still needs to have a delayed
> > allocation reservation and extent backing the dirty folio.
> > 
> 
> Hmm.. I don't think that is correct. It looks like the previous patch
> removes the invalidation, but writeback clears the dirty bit before
> calling into the fs and we're not doing anything to redirty the folio,
> so there's no guarantee of subsequent writeback.

Ah, right, I got confused with iomap_do_writepage() which redirties
folios it performs no action on. The case that is being tripped here
is "count == 0" which means no action has actually been taken on the
folio and it is not submitted for writeback. We don't mark the folio
with an error on submission failure like we do for errors reported
to IO completion, so the folio is just left in its current state
in the cache.

> Regardless, I can see how this prevents this sort of error in the
> scenario where writeback fails due to corruption, but I don't see how it
> doesn't just break error handling of writeback failures not associated
> with corruption.

What other cases in XFS do we have that cause mapping failure? We
can't get ENOSPC here because of delalloc reservations. We can't get
ENOMEM because all the memory allocations are blocking. That just
leaves IO errors reading metadata, or structure corruption when
parsing and modifying on-disk metadata.  I can't think (off the top
of my head) of any other type of error we can get returned from
allocation - what sort of non-corruption errors were you thinking
of here?

> fails due to some random/transient error, delalloc is left around on a
> !dirty page (i.e. stale), and reclaim eventually comes around and
> results in the usual block accounting corruption associated with stale
> delalloc blocks.

The first patches in the series fix those issues. If we get stray
delalloc extents on a healthy inode, then it will still trigger all
the warnings/asserts that we have now. But if the inode has been
marked sick by a corruption based allocation failure, it will clean
up in reclaim without leaking anything or throwing any new warnings.

> This is easy enough to test/reproduce (just tried it
> via error injection to delalloc conversion) that I'm kind of surprised
> fstests doesn't uncover it. :/

> > Failure to retain the delalloc extent (because xfs_discard_folio()
> > punched it out) means that the next writeback attempt does not find
> > an extent over the range of the write in ->map_blocks(), and
> > xfs_map_blocks() triggers a WARN_ON() because it should never land
> > in a hole for a data fork writeback request. This looks like:
> > 
> 
> I'm not sure this warning makes a lot of sense either given most of this
> should occur around the folio lock. Looking back at the code and the
> error report for this, the same error injection used above on a 5k write
> to a bsize=1k fs actually shows the punch remove fsb offsets 0-5 on a
> writeback failure, so it does appear to be punching too much out.  The
> cause appears to be that the end offset is calculated in
> xfs_discard_folio() by rounding up the start offset to 4k (folio size).
> If pos == 0, this results in passing end_fsb == 0 to the punch code,
> which xfs_iext_lookup_extent_before() then changes to fsb == 5 because
> that's the last block of the delalloc extent that covers fsb 0.

And that is the bug I could not see in commit 7348b322332d ("xfs:
xfs_bmap_punch_delalloc_range() should take a byte range") which is
what this warning was bisected down to. Thank you for identifying
the reason the bisect landed on that commit. Have you written a
fix to test out your reasoning that you can post?

Cheers,

Dave.
Dave Chinner Feb. 15, 2023, 1:26 a.m. UTC | #4
On Wed, Feb 15, 2023 at 09:20:00AM +1100, Dave Chinner wrote:
> On Tue, Feb 14, 2023 at 01:10:05PM -0500, Brian Foster wrote:
> > On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> > > after writeback errors") XFS and iomap have been retaining dirty
> > > folios in memory after a writeback error. XFS no longer invalidates
> > > the folio, and iomap no longer clears the folio uptodate state.
> > > 
> > > However, iomap is still been calling ->discard_folio on error, and
> > > XFS is still punching the delayed allocation range backing the dirty
> > > folio.
> > > 
> > > This is incorrect behaviour. The folio remains dirty and up to date,
> > > meaning that another writeback will be attempted in the near future.
> > > THis means that XFS is still going to have to allocate space for it
> > > during writeback, and that means it still needs to have a delayed
> > > allocation reservation and extent backing the dirty folio.
> > > 
> > 
> > Hmm.. I don't think that is correct. It looks like the previous patch
> > removes the invalidation, but writeback clears the dirty bit before
> > calling into the fs and we're not doing anything to redirty the folio,
> > so there's no guarantee of subsequent writeback.
> 
> Ah, right, I got confused with iomap_do_writepage() which redirties
> folios it performs no action on. The case that is being tripped here
> is "count == 0" which means no action has actually been taken on the
> folio and it is not submitted for writeback. We don't mark the folio
> with an error on submission failure like we do for errors reported
> to IO completion, so the folio is just left in it's current state
> in the cache.

OK, so after thinking on this for a little while, and then asking
the question on #xfs:

[15/2/23 09:39] <dchinner> so, if we don't start writeback on a page
on mapping failure, should we be redirtying it?

I think the direction this patchset is heading towards is the
correct direction. The discussion that followed pretty much leads to
needing to redirty the folio on any submission failure so that the
VFS infrastructure will try to write the data again in future. I've
included the full log of the discussion below so there is a record
of it in the lore archives.

I also think that redirtying the page is the right thing to do when
we consider that we are going to be trying to fix corruptions
online, without users even needing to know a corruption was
encountered. In this case, we need to keep the folio dirty so that
once we've repaired the metadata corruption the user data will be
written back.

This also points out another aspect where health status should be
taken into account. When we select an AG for allocation, we should
check first that it is healthy before trying to allocate from it.
This would allow writeback to fail the first time because the AG
selected was corrupt, but on the second VFS attempt to write it back
it won't select the AG we already know is corrupt and hence may well
succeed in allocating the space needed to perform writeback.
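
A rough sketch of what that could look like (hypothetical;
xfs_ag_is_healthy() is an illustrative name rather than an existing
helper, and the real check would live in the allocation AG selection
code):

	static bool
	xfs_ag_usable_for_writeback_alloc(
		struct xfs_perag	*pag)
	{
		/*
		 * Skip AGs already marked sick so a writeback retry can
		 * allocate from a healthy AG rather than tripping over the
		 * same corruption again.
		 */
		if (!xfs_ag_is_healthy(pag))
			return false;
		return pag->pagf_freeblks > 0;
	}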

It's these sorts of conditions that lead me to think that this
patchset is going in the right direction for XFS - we just need to
ensure that the folio we failed to submit bios for (even on mixed
folio writeback submission success/failure) is redirtied so that
future writeback attempts will be made.

Hence I think all this patchset needs is an additional patch that
adds a call to folio_redirty_for_writepage() when mapping failures
occur. We may need some additional fixes to ensure these dirty pages
are discarded at unmount if they are persistent/unrecoverable
failures, but this seems to be the right approach for the failure
handling behaviour we are trying to achieve now and into the
future...
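
Concretely, something like this in the iomap_writepage_map()
submission-failure path (a sketch against this series, assuming wbc is
available at that point as it is in iomap_do_writepage()):

	if (unlikely(error && !count)) {
		/*
		 * No bios were submitted for this folio.  Redirty it so the
		 * VFS writeback machinery retries it later instead of
		 * leaving clean, uptodate data that never gets written back.
		 */
		folio_redirty_for_writepage(wbc, folio);
		folio_unlock(folio);
		goto done;
	}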

Cheers,

Dave.
Brian Foster Feb. 15, 2023, 3:25 p.m. UTC | #5
On Wed, Feb 15, 2023 at 09:20:00AM +1100, Dave Chinner wrote:
> On Tue, Feb 14, 2023 at 01:10:05PM -0500, Brian Foster wrote:
> > On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> > > after writeback errors") XFS and iomap have been retaining dirty
> > > folios in memory after a writeback error. XFS no longer invalidates
> > > the folio, and iomap no longer clears the folio uptodate state.
> > > 
> > > However, iomap is still been calling ->discard_folio on error, and
> > > XFS is still punching the delayed allocation range backing the dirty
> > > folio.
> > > 
> > > This is incorrect behaviour. The folio remains dirty and up to date,
> > > meaning that another writeback will be attempted in the near future.
> > > THis means that XFS is still going to have to allocate space for it
> > > during writeback, and that means it still needs to have a delayed
> > > allocation reservation and extent backing the dirty folio.
> > > 
> > 
> > Hmm.. I don't think that is correct. It looks like the previous patch
> > removes the invalidation, but writeback clears the dirty bit before
> > calling into the fs and we're not doing anything to redirty the folio,
> > so there's no guarantee of subsequent writeback.
> 
> Ah, right, I got confused with iomap_do_writepage() which redirties
> folios it performs no action on. The case that is being tripped here
> is "count == 0" which means no action has actually been taken on the
> folio and it is not submitted for writeback. We don't mark the folio
> with an error on submission failure like we do for errors reported
> to IO completion, so the folio is just left in it's current state
> in the cache.
> 
> > Regardless, I can see how this prevents this sort of error in the
> > scenario where writeback fails due to corruption, but I don't see how it
> > doesn't just break error handling of writeback failures not associated
> > with corruption.
> 
> What other cases in XFS do we have that cause mapping failure? We
> can't get ENOSPC here because of delalloc reservations. We can't get
> ENOMEM because all the memory allocations are blocking. That just
> leaves IO errors reading metadata, or structure corruption when
> parsing and modifying on-disk metadata.  I can't think (off the top
> of my head) of any other type of error we can get returned from
> allocation - what sort of non-corruption errors were you thinking
> of here?
> 
> > fails due to some random/transient error, delalloc is left around on a
> > !dirty page (i.e. stale), and reclaim eventually comes around and
> > results in the usual block accounting corruption associated with stale
> > delalloc blocks.
> 
> The first patches in the series fix those issues. If we get stray
> delalloc extents on a healthy inode, then it will still trigger all
> the warnings/asserts that we have now. But if the inode has been
> marked sick by a corruption based allocation failure, it will clean
> up in reclaim without leaking anything or throwing any new warnings.
> 

Those warnings/asserts that exist now indicate something is wrong and
that free space accounting is likely about to become corrupted, because
an otherwise clean inode is being reclaimed with stale delalloc blocks.

I see there's an error injection knob (XFS_ERRTAG_REDUCE_MAX_IEXTENTS)
tied to the max extent count checking stuff in the delalloc conversion
path. You should be able to add some (10+) extents to a file and then
turn that thing all the way up to induce a (delalloc conversion)
writeback failure and see exactly what I'm talking about [1].

Brian

[1] The following occurs with this patch, but not on mainline because the
purpose of ->discard_folio() is to prevent it.

(/mnt/file has 10+ preexisting extents beyond the 0-5k range)

# echo 1 > /sys/fs/xfs/vdb1/errortag/reduce_max_iextents
# xfs_io -fc "pwrite 0 5k" -c fsync /mnt/file
wrote 5120/5120 bytes at offset 0
5 KiB, 5 ops; 0.0000 sec (52.503 MiB/sec and 53763.4409 ops/sec)
fsync: File too large
# umount /mnt/
#
Message from syslogd@localhost at Feb 15 09:47:41 ...
 kernel:XFS: Assertion failed: 0, file: fs/xfs/xfs_icache.c, line: 1818

Message from syslogd@localhost at Feb 15 09:47:41 ...
 kernel:XFS: Assertion failed: xfs_is_shutdown(mp) || percpu_counter_sum(&mp->m_delalloc_blks) == 0, file: fs/xfs/xfs_super.c, line: 1068
#
# xfs_repair -n /dev/vdb1 
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_fdblocks 20960174, counted 20960186
...

> > This is easy enough to test/reproduce (just tried it
> > via error injection to delalloc conversion) that I'm kind of surprised
> > fstests doesn't uncover it. :/
> 
> > > Failure to retain the delalloc extent (because xfs_discard_folio()
> > > punched it out) means that the next writeback attempt does not find
> > > an extent over the range of the write in ->map_blocks(), and
> > > xfs_map_blocks() triggers a WARN_ON() because it should never land
> > > in a hole for a data fork writeback request. This looks like:
> > > 
> > 
> > I'm not sure this warning makes a lot of sense either given most of this
> > should occur around the folio lock. Looking back at the code and the
> > error report for this, the same error injection used above on a 5k write
> > to a bsize=1k fs actually shows the punch remove fsb offsets 0-5 on a
> > writeback failure, so it does appear to be punching too much out.  The
> > cause appears to be that the end offset is calculated in
> > xfs_discard_folio() by rounding up the start offset to 4k (folio size).
> > If pos == 0, this results in passing end_fsb == 0 to the punch code,
> > which xfs_iext_lookup_extent_before() then changes to fsb == 5 because
> > that's the last block of the delalloc extent that covers fsb 0.
> 
> And that is the bug I could not see in commit 7348b322332d ("xfs:
> xfs_bmap_punch_delalloc_range() should take a byte range") which is
> what this warning was bisected down to. Thank you for identifying
> the reason the bisect landed on that commit. Have you written a
> fix to test out you reasoning that you can post?
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>
Dave Chinner Feb. 15, 2023, 11:03 p.m. UTC | #6
On Wed, Feb 15, 2023 at 10:25:43AM -0500, Brian Foster wrote:
> On Wed, Feb 15, 2023 at 09:20:00AM +1100, Dave Chinner wrote:
> > On Tue, Feb 14, 2023 at 01:10:05PM -0500, Brian Foster wrote:
> > > On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> > > > From: Dave Chinner <dchinner@redhat.com>
> > > > 
> > > > Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> > > > after writeback errors") XFS and iomap have been retaining dirty
> > > > folios in memory after a writeback error. XFS no longer invalidates
> > > > the folio, and iomap no longer clears the folio uptodate state.
> > > > 
> > > > However, iomap is still been calling ->discard_folio on error, and
> > > > XFS is still punching the delayed allocation range backing the dirty
> > > > folio.
> > > > 
> > > > This is incorrect behaviour. The folio remains dirty and up to date,
> > > > meaning that another writeback will be attempted in the near future.
> > > > THis means that XFS is still going to have to allocate space for it
> > > > during writeback, and that means it still needs to have a delayed
> > > > allocation reservation and extent backing the dirty folio.
> > > > 
> > > 
> > > Hmm.. I don't think that is correct. It looks like the previous patch
> > > removes the invalidation, but writeback clears the dirty bit before
> > > calling into the fs and we're not doing anything to redirty the folio,
> > > so there's no guarantee of subsequent writeback.
> > 
> > Ah, right, I got confused with iomap_do_writepage() which redirties
> > folios it performs no action on. The case that is being tripped here
> > is "count == 0" which means no action has actually been taken on the
> > folio and it is not submitted for writeback. We don't mark the folio
> > with an error on submission failure like we do for errors reported
> > to IO completion, so the folio is just left in it's current state
> > in the cache.
> > 
> > > Regardless, I can see how this prevents this sort of error in the
> > > scenario where writeback fails due to corruption, but I don't see how it
> > > doesn't just break error handling of writeback failures not associated
> > > with corruption.
> > 
> > What other cases in XFS do we have that cause mapping failure? We
> > can't get ENOSPC here because of delalloc reservations. We can't get
> > ENOMEM because all the memory allocations are blocking. That just
> > leaves IO errors reading metadata, or structure corruption when
> > parsing and modifying on-disk metadata.  I can't think (off the top
> > of my head) of any other type of error we can get returned from
> > allocation - what sort of non-corruption errors were you thinking
> > of here?
> > 
> > > fails due to some random/transient error, delalloc is left around on a
> > > !dirty page (i.e. stale), and reclaim eventually comes around and
> > > results in the usual block accounting corruption associated with stale
> > > delalloc blocks.
> > 
> > The first patches in the series fix those issues. If we get stray
> > delalloc extents on a healthy inode, then it will still trigger all
> > the warnings/asserts that we have now. But if the inode has been
> > marked sick by a corruption based allocation failure, it will clean
> > up in reclaim without leaking anything or throwing any new warnings.
> > 
> 
> Those warnings/asserts that exist now indicate something is wrong and
> that free space accounting is likely about to become corrupted, because
> an otherwise clean inode is being reclaimed with stale delalloc blocks.

Well, yes.

> I see there's an error injection knob (XFS_ERRTAG_REDUCE_MAX_IEXTENTS)
> tied to the max extent count checking stuff in the delalloc conversion
> path. You should be able to add some (10+) extents to a file and then
> turn that thing all the way up to induce a (delalloc conversion)
> writeback failure and see exactly what I'm talking about [1].
> 
> Brian
> 
> [1] The following occurs with this patch, but not on mainline because the
> purpose of ->discard_folio() is to prevent it.

A non-corruption related writeback error has resulted in those debug
checks triggering correctly. This demonstrates the debug checks are
still working as intended. :)

Hence this isn't an argument against removing ->discard_folio(); it is
merely a demonstration that the current patch series needs more work.

Indeed, if the folio gets redirtied here instead of left clean as
we've already talked about, a future writeback may, in fact, succeed
and this specific problem goes away. We know how this retry
mechanism works - it's exactly what we do with metadata write
failures. Further, changing the behaviour of failure handling here
is exactly what we have the configurable error handling
infrastructure for. It's also why the "fail on unmount"
functionality exists, too.

That is, if we get to the point that "fail on unmount" triggers for
metadata we cannot write back due to persistent errors, we should
also perform the same trigger for data we cannot write back due to
persistent writeback allocation failures. In which case, any
allocation error should mark the inode sick and the unconverted
delalloc extents get cleaned up correctly by the final inode reclaim
pass.
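
The shape of that would be roughly (a sketch of the idea only, not the
actual hunks from the earlier patches in this series):

	error = xfs_bmapi_convert_delalloc(ip, whichfork, offset, iomap, &seq);
	if (error) {
		/*
		 * Persistent allocation failure during writeback: record it
		 * in the inode health state so final inode reclaim knows the
		 * leftover delalloc extents are expected and cleans them up
		 * without tripping the usual asserts.
		 */
		xfs_inode_mark_sick(ip, XFS_SICK_INO_BMBTD);
		return error;
	}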

Cheers,

Dave.

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 356193e44cf0..502fa2d41097 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1635,19 +1635,9 @@  iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * completion to mark the error state of the pages under writeback
 	 * appropriately.
 	 */
-	if (unlikely(error)) {
-		/*
-		 * Let the filesystem know what portion of the current page
-		 * failed to map. If the page hasn't been added to ioend, it
-		 * won't be affected by I/O completion and we must unlock it
-		 * now.
-		 */
-		if (wpc->ops->discard_folio)
-			wpc->ops->discard_folio(folio, pos);
-		if (!count) {
-			folio_unlock(folio);
-			goto done;
-		}
+	if (unlikely(error && !count)) {
+		folio_unlock(folio);
+		goto done;
 	}
 
 	folio_start_writeback(folio);
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 41734202796f..3f0dae5ca9c2 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -448,44 +448,9 @@  xfs_prepare_ioend(
 	return status;
 }
 
-/*
- * If the page has delalloc blocks on it, we need to punch them out before we
- * invalidate the page.  If we don't, we leave a stale delalloc mapping on the
- * inode that can trip up a later direct I/O read operation on the same region.
- *
- * We prevent this by truncating away the delalloc regions on the page.  Because
- * they are delalloc, we can do this without needing a transaction. Indeed - if
- * we get ENOSPC errors, we have to be able to do this truncation without a
- * transaction as there is no space left for block reservation (typically why we
- * see a ENOSPC in writeback).
- */
-static void
-xfs_discard_folio(
-	struct folio		*folio,
-	loff_t			pos)
-{
-	struct xfs_inode	*ip = XFS_I(folio->mapping->host);
-	struct xfs_mount	*mp = ip->i_mount;
-	int			error;
-
-	if (xfs_is_shutdown(mp))
-		return;
-
-	xfs_alert_ratelimited(mp,
-		"page discard on page "PTR_FMT", inode 0x%llx, pos %llu.",
-			folio, ip->i_ino, pos);
-
-	error = xfs_bmap_punch_delalloc_range(ip, pos,
-			round_up(pos, folio_size(folio)));
-
-	if (error && !xfs_is_shutdown(mp))
-		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
-}
-
 static const struct iomap_writeback_ops xfs_writeback_ops = {
 	.map_blocks		= xfs_map_blocks,
 	.prepare_ioend		= xfs_prepare_ioend,
-	.discard_folio		= xfs_discard_folio,
 };
 
 STATIC int
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 0983dfc9a203..681e26a86791 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -310,12 +310,6 @@  struct iomap_writeback_ops {
 	 * conversions.
 	 */
 	int (*prepare_ioend)(struct iomap_ioend *ioend, int status);
-
-	/*
-	 * Optional, allows the file system to discard state on a page where
-	 * we failed to submit any I/O.
-	 */
-	void (*discard_folio)(struct folio *folio, loff_t pos);
 };
 
 struct iomap_writepage_ctx {