
[6/9] xfs: xfs_bmap_punch_delalloc_range() should take a byte range

Message ID 20221115013043.360610-7-david@fromorbit.com (mailing list archive)
State New
Series xfs, iomap: fix data corruption due to stale cached iomaps

Commit Message

Dave Chinner Nov. 15, 2022, 1:30 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

All the callers of xfs_bmap_punch_delalloc_range() jump through
hoops to convert a byte range to filesystem blocks before calling
xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
xfs_bmap_punch_delalloc_range() and have it do the conversion to
filesystem blocks internally.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_aops.c      | 16 ++++++----------
 fs/xfs/xfs_bmap_util.c | 10 ++++++----
 fs/xfs/xfs_bmap_util.h |  2 +-
 fs/xfs/xfs_iomap.c     | 21 ++++-----------------
 4 files changed, 17 insertions(+), 32 deletions(-)

Comments

Christoph Hellwig Nov. 15, 2022, 8:44 a.m. UTC | #1
On Tue, Nov 15, 2022 at 12:30:40PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> All the callers of xfs_bmap_punch_delalloc_range() jump through
> hoops to convert a byte range to filesystem blocks before calling
> xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
> xfs_bmap_punch_delalloc_range() and have it do the conversion to
> filesystem blocks internally.

Ok, so we do this here.  Why can't we just do this earlier and
avoid the strange in-between stage with a wrapper?
Darrick J. Wong Nov. 15, 2022, 11:48 p.m. UTC | #2
On Tue, Nov 15, 2022 at 12:44:46AM -0800, Christoph Hellwig wrote:
> On Tue, Nov 15, 2022 at 12:30:40PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > All the callers of xfs_bmap_punch_delalloc_range() jump through
> > hoops to convert a byte range to filesystem blocks before calling
> > xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
> > xfs_bmap_punch_delalloc_range() and have it do the conversion to
> > filesystem blocks internally.
> 
> Ok, so we do this here.  Why can't we just do this earlier and
> avoid the strange in-between stage with a wrapper?

Probably to avoid rewriting the previous patch with this units change,
is my guess.  Dave, do you want to merge this with patch 4?  I'm
satisfied enough with patches 4 and 6 that I'd rather get this out to
for-next for further testing than play more patch golf.

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D
Dave Chinner Nov. 16, 2022, 12:57 a.m. UTC | #3
On Tue, Nov 15, 2022 at 03:48:33PM -0800, Darrick J. Wong wrote:
> On Tue, Nov 15, 2022 at 12:44:46AM -0800, Christoph Hellwig wrote:
> > On Tue, Nov 15, 2022 at 12:30:40PM +1100, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > All the callers of xfs_bmap_punch_delalloc_range() jump through
> > > hoops to convert a byte range to filesystem blocks before calling
> > > xfs_bmap_punch_delalloc_range(). Instead, pass the byte range to
> > > xfs_bmap_punch_delalloc_range() and have it do the conversion to
> > > filesystem blocks internally.
> > 
> > Ok, so we do this here.  Why can't we just do this earlier and
> > avoid the strange in-between stage with a wrapper?
> 
> Probably to avoid rewriting the previous patch with this units change,
> is my guess.  Dave, do you want to merge this with patch 4?  I'm
> satisfied enough with patches 4 and 6 that I'd rather get this out to
> for-next for further testing than play more patch golf.

The fact that Christoph NAK'd exporting mapping_seek_hole_data()
this time around is just Plain Fucking Annoying. He didn't say
anything in the last thread about it after I explained why I used
it, so I had assumed it was all OK.

I've already been playing patch golf all morning now to rearrange
all the deck chairs to avoid exporting mapping_seek_hole_data().
Instead we now have an exported iomap function that wraps
mapping_seek_hole_data, and the wrapper function I added in patch 4
is now the callback function that is passed 3 layers deep into the
iomap code.

Hence the xfs_buffered_write_delalloc_punch() function needs to
remain - we can't elide it entirely as this patch does - because we
now need a callback function that we can provide to the iomap code,
so that we avoid coupling internal XFS implementation functions to
external VFS APIs.

-Dave.
Christoph Hellwig Nov. 16, 2022, 5:46 a.m. UTC | #4
On Wed, Nov 16, 2022 at 11:57:23AM +1100, Dave Chinner wrote:
> The fact that Christoph NAK'd exporting mapping_seek_hole_data()
> this time around is just Plain Fucking Annoying.

Thank you so much, I love you too.

> He didn't say
> anything in the last thread about it after I explained why I used
> it, so I had assumed it was all OK.

Quote from the last thread:

"So, the whole scan for delalloc logic seems pretty generic, I think
it can and should be lifted to iomap, with
xfs_buffered_write_delalloc_punch provided as a callback."

> I've already been playing patch golf all morning now to rearrange
> all the deck chairs to avoid exporting mapping_seek_hole_data().
> Instead we now have an exported iomap function that wraps
> mapping_seek_hole_data, and the wrapper function I added in patch 4
> is now the callback function that is passed 3 layers deep into the
> iomap code.
> 
> Hence the xfs_buffered_write_delalloc_punch() function needs to
> remain - we can't elide it entire like this patch does - because now
> we need a callback function that we can provide to the iomap code so
> we avoid coupling internal XFS implementation functions to external
> VFS APIs.

Which was roughly the intention all along.

Patch

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 5d1a995b15f8..6aadc5815068 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -114,9 +114,8 @@  xfs_end_ioend(
 	if (unlikely(error)) {
 		if (ioend->io_flags & IOMAP_F_SHARED) {
 			xfs_reflink_cancel_cow_range(ip, offset, size, true);
-			xfs_bmap_punch_delalloc_range(ip,
-						      XFS_B_TO_FSBT(mp, offset),
-						      XFS_B_TO_FSB(mp, size));
+			xfs_bmap_punch_delalloc_range(ip, offset,
+					offset + size);
 		}
 		goto done;
 	}
@@ -455,12 +454,8 @@  xfs_discard_folio(
 	struct folio		*folio,
 	loff_t			pos)
 {
-	struct inode		*inode = folio->mapping->host;
-	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_inode	*ip = XFS_I(folio->mapping->host);
 	struct xfs_mount	*mp = ip->i_mount;
-	size_t			offset = offset_in_folio(folio, pos);
-	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, pos);
-	xfs_fileoff_t		pageoff_fsb = XFS_B_TO_FSBT(mp, offset);
 	int			error;
 
 	if (xfs_is_shutdown(mp))
@@ -470,8 +465,9 @@  xfs_discard_folio(
 		"page discard on page "PTR_FMT", inode 0x%llx, pos %llu.",
 			folio, ip->i_ino, pos);
 
-	error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
-			i_blocks_per_folio(inode, folio) - pageoff_fsb);
+	error = xfs_bmap_punch_delalloc_range(ip, pos,
+			round_up(pos, folio_size(folio)));
+
 	if (error && !xfs_is_shutdown(mp))
 		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
 }
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 04d0c2bff67c..867645b74d88 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -590,11 +590,13 @@  xfs_getbmap(
 int
 xfs_bmap_punch_delalloc_range(
 	struct xfs_inode	*ip,
-	xfs_fileoff_t		start_fsb,
-	xfs_fileoff_t		length)
+	xfs_off_t		start_byte,
+	xfs_off_t		end_byte)
 {
+	struct xfs_mount	*mp = ip->i_mount;
 	struct xfs_ifork	*ifp = &ip->i_df;
-	xfs_fileoff_t		end_fsb = start_fsb + length;
+	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, start_byte);
+	xfs_fileoff_t		end_fsb = XFS_B_TO_FSB(mp, end_byte);
 	struct xfs_bmbt_irec	got, del;
 	struct xfs_iext_cursor	icur;
 	int			error = 0;
@@ -607,7 +609,7 @@  xfs_bmap_punch_delalloc_range(
 
 	while (got.br_startoff + got.br_blockcount > start_fsb) {
 		del = got;
-		xfs_trim_extent(&del, start_fsb, length);
+		xfs_trim_extent(&del, start_fsb, end_fsb - start_fsb);
 
 		/*
 		 * A delete can push the cursor forward. Step back to the
diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h
index 24b37d211f1d..6888078f5c31 100644
--- a/fs/xfs/xfs_bmap_util.h
+++ b/fs/xfs/xfs_bmap_util.h
@@ -31,7 +31,7 @@  xfs_bmap_rtalloc(struct xfs_bmalloca *ap)
 #endif /* CONFIG_XFS_RT */
 
 int	xfs_bmap_punch_delalloc_range(struct xfs_inode *ip,
-		xfs_fileoff_t start_fsb, xfs_fileoff_t length);
+		xfs_off_t start_byte, xfs_off_t end_byte);
 
 struct kgetbmap {
 	__s64		bmv_offset;	/* file offset of segment in blocks */
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 2d48fcc7bd6f..04da22943e7c 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1120,20 +1120,6 @@  xfs_buffered_write_iomap_begin(
 	return error;
 }
 
-static int
-xfs_buffered_write_delalloc_punch(
-	struct inode		*inode,
-	loff_t			start_byte,
-	loff_t			end_byte)
-{
-	struct xfs_mount	*mp = XFS_M(inode->i_sb);
-	xfs_fileoff_t		start_fsb = XFS_B_TO_FSBT(mp, start_byte);
-	xfs_fileoff_t		end_fsb = XFS_B_TO_FSB(mp, end_byte);
-
-	return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb,
-				end_fsb - start_fsb);
-}
-
 /*
  * Scan the data range passed to us for dirty page cache folios. If we find a
  * dirty folio, punch out the preceeding range and update the offset from which
@@ -1172,8 +1158,9 @@  xfs_buffered_write_delalloc_scan(
 			if (offset > *punch_start_byte) {
 				int	error;
 
-				error = xfs_buffered_write_delalloc_punch(inode,
-						*punch_start_byte, offset);
+				error = xfs_bmap_punch_delalloc_range(
+						XFS_I(inode), *punch_start_byte,
+						offset);
 				if (error) {
 					folio_unlock(folio);
 					folio_put(folio);
@@ -1267,7 +1254,7 @@  xfs_buffered_write_delalloc_release(
 	}
 
 	if (punch_start_byte < end_byte)
-		error = xfs_buffered_write_delalloc_punch(inode,
+		error = xfs_bmap_punch_delalloc_range(XFS_I(inode),
 				punch_start_byte, end_byte);
 out_unlock:
 	filemap_invalidate_unlock(inode->i_mapping);