[1/2] xfs: force writes to delalloc regions to unwritten

Message ID 157915535059.2406747.264640456606868955.stgit@magnolia (mailing list archive)
State Superseded
Series xfs: fix stale disk exposure after crash

Commit Message

Darrick J. Wong Jan. 16, 2020, 6:15 a.m. UTC
From: Darrick J. Wong <darrick.wong@oracle.com>

When writing to a delalloc region in the data fork, commit the new
allocations (of the da reservation) as unwritten so that the mappings
are only marked written once writeback completes successfully.  This
fixes the problem of stale data exposure if the system goes down during
targeted writeback of a specific region of a file, as tested by
generic/042.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)
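
The ordering the patch relies on can be modeled outside the kernel. Below is a schematic C sketch, not XFS code: the states and helpers are simplified stand-ins for the real extent machinery. Before this patch, data fork delalloc conversions went straight to the written state, so a crash between block allocation and I/O completion could expose whatever previously sat in those disk blocks.

/*
 * Schematic model of the crash-safety ordering: a block only reads
 * back as file data once the writeback I/O is known to have finished.
 */
#include <stdbool.h>
#include <stdio.h>

enum ext_state { EXT_DELALLOC, EXT_UNWRITTEN, EXT_WRITTEN };

struct extent {
	enum ext_state	state;
	bool		data_on_disk;	/* writeback I/O completed? */
};

/* Writeback allocates real blocks but (with this patch) marks them
 * unwritten, so reads return zeroes rather than old disk contents. */
static void allocate_for_writeback(struct extent *ex)
{
	ex->state = EXT_UNWRITTEN;
}

/* I/O completion converts unwritten -> written. */
static void writeback_complete(struct extent *ex)
{
	ex->data_on_disk = true;
	ex->state = EXT_WRITTEN;
}

static const char *read_back(const struct extent *ex)
{
	if (ex->state != EXT_WRITTEN)
		return "zeroes";
	return ex->data_on_disk ? "file data" : "STALE DISK CONTENTS";
}

int main(void)
{
	struct extent ex = { EXT_DELALLOC, false };

	allocate_for_writeback(&ex);
	printf("crash after allocation: reads see %s\n", read_back(&ex));
	writeback_complete(&ex);
	printf("after I/O completion:   reads see %s\n", read_back(&ex));
	return 0;
}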

Comments

Christoph Hellwig Jan. 16, 2020, 4:47 p.m. UTC | #1
On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> When writing to a delalloc region in the data fork, commit the new
> allocations (of the da reservation) as unwritten so that the mappings
> are only marked written once writeback completes successfully.  This
> fixes the problem of stale data exposure if the system goes down during
> targeted writeback of a specific region of a file, as tested by
> generic/042.

I think this is the only safe way to deal with buffered I/O into
holes, so:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Darrick J. Wong Jan. 16, 2020, 11:16 p.m. UTC | #2
On Thu, Jan 16, 2020 at 08:47:41AM -0800, Christoph Hellwig wrote:
> On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > When writing to a delalloc region in the data fork, commit the new
> > allocations (of the da reservation) as unwritten so that the mappings
> > are only marked written once writeback completes successfully.  This
> > fixes the problem of stale data exposure if the system goes down during
> > targeted writeback of a specific region of a file, as tested by
> > generic/042.
> 
> I think this is the only safe way to deal with buffered I/O into
> holes, so:

Ditto.  Thanks for reviewing things!

--D

> Reviewed-by: Christoph Hellwig <hch@lst.de>
Dave Chinner Jan. 19, 2020, 8:49 p.m. UTC | #3
On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> When writing to a delalloc region in the data fork, commit the new
> allocations (of the da reservation) as unwritten so that the mappings
> are only marked written once writeback completes successfully.  This
> fixes the problem of stale data exposure if the system goes down during
> targeted writeback of a specific region of a file, as tested by
> generic/042.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
>  1 file changed, 17 insertions(+), 11 deletions(-)
> 
> 
> diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> index 4544732d09a5..220ea1dc67ab 100644
> --- a/fs/xfs/libxfs/xfs_bmap.c
> +++ b/fs/xfs/libxfs/xfs_bmap.c
> @@ -4190,17 +4190,7 @@ xfs_bmapi_allocate(
>  	bma->got.br_blockcount = bma->length;
>  	bma->got.br_state = XFS_EXT_NORM;
>  
> -	/*
> -	 * In the data fork, a wasdelay extent has been initialized, so
> -	 * shouldn't be flagged as unwritten.
> -	 *
> -	 * For the cow fork, however, we convert delalloc reservations
> -	 * (extents allocated for speculative preallocation) to
> -	 * allocated unwritten extents, and only convert the unwritten
> -	 * extents to real extents when we're about to write the data.
> -	 */
> -	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
> -	    (bma->flags & XFS_BMAPI_PREALLOC))
> +	if (bma->flags & XFS_BMAPI_PREALLOC)
>  		bma->got.br_state = XFS_EXT_UNWRITTEN;
>  
>  	if (bma->wasdel)
> @@ -4608,8 +4598,24 @@ xfs_bmapi_convert_delalloc(
>  	bma.offset = bma.got.br_startoff;
>  	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
>  	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
> +
> +	/*
> +	 * When we're converting the delalloc reservations backing dirty pages
> +	 * in the page cache, we must be careful about how we create the new
> +	 * extents:
> +	 *
> +	 * New CoW fork extents are created unwritten, turned into real extents
> +	 * when we're about to write the data to disk, and mapped into the data
> +	 * fork after the write finishes.  End of story.
> +	 *
> +	 * New data fork extents must be mapped in as unwritten and converted
> +	 * to real extents after the write succeeds to avoid exposing stale
> +	 * disk contents if we crash.
> +	 */
>  	if (whichfork == XFS_COW_FORK)
>  		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
> +	else
> +		bma.flags = XFS_BMAPI_PREALLOC;

	bma.flags = XFS_BMAPI_PREALLOC;
	if (whichfork == XFS_COW_FORK)
		bma.flags |= XFS_BMAPI_COWFORK;

However, I'm still not convinced that this is the right/best
solution to the problem. It is the easiest, yes, but the down side
on fast/high iops storage and/or under low memory conditions has
potential to be extremely significant.

I suspect that heavy users of buffered O_DSYNC writes into sparse
files are going to notice this the most - there are databases out
there that work this way. And I suspect that most of the workloads
that use buffered O_DSYNC IO heavily won't see this change for years
as enterprise upgrade cycles are notoriously slow.

IOWs, all I see this change doing is kicking the can down the road
and guaranteeing that we'll still have to solve this stale data
exposure problem more efficiently in the future. And instead of
doing it now when we have the time and freedom to do the work, it
will have to be done urgently under high priority escalation
pressures...

Cheers,

Dave.
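
For concreteness, the buffered O_DSYNC pattern Dave describes looks roughly like the userspace sketch below; the file name, range, and iteration count are illustrative. Each pwrite() into a hole must allocate an extent, write the data, and (with this patch) convert the extent from unwritten to written before the call can return.

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("sparse.dat", O_CREAT | O_WRONLY | O_DSYNC, 0644);

	if (fd < 0)
		return 1;
	memset(buf, 'x', sizeof(buf));

	/* Scattershot synchronous 4k writes into a 1GiB sparse range. */
	for (int i = 0; i < 1024; i++) {
		off_t off = ((off_t)rand() % (1 << 18)) * 4096;

		if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
			return 1;
	}
	return close(fd) ? 1 : 0;
}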
Darrick J. Wong Feb. 3, 2020, 8:14 p.m. UTC | #4
On Mon, Jan 20, 2020 at 07:49:25AM +1100, Dave Chinner wrote:
> On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > When writing to a delalloc region in the data fork, commit the new
> > allocations (of the da reservation) as unwritten so that the mappings
> > are only marked written once writeback completes successfully.  This
> > fixes the problem of stale data exposure if the system goes down during
> > targeted writeback of a specific region of a file, as tested by
> > generic/042.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
> >  1 file changed, 17 insertions(+), 11 deletions(-)
> > 
> > 
> > diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> > index 4544732d09a5..220ea1dc67ab 100644
> > --- a/fs/xfs/libxfs/xfs_bmap.c
> > +++ b/fs/xfs/libxfs/xfs_bmap.c
> > @@ -4190,17 +4190,7 @@ xfs_bmapi_allocate(
> >  	bma->got.br_blockcount = bma->length;
> >  	bma->got.br_state = XFS_EXT_NORM;
> >  
> > -	/*
> > -	 * In the data fork, a wasdelay extent has been initialized, so
> > -	 * shouldn't be flagged as unwritten.
> > -	 *
> > -	 * For the cow fork, however, we convert delalloc reservations
> > -	 * (extents allocated for speculative preallocation) to
> > -	 * allocated unwritten extents, and only convert the unwritten
> > -	 * extents to real extents when we're about to write the data.
> > -	 */
> > -	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
> > -	    (bma->flags & XFS_BMAPI_PREALLOC))
> > +	if (bma->flags & XFS_BMAPI_PREALLOC)
> >  		bma->got.br_state = XFS_EXT_UNWRITTEN;
> >  
> >  	if (bma->wasdel)
> > @@ -4608,8 +4598,24 @@ xfs_bmapi_convert_delalloc(
> >  	bma.offset = bma.got.br_startoff;
> >  	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
> >  	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
> > +
> > +	/*
> > +	 * When we're converting the delalloc reservations backing dirty pages
> > +	 * in the page cache, we must be careful about how we create the new
> > +	 * extents:
> > +	 *
> > +	 * New CoW fork extents are created unwritten, turned into real extents
> > +	 * when we're about to write the data to disk, and mapped into the data
> > +	 * fork after the write finishes.  End of story.
> > +	 *
> > +	 * New data fork extents must be mapped in as unwritten and converted
> > +	 * to real extents after the write succeeds to avoid exposing stale
> > +	 * disk contents if we crash.
> > +	 */
> >  	if (whichfork == XFS_COW_FORK)
> >  		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
> > +	else
> > +		bma.flags = XFS_BMAPI_PREALLOC;
> 
> 	bma.flags = XFS_BMAPI_PREALLOC;
> 	if (whichfork == XFS_COW_FORK)
> 		bma.flags |= XFS_BMAPI_COWFORK;
> 
> However, I'm still not convinced that this is the right/best
> solution to the problem. It is the easiest, yes, but the down side
> on fast/high iops storage and/or under low memory conditions has
> potential to be extremely significant.
> 
> I suspect that heavy users of buffered O_DSYNC writes into sparse
> files are going to notice this the most - there are databases out
> there that work this way. And I suspect that most of the workloads
> that use buffered O_DSYNC IO heavily won't see this change for years
> as enterprise upgrade cycles are notoriously slow.
> 
> IOWs, all I see this change doing is kicking the can down the road
> and guaranteeing that we'll still have to solve this stale data
> exposure problem more efficiently in the future. And instead of
> doing it now when we have the time and freedom to do the work, it
> will have to be done urgently under high priority escalation
> pressures...

FWIW I'm *already* under urgent high priority GA blocker escalation
pressure, which is why this came up again.

Granted it did take 12 days of losing the battle with the distro folks
that this really isn't a release blocker (but teh sekuritehs!!) but...oh
right, I forgot that xfs actually /does/ crash more than once per day in
our environment.

I guess *we* will find out how much performance really disappears if you
do it this way. :P

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Brian Foster May 7, 2020, 10:32 a.m. UTC | #5
On Mon, Feb 03, 2020 at 12:14:45PM -0800, Darrick J. Wong wrote:
> On Mon, Jan 20, 2020 at 07:49:25AM +1100, Dave Chinner wrote:
> > On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <darrick.wong@oracle.com>
> > > 
> > > When writing to a delalloc region in the data fork, commit the new
> > > allocations (of the da reservation) as unwritten so that the mappings
> > > are only marked written once writeback completes successfully.  This
> > > fixes the problem of stale data exposure if the system goes down during
> > > targeted writeback of a specific region of a file, as tested by
> > > generic/042.
> > > 
> > > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > ---
> > >  fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
> > >  1 file changed, 17 insertions(+), 11 deletions(-)
> > > 
> > > 
> > > diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> > > index 4544732d09a5..220ea1dc67ab 100644
> > > --- a/fs/xfs/libxfs/xfs_bmap.c
> > > +++ b/fs/xfs/libxfs/xfs_bmap.c
> > > @@ -4190,17 +4190,7 @@ xfs_bmapi_allocate(
> > >  	bma->got.br_blockcount = bma->length;
> > >  	bma->got.br_state = XFS_EXT_NORM;
> > >  
> > > -	/*
> > > -	 * In the data fork, a wasdelay extent has been initialized, so
> > > -	 * shouldn't be flagged as unwritten.
> > > -	 *
> > > -	 * For the cow fork, however, we convert delalloc reservations
> > > -	 * (extents allocated for speculative preallocation) to
> > > -	 * allocated unwritten extents, and only convert the unwritten
> > > -	 * extents to real extents when we're about to write the data.
> > > -	 */
> > > -	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
> > > -	    (bma->flags & XFS_BMAPI_PREALLOC))
> > > +	if (bma->flags & XFS_BMAPI_PREALLOC)
> > >  		bma->got.br_state = XFS_EXT_UNWRITTEN;
> > >  
> > >  	if (bma->wasdel)
> > > @@ -4608,8 +4598,24 @@ xfs_bmapi_convert_delalloc(
> > >  	bma.offset = bma.got.br_startoff;
> > >  	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
> > >  	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
> > > +
> > > +	/*
> > > +	 * When we're converting the delalloc reservations backing dirty pages
> > > +	 * in the page cache, we must be careful about how we create the new
> > > +	 * extents:
> > > +	 *
> > > +	 * New CoW fork extents are created unwritten, turned into real extents
> > > +	 * when we're about to write the data to disk, and mapped into the data
> > > +	 * fork after the write finishes.  End of story.
> > > +	 *
> > > +	 * New data fork extents must be mapped in as unwritten and converted
> > > +	 * to real extents after the write succeeds to avoid exposing stale
> > > +	 * disk contents if we crash.
> > > +	 */
> > >  	if (whichfork == XFS_COW_FORK)
> > >  		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
> > > +	else
> > > +		bma.flags = XFS_BMAPI_PREALLOC;
> > 
> > 	bma.flags = XFS_BMAPI_PREALLOC;
> > 	if (whichfork == XFS_COW_FORK)
> > 		bma.flags |= XFS_BMAPI_COWFORK;
> > 
> > However, I'm still not convinced that this is the right/best
> > solution to the problem. It is the easiest, yes, but the down side
> > on fast/high iops storage and/or under low memory conditions has
> > potential to be extremely significant.
> > 
> > I suspect that heavy users of buffered O_DSYNC writes into sparse
> > files are going to notice this the most - there are databases out
> > there that work this way. And I suspect that most of the workloads
> > that use buffered O_DSYNC IO heavily won't see this change for years
> > as enterprise upgrade cycles are notoriously slow.
> > 
> > IOWs, all I see this change doing is kicking the can down the road
> > and guaranteeing that we'll still have to solve this stale data
> > exposure problem more efficiently in the future. And instead of
> > doing it now when we have the time and freedom to do the work, it
> > will have to be done urgently under high priority escalation
> > pressures...
> 
> FWIW I'm *already* under urgent high priority GA blocker escalation
> pressure, which is why this came up again.
> 
> Granted it did take 12 days of losing the battle with the distro folks
> that this really isn't a release blocker (but teh sekuritehs!!) but...oh
> right, I forgot that xfs actually /does/ crash more than once per day in
> our environment.
> 
> I guess *we* will find out how much performance really disappears if you
> do it this way. :P
> 

Sorry for resurrecting an old thread here, but I was thinking about this
problem a bit and realized I didn't have a great handle on the concerns
with using unwritten extents for delalloc writeback. Dave calls out the
O_DSYNC buffered writes into sparse files case above. I don't see any
numbers posted here so I ran some quick tests using a large ramdisk to
get low latency I/O.

I only seem to require a couple threads to max out single file, random
4k dsync buffered write iops in this particular setup. I see ~30.6k iops
from a baseline 5.7.0-rc1 kernel and that drops to ~25.7k iops when
using unwritten extents for delalloc conversion. However, note that the
same workload through single threaded aio+dio (qd 32) runs at ~63.7k
iops. That's already using unwritten extents for dio so it's unaffected
by this patch. Also note that using a 10MB extent size hint puts the
dsync buffered write case at ~27k iops (again for both kernels because
we're already using unwritten extents in that case as well).

For reference, full file preallocation (i.e. no allocs, unwritten
extents) runs at ~27k iops for the buffered write case and ~87k iops for
aio+dio. The overwrite (no unwritten, no alloc) case gets to ~250k iops
with the same couple dsync buffered write threads and close to 300k iops
with single threaded aio+dio (which I think is maxing out my memory
bandwidth).

Altogether, this has me wondering whether it's really worth the
complexity of trying to avoid the overhead of unwritten extents for
delalloc conversion. There is a noticeable hit, but it's an already slow
path compared to async I/O mechanisms. Further, it's a workload that
typically comes with a recommendation to use extent size hints to avoid
fragmentation issues and minimize allocation overhead, and that feature
already bypasses delalloc extents in favor of unwritten extents.
Thoughts? Suggestions for other tests?

Brian

> --D
> 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
>
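
Brian mentions extent size hints as the usual mitigation for this workload. A sketch of how an application opts in on XFS via the FS_IOC_FSSETXATTR ioctl follows; the file name is illustrative, the 10MB value mirrors the test above, and the hint must be applied before the file gains any extents.

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	struct fsxattr fsx;
	int fd = open("testfile", O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0)
		return 1;

	fsx.fsx_xflags |= FS_XFLAG_EXTSIZE;	/* enable the hint */
	fsx.fsx_extsize = 10 * 1024 * 1024;	/* bytes; must be a multiple
						   of the fs block size */
	if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSSETXATTR");
		return 1;
	}
	return close(fd) ? 1 : 0;
}

Allocations beyond this point round out to 10MB unwritten extents, which is why the hinted case already pays the unwritten-conversion cost on both kernels in Brian's numbers.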
Darrick J. Wong May 14, 2020, 4:33 p.m. UTC | #6
On Thu, May 07, 2020 at 06:32:32AM -0400, Brian Foster wrote:
> On Mon, Feb 03, 2020 at 12:14:45PM -0800, Darrick J. Wong wrote:
> > On Mon, Jan 20, 2020 at 07:49:25AM +1100, Dave Chinner wrote:
> > > On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> > > > From: Darrick J. Wong <darrick.wong@oracle.com>
> > > > 
> > > > When writing to a delalloc region in the data fork, commit the new
> > > > allocations (of the da reservation) as unwritten so that the mappings
> > > > are only marked written once writeback completes successfully.  This
> > > > fixes the problem of stale data exposure if the system goes down during
> > > > targeted writeback of a specific region of a file, as tested by
> > > > generic/042.
> > > > 
> > > > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > > ---
> > > >  fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
> > > >  1 file changed, 17 insertions(+), 11 deletions(-)
> > > > 
> > > > 
> > > > diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> > > > index 4544732d09a5..220ea1dc67ab 100644
> > > > --- a/fs/xfs/libxfs/xfs_bmap.c
> > > > +++ b/fs/xfs/libxfs/xfs_bmap.c
> > > > @@ -4190,17 +4190,7 @@ xfs_bmapi_allocate(
> > > >  	bma->got.br_blockcount = bma->length;
> > > >  	bma->got.br_state = XFS_EXT_NORM;
> > > >  
> > > > -	/*
> > > > -	 * In the data fork, a wasdelay extent has been initialized, so
> > > > -	 * shouldn't be flagged as unwritten.
> > > > -	 *
> > > > -	 * For the cow fork, however, we convert delalloc reservations
> > > > -	 * (extents allocated for speculative preallocation) to
> > > > -	 * allocated unwritten extents, and only convert the unwritten
> > > > -	 * extents to real extents when we're about to write the data.
> > > > -	 */
> > > > -	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
> > > > -	    (bma->flags & XFS_BMAPI_PREALLOC))
> > > > +	if (bma->flags & XFS_BMAPI_PREALLOC)
> > > >  		bma->got.br_state = XFS_EXT_UNWRITTEN;
> > > >  
> > > >  	if (bma->wasdel)
> > > > @@ -4608,8 +4598,24 @@ xfs_bmapi_convert_delalloc(
> > > >  	bma.offset = bma.got.br_startoff;
> > > >  	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
> > > >  	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
> > > > +
> > > > +	/*
> > > > +	 * When we're converting the delalloc reservations backing dirty pages
> > > > +	 * in the page cache, we must be careful about how we create the new
> > > > +	 * extents:
> > > > +	 *
> > > > +	 * New CoW fork extents are created unwritten, turned into real extents
> > > > +	 * when we're about to write the data to disk, and mapped into the data
> > > > +	 * fork after the write finishes.  End of story.
> > > > +	 *
> > > > +	 * New data fork extents must be mapped in as unwritten and converted
> > > > +	 * to real extents after the write succeeds to avoid exposing stale
> > > > +	 * disk contents if we crash.
> > > > +	 */
> > > >  	if (whichfork == XFS_COW_FORK)
> > > >  		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
> > > > +	else
> > > > +		bma.flags = XFS_BMAPI_PREALLOC;
> > > 
> > > 	bma.flags = XFS_BMAPI_PREALLOC;
> > > 	if (whichfork == XFS_COW_FORK)
> > > 		bma.flags |= XFS_BMAPI_COWFORK;
> > > 
> > > However, I'm still not convinced that this is the right/best
> > > solution to the problem. It is the easiest, yes, but the down side
> > > on fast/high iops storage and/or under low memory conditions has
> > > potential to be extremely significant.
> > > 
> > > I suspect that heavy users of buffered O_DSYNC writes into sparse
> > > files are going to notice this the most - there are databases out
> > > there that work this way. And I suspect that most of the workloads
> > > that use buffered O_DSYNC IO heavily won't see this change for years
> > > as enterprise upgrade cycles are notoriously slow.
> > > 
> > > IOWs, all I see this change doing is kicking the can down the road
> > > and guaranteeing that we'll still have to solve this stale data
> > > exposure problem more efficiently in the future. And instead of
> > > doing it now when we have the time and freedom to do the work, it
> > > will have to be done urgently under high priority escalation
> > > pressures...
> > 
> > FWIW I'm *already* under urgent high priority GA blocker escalation
> > pressure, which is why this came up again.
> > 
> > Granted it did take 12 days of losing the battle with the distro folks
> > that this really isn't a release blocker (but teh sekuritehs!!) but...oh
> > right, I forgot that xfs actually /does/ crash more than once per day in
> > our environment.
> > 
> > I guess *we* will find out how much performance really disappears if you
> > do it this way. :P
> > 
> 
> Sorry for resurrecting an old thread here, but I was thinking about this
> problem a bit and realized I didn't have a great handle on the concerns
> with using unwritten extents for delalloc writeback. Dave calls out the
> O_DSYNC buffered writes into sparse files case above. I don't see any
> numbers posted here so I ran some quick tests using a large ramdisk to
> get low latency I/O.
> 
> I only seem to require a couple threads to max out single file, random
> 4k dsync buffered write iops in this particular setup. I see ~30.6k iops
> from a baseline 5.7.0-rc1 kernel and that drops to ~25.7k iops when
> using unwritten extents for delalloc conversion. However, note that the
> same workload through single threaded aio+dio (qd 32) runs at ~63.7k
> iops. That's already using unwritten extents for dio so it's unaffected
> by this patch. Also note that using a 10MB extent size hint puts the
> dsync buffered write case at ~27k iops (again for both kernels because
> we're already using unwritten extents in that case as well).
> 
> For reference, full file preallocation (i.e. no allocs, unwritten
> extents) runs at ~27k iops for the buffered write case and ~87k iops for
> aio+dio. The overwrite (no unwritten, no alloc) case gets to ~250k iops
> with the same couple dsync buffered write threads and close to 300k iops
> with single threaded aio+dio (which I think is maxing out my memory
> bandwidth).
> 
> Altogether, this has me wondering whether it's really worth the
> complexity of trying to avoid the overhead of unwritten extents for
> delalloc conversion. There is a noticeable hit, but it's an already slow
> path compared to async I/O mechanisms. Further, it's a workload that
> typically comes with a recommendation to use extent size hints to avoid
> fragmentation issues and minimize allocation overhead, and that feature
> already bypasses delalloc extents in favor of unwritten extents.
> Thoughts? Suggestions for other tests?

4-5 months ago I ran more or less the same benchmark (albeit with
$someproduct) and came to the same conclusion -- if you're really doing
scattershot buffered O_DSYNC writes to a file, you'll lose about 15-20%
with this patch added.  Then apparently I ... got buried in xmas and
other bugs and forgot to send the results. :/

Granted, you had to /force/ $someproduct to do this because it would
typically do either synchronous aio+dio, or it could do async writes
with an fsync at the important parts, or it could set an extent hint,
or (the default) it writes zeroes ahead of time so that XFS will stay
out of the way when checkpoints need to get done asap.

I could say (glibly) that I'm so buried in bug triage that what's a few
more? but maybe the rest of you have other opinions? :)

--D

> 
> Brian
> 
> > --D
> > 
> > > Cheers,
> > > 
> > > Dave.
> > > -- 
> > > Dave Chinner
> > > david@fromorbit.com
> > 
>
Brian Foster May 14, 2020, 5:44 p.m. UTC | #7
On Thu, May 14, 2020 at 09:33:17AM -0700, Darrick J. Wong wrote:
> On Thu, May 07, 2020 at 06:32:32AM -0400, Brian Foster wrote:
> > On Mon, Feb 03, 2020 at 12:14:45PM -0800, Darrick J. Wong wrote:
> > > On Mon, Jan 20, 2020 at 07:49:25AM +1100, Dave Chinner wrote:
> > > > On Wed, Jan 15, 2020 at 10:15:50PM -0800, Darrick J. Wong wrote:
> > > > > From: Darrick J. Wong <darrick.wong@oracle.com>
> > > > > 
> > > > > When writing to a delalloc region in the data fork, commit the new
> > > > > allocations (of the da reservation) as unwritten so that the mappings
> > > > > are only marked written once writeback completes successfully.  This
> > > > > fixes the problem of stale data exposure if the system goes down during
> > > > > targeted writeback of a specific region of a file, as tested by
> > > > > generic/042.
> > > > > 
> > > > > Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> > > > > ---
> > > > >  fs/xfs/libxfs/xfs_bmap.c |   28 +++++++++++++++++-----------
> > > > >  1 file changed, 17 insertions(+), 11 deletions(-)
> > > > > 
> > > > > 
> > > > > diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
> > > > > index 4544732d09a5..220ea1dc67ab 100644
> > > > > --- a/fs/xfs/libxfs/xfs_bmap.c
> > > > > +++ b/fs/xfs/libxfs/xfs_bmap.c
> > > > > @@ -4190,17 +4190,7 @@ xfs_bmapi_allocate(
> > > > >  	bma->got.br_blockcount = bma->length;
> > > > >  	bma->got.br_state = XFS_EXT_NORM;
> > > > >  
> > > > > -	/*
> > > > > -	 * In the data fork, a wasdelay extent has been initialized, so
> > > > > -	 * shouldn't be flagged as unwritten.
> > > > > -	 *
> > > > > -	 * For the cow fork, however, we convert delalloc reservations
> > > > > -	 * (extents allocated for speculative preallocation) to
> > > > > -	 * allocated unwritten extents, and only convert the unwritten
> > > > > -	 * extents to real extents when we're about to write the data.
> > > > > -	 */
> > > > > -	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
> > > > > -	    (bma->flags & XFS_BMAPI_PREALLOC))
> > > > > +	if (bma->flags & XFS_BMAPI_PREALLOC)
> > > > >  		bma->got.br_state = XFS_EXT_UNWRITTEN;
> > > > >  
> > > > >  	if (bma->wasdel)
> > > > > @@ -4608,8 +4598,24 @@ xfs_bmapi_convert_delalloc(
> > > > >  	bma.offset = bma.got.br_startoff;
> > > > >  	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
> > > > >  	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
> > > > > +
> > > > > +	/*
> > > > > +	 * When we're converting the delalloc reservations backing dirty pages
> > > > > +	 * in the page cache, we must be careful about how we create the new
> > > > > +	 * extents:
> > > > > +	 *
> > > > > +	 * New CoW fork extents are created unwritten, turned into real extents
> > > > > +	 * when we're about to write the data to disk, and mapped into the data
> > > > > +	 * fork after the write finishes.  End of story.
> > > > > +	 *
> > > > > +	 * New data fork extents must be mapped in as unwritten and converted
> > > > > +	 * to real extents after the write succeeds to avoid exposing stale
> > > > > +	 * disk contents if we crash.
> > > > > +	 */
> > > > >  	if (whichfork == XFS_COW_FORK)
> > > > >  		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
> > > > > +	else
> > > > > +		bma.flags = XFS_BMAPI_PREALLOC;
> > > > 
> > > > 	bma.flags = XFS_BMAPI_PREALLOC;
> > > > 	if (whichfork == XFS_COW_FORK)
> > > > 		bma.flags |= XFS_BMAPI_COWFORK;
> > > > 
> > > > However, I'm still not convinced that this is the right/best
> > > > solution to the problem. It is the easiest, yes, but the down side
> > > > on fast/high iops storage and/or under low memory conditions has
> > > > potential to be extremely significant.
> > > > 
> > > > I suspect that heavy users of buffered O_DSYNC writes into sparse
> > > > files are going to notice this the most - there are databases out
> > > > there that work this way. And I suspect that most of the workloads
> > > > that use buffered O_DSYNC IO heavily won't see this change for years
> > > > as enterprise upgrade cycles are notoriously slow.
> > > > 
> > > > IOWs, all I see this change doing is kicking the can down the road
> > > > and guaranteeing that we'll still have to solve this stale data
> > > > exposure problem more efficiently in the future. And instead of
> > > > doing it now when we have the time and freedom to do the work, it
> > > > will have to be done urgently under high priority escalation
> > > > pressures...
> > > 
> > > FWIW I'm *already* under urgent high priority GA blocker escalation
> > > pressure, which is why this came up again.
> > > 
> > > Granted it did take 12 days of losing the battle with the distro folks
> > > that this really isn't a release blocker (but teh sekuritehs!!) but...oh
> > > right, I forgot that xfs actually /does/ crash more than once per day in
> > > our environment.
> > > 
> > > I guess *we* will find out how much performance really disappears if you
> > > do it this way. :P
> > > 
> > 
> > Sorry for resurrecting an old thread here, but I was thinking about this
> > problem a bit and realized I didn't have a great handle on the concerns
> > with using unwritten extents for delalloc writeback. Dave calls out the
> > O_DSYNC buffered writes into sparse files case above. I don't see any
> > numbers posted here so I ran some quick tests using a large ramdisk to
> > get low latency I/O.
> > 
> > I only seem to require a couple threads to max out single file, random
> > 4k dsync buffered write iops in this particular setup. I see ~30.6k iops
> > from a baseline 5.7.0-rc1 kernel and that drops to ~25.7k iops when
> > using unwritten extents for delalloc conversion. However, note that the
> > same workload through single threaded aio+dio (qd 32) runs at ~63.7k
> > iops. That's already using unwritten extents for dio so it's unaffected
> > by this patch. Also note that using a 10MB extent size hint puts the
> > dsync buffered write case at ~27k iops (again for both kernels because
> > we're already using unwritten extents in that case as well).
> > 
> > For reference, full file preallocation (i.e. no allocs, unwritten
> > extents) runs at ~27k iops for the buffered write case and ~87k iops for
> > aio+dio. The overwrite (no unwritten, no alloc) case gets to ~250k iops
> > with the same couple dsync buffered write threads and close to 300k iops
> > with single threaded aio+dio (which I think is maxing out my memory
> > bandwidth).
> > 
> > Altogether, this has me wondering whether it's really worth the
> > complexity of trying to avoid the overhead of unwritten extents for
> > delalloc conversion. There is a noticeable hit, but it's an already slow
> > path compared to async I/O mechanisms. Further, it's a workload that
> > typically comes with a recommendation to use extent size hints to avoid
> > fragmentation issues and minimize allocation overhead, and that feature
> > already bypasses delalloc extents in favor of unwritten extents.
> > Thoughts? Suggestions for other tests?
> 
> 4-5 months ago I ran more or less the same benchmark (albeit with
> $someproduct) and came to the same conclusion -- if you're really doing
> scattershot buffered O_DSYNC writes to a file, you'll lose about 15-20%
> with this patch added.  Then apparently I ... got buried in xmas and
> other bugs and forgot to send the results. :/
> 

Heh. :P Thanks for following up..

> Granted, you had to /force/ $someproduct to do this because it would
> typically do either synchronous aio+dio, or it could do async writes
> with an fsync at the important parts, or it could set an extent hint,
> or (the default) it writes zeroes ahead of time so that XFS will stay
> out of the way when checkpoints need to get done asap.
> 

Right, all of which already utilize unwritten extents except for the
explicit zeroing case.

> I could say (glibly) that I'm so buried in bug triage that what's a few
> more? but maybe the rest of you have other opinions? :)
> 

In dwelling on this a bit more since my previous reply, I also realized
that holding off this particular patch has kind of distorted the
problem. For example, I'd been trying to think of clever ways to prevent
stale data exposure on buffered writes, but that leads to ideas that
tend to be specific to delayed allocation and thus of limited benefit
for other write paths.

IOW, it's not really the delayed allocation case we should be so focused
on improving as much as the performance hit of unwritten extents in
general. We've already accepted the corresponding performance hit in
more common I/O paths in the name of correctness. The (preexisting)
impact of preallocated unwritten extents in more efficient write paths
vs. pure overwrites is far more prominent than the impact of unwritten
extents on buffered writes.

ISTM that the right thing to do here is merge this patch, finally fix
the last known stale data exposure vector, and then perhaps step back
and think about how we might improve performance of unwritten extents
(or whatever alternate scheme to avoid stale data exposure we might
think up) regardless of allocation policy or write path. That might even
make a decent side topic associated with the SSD allocation policy topic
proposal Dave recently posted.

It looks like Christoph already reviewed the patch. I'm not sure if his
opinion changed at all after the subsequent discussion, but otherwise
that just leaves Dave's objection. Dave, any thoughts on this given the
test results and broader context? What do you think about getting this
patch merged and revisiting the whole unwritten extent thing
independently?

Brian

> --D
> 
> > 
> > Brian
> > 
> > > --D
> > > 
> > > > Cheers,
> > > > 
> > > > Dave.
> > > > -- 
> > > > Dave Chinner
> > > > david@fromorbit.com
> > > 
> > 
>
Christoph Hellwig May 17, 2020, 7:48 a.m. UTC | #8
On Thu, May 14, 2020 at 01:44:48PM -0400, Brian Foster wrote:
> It looks like Christoph already reviewed the patch. I'm not sure if his
> opinion changed at all after the subsequent discussion, but otherwise
> that just leaves Dave's objection. Dave, any thoughts on this given the
> test results and broader context? What do you think about getting this
> patch merged and revisiting the whole unwritten extent thing
> independently?

Absolutely no change of mind.  I think we need to fix the issue ASAP
and then look into performance improvements as soon as we get to it.
Darrick J. Wong May 19, 2020, 12:40 a.m. UTC | #9
On Sun, May 17, 2020 at 12:48:43AM -0700, Christoph Hellwig wrote:
> On Thu, May 14, 2020 at 01:44:48PM -0400, Brian Foster wrote:
> > It looks like Christoph already reviewed the patch. I'm not sure if his
> > opinion changed at all after the subsequent discussion, but otherwise
> > that just leaves Dave's objection. Dave, any thoughts on this given the
> > test results and broader context? What do you think about getting this
> > patch merged and revisiting the whole unwritten extent thing
> > independently?
> 
> Absolutely no change of mind.  I think we need to fix the issue ASAP
> and then look into performance improvements as soon as we get to it.

Hm, well, I do have a couple more patches to fix a couple of minor
regressions that fstests found...

--D
Dave Chinner May 20, 2020, 1:03 a.m. UTC | #10
On Thu, May 14, 2020 at 01:44:48PM -0400, Brian Foster wrote:
> On Thu, May 14, 2020 at 09:33:17AM -0700, Darrick J. Wong wrote:
> ISTM that the right thing to do here is merge this patch, finally fix
> the last known stale data exposure vector, and then perhaps step back
> and think about how we might improve performance of unwritten extents
> (or whatever alternate scheme to avoid stale data exposure we might
> think up) regardless of allocation policy or write path. That might even
> make a decent side topic associated with the SSD allocation policy topic
> proposal Dave recently posted.
> 
> It looks like Christoph already reviewed the patch. I'm not sure if his
> opinion changed at all after the subsequent discussion, but otherwise
> that just leaves Dave's objection. Dave, any thoughts on this given the
> test results and broader context? What do you think about getting this
> patch merged and revisiting the whole unwritten extent thing
> independently?

I guess when we look at this in the broader context of "buffered IO
already sucks real bad for high performance IO" then a few percent
here or there doesn't really matter.

Note, however, that the difference between dio+aio and buffered
writes has nothing to do with unwritten extents - what you are
seeing is the cost of the CPU copying the data into the page cache
in the user process context vs just submitting IO. Essentially, IO
submission time is way higher for buffered IO because of the data
copy, hence a CPU can do less of them per second. IOWs, unwritten
extents are not significant compared to the overhead the page cache
adds to the IO path....

Cheers,

Dave.
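
The aio+dio numbers above come from a submission model with no page-cache copy in the submitter's context, which is the distinction Dave draws here. A minimal libaio sketch of the queue-depth-32 pattern, assuming an XFS file and illustrative offsets; O_DIRECT requires aligned buffers, and the program links with -laio.

#define _GNU_SOURCE	/* O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <string.h>

#define QD 32

int main(void)
{
	struct iocb iocbs[QD], *iocbp[QD];
	struct io_event events[QD];
	io_context_t ctx = 0;
	void *buf;
	int fd = open("testfile", O_CREAT | O_WRONLY | O_DIRECT, 0644);

	if (fd < 0 || io_setup(QD, &ctx) < 0)
		return 1;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 'x', 4096);

	/* Submit 32 random 4k direct writes in one syscall; dio already
	 * allocates unwritten up front and converts at I/O completion. */
	for (int i = 0; i < QD; i++) {
		io_prep_pwrite(&iocbs[i], fd, buf, 4096,
			       ((off_t)rand() % (1 << 18)) * 4096);
		iocbp[i] = &iocbs[i];
	}
	if (io_submit(ctx, QD, iocbp) != QD)
		return 1;
	if (io_getevents(ctx, QD, QD, events, NULL) != QD)
		return 1;
	io_destroy(ctx);
	return 0;
}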

Patch

diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index 4544732d09a5..220ea1dc67ab 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -4190,17 +4190,7 @@  xfs_bmapi_allocate(
 	bma->got.br_blockcount = bma->length;
 	bma->got.br_state = XFS_EXT_NORM;
 
-	/*
-	 * In the data fork, a wasdelay extent has been initialized, so
-	 * shouldn't be flagged as unwritten.
-	 *
-	 * For the cow fork, however, we convert delalloc reservations
-	 * (extents allocated for speculative preallocation) to
-	 * allocated unwritten extents, and only convert the unwritten
-	 * extents to real extents when we're about to write the data.
-	 */
-	if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
-	    (bma->flags & XFS_BMAPI_PREALLOC))
+	if (bma->flags & XFS_BMAPI_PREALLOC)
 		bma->got.br_state = XFS_EXT_UNWRITTEN;
 
 	if (bma->wasdel)
@@ -4608,8 +4598,24 @@  xfs_bmapi_convert_delalloc(
 	bma.offset = bma.got.br_startoff;
 	bma.length = max_t(xfs_filblks_t, bma.got.br_blockcount, MAXEXTLEN);
 	bma.minleft = xfs_bmapi_minleft(tp, ip, whichfork);
+
+	/*
+	 * When we're converting the delalloc reservations backing dirty pages
+	 * in the page cache, we must be careful about how we create the new
+	 * extents:
+	 *
+	 * New CoW fork extents are created unwritten, turned into real extents
+	 * when we're about to write the data to disk, and mapped into the data
+	 * fork after the write finishes.  End of story.
+	 *
+	 * New data fork extents must be mapped in as unwritten and converted
+	 * to real extents after the write succeeds to avoid exposing stale
+	 * disk contents if we crash.
+	 */
 	if (whichfork == XFS_COW_FORK)
 		bma.flags = XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
+	else
+		bma.flags = XFS_BMAPI_PREALLOC;
 
 	if (!xfs_iext_peek_prev_extent(ifp, &bma.icur, &bma.prev))
 		bma.prev.br_startoff = NULLFILEOFF;