
[v3,1/2] xfs: set a mount flag when perag reservation is active

Message ID 20210318161707.723742-2-bfoster@redhat.com (mailing list archive)
State Superseded
Series: xfs: set aside allocation btree blocks from block reservation

Commit Message

Brian Foster March 18, 2021, 4:17 p.m. UTC
perag reservation is enabled at mount time on a per AG basis. The
upcoming in-core allocation btree accounting mechanism needs to know
when reservation is enabled and that all perag AGF contexts are
initialized. As a preparation step, set a flag in the mount
structure and unconditionally initialize the pagf on all mounts
where at least one reservation is active.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_ag_resv.c | 24 ++++++++++++++----------
 fs/xfs/xfs_mount.h          |  1 +
 2 files changed, 15 insertions(+), 10 deletions(-)

Comments

Dave Chinner March 18, 2021, 8:55 p.m. UTC | #1
On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> perag reservation is enabled at mount time on a per AG basis. The
> upcoming in-core allocation btree accounting mechanism needs to know
> when reservation is enabled and that all perag AGF contexts are
> initialized. As a preparation step, set a flag in the mount
> structure and unconditionally initialize the pagf on all mounts
> where at least one reservation is active.

I'm not sure this is a good idea. AFAICT, this means just about any
filesystem with finobt, reflink and/or rmap will now typically read
every AGF header in the filesystem at mount time. That means pretty
much every v5 filesystem in production...

We've always tried to avoid needing to read all AG headers at
mount time because that does not scale when we have really large
filesystems (I'm talking petabytes here). We should only read AG
headers if there is something not fully recovered during the mount
(i.e. slow path) and not on every mount.

Needing to do a few thousand synchronous read IOs during mount makes
mount very slow, and as such we always try to do dynamic
instantiation of AG headers...  Testing I've done with exabyte scale
filesystems (>10^6 AGs) shows that it can take minutes for mount to
run when each AG header needs to be read, and that's on SSDs where
the individual read latency is only a couple of hundred
microseconds. On spinning disks that can do 200 IOPS, we're
potentially talking hours just to mount really large filesystems...
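
To put rough numbers on that: an 8EB filesystem with maximally sized
(1TB) AGs is ~8 million AGs, so at 200 IOPS that's

	8,000,000 reads / 200 IOPS = 40,000s, or roughly 11 hours

of synchronous IO just to pull in the AGF headers.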

Hence I don't think that any algorithm that requires reading every
AGF header in the filesystem at mount time on every v5 filesystem
already out there in production (because finobt triggers this) is a
particularly good idea...

Cheers,

Dave.
Darrick J. Wong March 18, 2021, 10:19 p.m. UTC | #2
On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > perag reservation is enabled at mount time on a per AG basis. The
> > upcoming in-core allocation btree accounting mechanism needs to know
> > when reservation is enabled and that all perag AGF contexts are
> > initialized. As a preparation step, set a flag in the mount
> > structure and unconditionally initialize the pagf on all mounts
> > where at least one reservation is active.
> 
> I'm not sure this is a good idea. AFAICT, this means just about any
> filesystem with finobt, reflink and/or rmap will now typically read
> every AGF header in the filesystem at mount time. That means pretty
> much every v5 filesystem in production...

They already do that, because the AG headers are where we store the
btree block counts.

> We've always tried to avoid needing to read all AG headers at
> mount time because that does not scale when we have really large
> filesystems (I'm talking petabytes here). We should only read AG
> headers if there is something not fully recovered during the mount
> (i.e. slow path) and not on every mount.
> 
> Needing to do a few thousand synchronous read IOs during mount makes
> mount very slow, and as such we always try to do dynamic
> instantiation of AG headers...  Testing I've done with exabyte scale
> filesystems (>10^6 AGs) shows that it can take minutes for mount to
> run when each AG header needs to be read, and that's on SSDs where
> the individual read latency is only a couple of hundred
> microseconds. On spinning disks that can do 200 IOPS, we're
> potentially talking hours just to mount really large filesystems...

Is that with reflink enabled?  Reflink always scans the right edge of
the refcount btree at mount to clean out stale COW staging extents, and
(prior to the introduction of the inode btree counts feature last year)
we also had to walk the entire finobt to find out how big it is.

TBH I think the COW recovery and the AG block reservation pieces are
prime candidates for throwing at an xfs_pwork workqueue so we can
perform those scans in parallel.
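
Something like this untested sketch, perhaps - the xfs_pwork_* calls
are per fs/xfs/xfs_pwork.h (signatures from memory), and the work item
struct and helper names here are made up:

struct xfs_ag_resv_work {
	struct xfs_pwork	pwork;
	xfs_agnumber_t		agno;
};

/* worker: read this AG's headers and set up its reservations */
static int
xfs_ag_resv_init_work(
	struct xfs_mount	*mp,
	struct xfs_pwork	*pwork)
{
	struct xfs_ag_resv_work	*w = container_of(pwork,
					struct xfs_ag_resv_work, pwork);
	struct xfs_perag	*pag = xfs_perag_get(mp, w->agno);
	int			error;

	error = xfs_ag_resv_init(pag, NULL);
	xfs_perag_put(pag);
	kmem_free(w);
	return error;
}

/* mount path: fan the per-AG scans out across a workqueue */
	struct xfs_pwork_ctl	pctl;
	struct xfs_ag_resv_work	*w;
	xfs_agnumber_t		agno;
	int			error;

	error = xfs_pwork_init(mp, &pctl, xfs_ag_resv_init_work,
			"ag_resv_init");
	if (error)
		return error;
	for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
		w = kmem_zalloc(sizeof(*w), 0);
		w->agno = agno;
		xfs_pwork_queue(&pctl, &w->pwork);
	}
	return xfs_pwork_destroy(&pctl);	/* waits for all workers */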

> Hence I don't think that any algorithm that requires reading every
> AGF header in the filesystem at mount time on every v5 filesystem
> already out there in production (because finobt triggers this) is a
> particularly good idea...

Perhaps not, but the horse bolted 5 years ago. :/

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner March 19, 2021, 1:05 a.m. UTC | #3
On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > perag reservation is enabled at mount time on a per AG basis. The
> > > upcoming in-core allocation btree accounting mechanism needs to know
> > > when reservation is enabled and that all perag AGF contexts are
> > > initialized. As a preparation step, set a flag in the mount
> > > structure and unconditionally initialize the pagf on all mounts
> > > where at least one reservation is active.
> > 
> > I'm not sure this is a good idea. AFAICT, this means just about any
> > filesystem with finobt, reflink and/or rmap will now typically read
> > every AGF header in the filesystem at mount time. That means pretty
> > much every v5 filesystem in production...
> 
> They already do that, because the AG headers are where we store the
> btree block counts.

Oh, we're brute forcing AG reservation space? I thought we were
doing something smarter than that, because I'm sure this isn't the
first time I've mentioned this problem....

> > We've always tried to avoid needing to read all AG headers at
> > mount time because that does not scale when we have really large
> > filesystems (I'm talking petabytes here). We should only read AG
> > headers if there is something not fully recovered during the mount
> > (i.e. slow path) and not on every mount.
> > 
> > Needing to do a few thousand synchronous read IOs during mount makes
> > mount very slow, and as such we always try to do dynamic
> > instantiation of AG headers...  Testing I've done with exabyte scale
> > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > run when each AG header needs to be read, and that's on SSDs where
> > the individual read latency is only a couple of hundred
> > microseconds. On spinning disks that can do 200 IOPS, we're
> > potentially talking hours just to mount really large filesystems...
> 
> Is that with reflink enabled?  Reflink always scans the right edge of
> the refcount btree at mount to clean out stale COW staging extents,

Aren't they cleaned up at unmount when the inode is inactivated?
i.e. isn't this something that should only be done on an unclean
mount?

> and
> (prior to the introduction of the inode btree counts feature last year)
> we also had to walk the entire finobt to find out how big it is.

ugh, I forgot about the fact we had to add that wart because we
screwed up the space reservations for finobt operations...

As for large scale testing, I suspect I turned everything optional
off when I last did this testing, because mkfs currently requires a
lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
-K -f -d agcount=10000 ... takes

		mkfs time	mount time
-m crc=0	15s		1s
-m rmapbt=1	25s		6s

Multiply those times by another 1000 to get to an 8EB
filesystem and the difference is several hours of mkfs time and
a couple of hours of mount time....
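
(For the rmapbt=1 row that's 25s x 1000 ~= 7 hours of mkfs and
6s x 1000 = 6,000s ~= 1.7 hours of mount.)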

So from the numbers, it is pretty likely I didn't test anything that
actually required iterating 8 million AGs at mount time....

> TBH I think the COW recovery and the AG block reservation pieces are
> prime candidates for throwing at an xfs_pwork workqueue so we can
> perform those scans in parallel.

As I mentioned on #xfs, I think we only need to do the AG read if we
are near enospc. i.e. we can take the entire reservation at mount
time (which is fixed per-ag) and only take away the used from the
reservation (i.e. return to the free space pool) when we actually
access the AGF/AGI the first time. Or when we get an ENOSPC
event, which might occur when we try to take the fixed reservation
at mount time...

> > Hence I don't think that any algorithm that requires reading every
> > AGF header in the filesystem at mount time on every v5 filesystem
> > already out there in production (because finobt triggers this) is a
> > particularly good idea...
> 
> Perhaps not, but the horse bolted 5 years ago. :/

Let's go catch it :P

Cheers,

Dave.
Darrick J. Wong March 19, 2021, 1:34 a.m. UTC | #4
On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > perag reservation is enabled at mount time on a per AG basis. The
> > > > upcoming in-core allocation btree accounting mechanism needs to know
> > > > when reservation is enabled and that all perag AGF contexts are
> > > > initialized. As a preparation step, set a flag in the mount
> > > > structure and unconditionally initialize the pagf on all mounts
> > > > where at least one reservation is active.
> > > 
> > > I'm not sure this is a good idea. AFAICT, this means just about any
> > > filesystem with finobt, reflink and/or rmap will now typically read
> > > every AGF header in the filesystem at mount time. That means pretty
> > > much every v5 filesystem in production...
> > 
> > They already do that, because the AG headers are where we store the
> > btree block counts.
> 
> Oh, we're brute forcing AG reservation space? I thought we were
> doing something smarter than that, because I'm sure this isn't the
> first time I've mentioned this problem....

Probably not... :)

> > > We've always tried to avoid needing to read all AG headers at
> > > mount time because that does not scale when we have really large
> > > filesystems (I'm talking petabytes here). We should only read AG
> > > headers if there is something not fully recovered during the mount
> > > (i.e. slow path) and not on every mount.
> > > 
> > > Needing to do a few thousand synchronous read IOs during mount makes
> > > mount very slow, and as such we always try to do dynamic
> > > instantiation of AG headers...  Testing I've done with exabyte scale
> > > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > > run when each AG header needs to be read, and that's on SSDs where
> > > the individual read latency is only a couple of hundred
> > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > potentially talking hours just to mount really large filesystems...
> > 
> > Is that with reflink enabled?  Reflink always scans the right edge of
> > the refcount btree at mount to clean out stale COW staging extents,
> 
> Aren't they cleaned up at unmount when the inode is inactivated?

Yes.  Or when the blockgc timeout expires, or when ENOSPC pushes
blockgc...

> i.e. isn't this something that should only be done on an unclean
> mount?

Years ago (back when reflink was experimental) we left it that way so
that if there were any serious implementation bugs we wouldn't leak
blocks everywhere.  I think we forgot to take it out.

> > and
> > (prior to the introduction of the inode btree counts feature last year)
> > we also had to walk the entire finobt to find out how big it is.
> 
> ugh, I forgot about the fact we had to add that wart because we
> screwed up the space reservations for finobt operations...

Yeah.

> As for large scale testing, I suspect I turned everything optional
> off when I last did this testing, because mkfs currently requires a
> lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> -K -f -d agcount=10000 ... takes
> 
> 		mkfs time	mount time
> -m crc=0	15s		1s
> -m rmapbt=1	25s		6s
> 
> Multiply those times by another 1000 to get to an 8EB
> filesystem and the difference is several hours of mkfs time and
> a couple of hours of mount time....
> 
> So from the numbers, it is pretty likely I didn't test anything that
> actually required iterating 8 million AGs at mount time....
> 
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.

[This didn't turn out to be difficult at all.]

> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return to the free space pool) when we actually
> access the AGF/AGI the first time. Or when we get an ENOSPC
> event, which might occur when we try to take the fixed reservation
> at mount time...

<nod> That's probably not hard.  Compute the theoretical maximum size of
the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try to
reserve that much, and if we get it, we can trivially initialise the
per-AG reservation structure.  If that fails, we fall back to the
scanning thing we do now:

When we set pag[if]_init in the per-AG structure, we can back off the
space reservation by the number of blocks in the trees tracked by that
AG header, which will add that quantity to fdblocks.  We can handle the
ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
if they aren't already.
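
Roughly, where everything except xfs_mod_fdblocks() is an invented
name:

	/*
	 * Mount: take the worst case reservation up front without
	 * touching any AG headers.  xfs_ag_resv_max_blocks() would
	 * return the theoretical max btree size for one AG.
	 */
	max = xfs_ag_resv_max_blocks(mp) * mp->m_sb.sb_agcount;
	error = xfs_mod_fdblocks(mp, -(int64_t)max, false);
	if (error == -ENOSPC)
		return xfs_ag_resv_init_all(mp);	/* today's scan */

	/*
	 * First read of an AGF/AGI: trim this AG's share of the
	 * overestimate down to its actual usage and return the
	 * excess to fdblocks.
	 */
	xfs_mod_fdblocks(mp, xfs_ag_resv_max_blocks(mp) -
			 xfs_ag_resv_used(pag), false);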

> > > Hence I don't think that any algorithm that requires reading every
> > > AGF header in the filesystem at mount time on every v5 filesystem
> > > already out there in production (because finobt triggers this) is a
> > > particularly good idea...
> > 
> > Perhaps not, but the horse bolted 5 years ago. :/
> 
> Let's go catch it :P

FWIW I previously fixed the rmapbt/reflink transaction reservations
being unnecessarily large, so (provided deferred inode inactivation gets
reviewed this cycle) I can try to put all these reflink cleanups
together for the next cycle.

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner March 19, 2021, 1:43 a.m. UTC | #5
On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > TBH I think the COW recovery and the AG block reservation pieces are
> > prime candidates for throwing at an xfs_pwork workqueue so we can
> > perform those scans in parallel.
> 
> As I mentioned on #xfs, I think we only need to do the AG read if we
> are near enospc. i.e. we can take the entire reservation at mount
> time (which is fixed per-ag) and only take away the used from the
> reservation (i.e. return to the free space pool) when we actually
> access the AGF/AGI the first time. Or when we get an ENOSPC
> event, which might occur when we try to take the fixed reservation
> at mount time...

Which leaves the question about when we need to actually do the
accounting needed to fix the bug Brian is trying to fix. Can that be
delayed until we read the AGFs or have an ENOSPC event occur? Or
maybe some other "we are near ENOSPC and haven't read all AGFs yet"
threshold/trigger?

If that's the case, then I'm happy to have this patchset proceed as
it stands under the understanding that there will be follow-up to
make the clean, lots of space free mount case avoid reading the
AG headers.

If it can't be made constrained, then I think we probably need to
come up with a different approach that doesn't require reading every
AG header on every mount...

Cheers,

Dave.
Darrick J. Wong March 19, 2021, 1:48 a.m. UTC | #6
On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> > 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> Which leaves the question about when we need to actually do the
> accounting needed to fix the bug Brian is trying to fix. Can that be
> delayed until we read the AGFs or have an ENOSPC event occur? Or
> maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> threshold/trigger?

Or just load them in the background and let mount() return to userspace?

> If that's the case, then I'm happy to have this patchset proceed as
> it stands under the understanding that there will be follow-up to
> make the clean, lots of space free mount case avoid reading the
> AG headers.
> 
> If it can't be made constrained, then I think we probably need to
> come up with a different approach that doesn't require reading every
> AG header on every mount...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner March 19, 2021, 2:08 a.m. UTC | #7
On Thu, Mar 18, 2021 at 06:48:21PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > perform those scans in parallel.
> > > 
> > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > are near enospc. i.e. we can take the entire reservation at mount
> > > time (which is fixed per-ag) and only take away the used from the
> > > reservation (i.e. return to the free space pool) when we actually
> > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > event, which might occur when we try to take the fixed reservation
> > > at mount time...
> > 
> > Which leaves the question about when we need to actually do the
> > accounting needed to fix the bug Brian is trying to fix. Can that be
> > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > threshold/trigger?
> 
> Or just load them in the background and let mount() return to userspace?

Perhaps, but that tends to have impacts on things that run
immediately after mount. e.g. it will screw with benchmarks in
unpredictable ways and I'm not going to like that at all. :(

i.e. I like the deterministic, repeatable behaviour we have right
now because it makes back-to-back performance testing easy to reason
about why performance/behaviour changed...

Cheers,

Dave.
Brian Foster March 19, 2021, 2:54 p.m. UTC | #8
On Thu, Mar 18, 2021 at 06:34:30PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > > perag reservation is enabled at mount time on a per AG basis. The
> > > > > upcoming in-core allocation btree accounting mechanism needs to know
> > > > > when reservation is enabled and that all perag AGF contexts are
> > > > > initialized. As a preparation step, set a flag in the mount
> > > > > structure and unconditionally initialize the pagf on all mounts
> > > > > where at least one reservation is active.
> > > > 
> > > > I'm not sure this is a good idea. AFAICT, this means just about any
> > > > filesystem with finobt, reflink and/or rmap will now typically read
> > > > every AGF header in the filesystem at mount time. That means pretty
> > > > much every v5 filesystem in production...
> > > 
> > > They already do that, because the AG headers are where we store the
> > > btree block counts.
> > 
> > Oh, we're brute forcing AG reservation space? I thought we were
> > doing something smarter than that, because I'm sure this isn't the
> > first time I've mentioned this problem....
> 
> Probably not... :)
> 
> > > > We've always tried to avoid needing to read all AG headers at
> > > > mount time because that does not scale when we have really large
> > > > filesystems (I'm talking petabytes here). We should only read AG
> > > > headers if there is something not fully recovered during the mount
> > > > (i.e. slow path) and not on every mount.
> > > > 
> > > > Needing to do a few thousand synchronous read IOs during mount makes
> > > > mount very slow, and as such we always try to do dynamic
> > > > instantiation of AG headers...  Testing I've done with exabyte scale
> > > > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > > > run when each AG header needs to be read, and that's on SSDs where
> > > > the individual read latency is only a couple of hundred
> > > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > > potentially talking hours just to mount really large filesystems...
> > > 
> > > Is that with reflink enabled?  Reflink always scans the right edge of
> > > the refcount btree at mount to clean out stale COW staging extents,
> > 
> > Aren't they cleaned up at unmount when the inode is inactivated?
> 
> Yes.  Or when the blockgc timeout expires, or when ENOSPC pushes
> blockgc...
> 
> > i.e. isn't this something that should only be done on an unclean
> > mount?
> 
> Years ago (back when reflink was experimental) we left it that way so
> that if there were any serious implementation bugs we wouldn't leak
> blocks everywhere.  I think we forgot to take it out.
> 
> > > and
> > > (prior to the introduction of the inode btree counts feature last year)
> > > we also had to walk the entire finobt to find out how big it is.
> > 
> > ugh, I forgot about the fact we had to add that wart because we
> > screwed up the space reservations for finobt operations...
> 
> Yeah.
> 
> > As for large scale testing, I suspect I turned everything optional
> > off when I last did this testing, because mkfs currently requires a
> > lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> > -K -f -d agcount=10000 ... takes
> > 
> > 		mkfs time	mount time
> > -m crc=0	15s		1s
> > -m rmapbt=1	25s		6s
> > 
> > Multiply those times by another 1000 to get to an 8EB
> > filesystem and the difference is several hours of mkfs time and
> > a couple of hours of mount time....
> > 
> > So from the numbers, it is pretty likely I didn't test anything that
> > actually required iterating 8 million AGs at mount time....
> > 
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> 
> [This didn't turn out to be difficult at all.]
> 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> <nod> That's probably not hard.  Compute the theoretical maximum size of
> the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try to
> reserve that much, and if we get it, we can trivially initialise the
> per-AG reservation structure.  If that fails, we fall back to the
> scanning thing we do now:
> 

Even in the failure case, we might be able to limit the mount time
scanning that takes place by just scanning until we've found enough AGs
with consumed reservation such that the mount time estimated reservation
succeeds. Of course, the worst case would always be a full scan (either
due to -ENOSPC on the res or very shortly after mount since the res
might have left the fs near -ENOSPC) so it might not be worth it unless
there's value and the logic is simple..

Brian

> When we set pag[if]_init in the per-AG structure, we can back off the
> space reservation by the number of blocks in the trees tracked by that
> AG header, which will add that quantity to fdblocks.  We can handle the
> ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
> if they aren't already.
> 
> > > > Hence I don't think that any algorithm that requires reading every
> > > > AGF header in the filesystem at mount time on every v5 filesystem
> > > > already out there in production (because finobt triggers this) is a
> > > > particularly good idea...
> > > 
> > > Perhaps not, but the horse bolted 5 years ago. :/
> > 
> > Let's go catch it :P
> 
> FWIW I previously fixed the rmapbt/reflink transaction reservations
> being unnecessarily large, so (provided deferred inode inactivation gets
> reviewed this cycle) I can try to put all these reflink cleanups
> together for the next cycle.
> 
> --D
> 
> > 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
>
Brian Foster March 19, 2021, 2:54 p.m. UTC | #9
On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> > 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> Which leaves the question about when we need to actually do the
> accounting needed to fix the bug Brian is trying to fix. Can that be
> delayed until we read the AGFs or have an ENOSPC event occur? Or
> maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> threshold/trigger?
> 

Technically there isn't a hard requirement to read in any AGFs at mount
time. The tradeoff is that this leaves a gap in effectiveness until at least
the majority of allocbt blocks have been accounted for (via perag agf
initialization). The in-core counter simply folds into the reservation
set aside value, so it would just remain at 0 at reservation time and
behave as if the mechanism didn't exist in the first place. The obvious
risk is a user can mount the fs and immediately acquire reservation
without having populated the counter from enough AGs to prevent the
reservation overrun problem. For that reason, I didn't really consider
the "lazy" init approach a suitable fix and hooked onto the (mostly)
preexisting perag res behavior to initialize the appropriate structures
at mount time.
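
For reference, the fold amounts to something like this in
xfs_mod_fdblocks() (sketch of what patch 2/2 does; the final names
might differ):

	/* blocks tied up in the free space btrees are not available */
	set_aside = mp->m_alloc_set_aside +
		    atomic64_read(&mp->m_allocbt_blks);

	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
				     XFS_FDBLOCKS_BATCH) >= 0) {
		/* we had space! */
		return 0;
	}

So an unpopulated m_allocbt_blks reads as zero and the check degrades
to the existing m_alloc_set_aside-only behavior.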

If that underlying mount time behavior changes, it's not totally clear
to me how that impacts this patch. If the perag res change relies on an
overestimated mount time reservation and a fallback to a hard scan on
-ENOSPC, then I wonder whether the overestimated reservation might
effectively subsume whatever the allocbt set aside might be for that AG.
If so, and the perag init effectively transfers excess reservation back
to free space at the same time allocbt blocks are accounted for (and set
aside from subsequent reservations), perhaps that has a similar net
effect as the current behavior (of initializing the allocbt count at
mount time)..?

One problem is that this might be hard to reason about even with code in
place, let alone right now when the targeted behavior is still
vaporware. OTOH, I suppose that if we do know right now that the perag
res scan will still fall back to mount time scans beyond some low free
space threshold, perhaps it's just a matter of factoring allocbt set
aside into the threshold somehow so that we know the counter will always
be initialized before a user can over-reserve blocks. As it is, I don't
really have a strong opinion on whether we should try to make this fix
now and preserve it, or otherwise table it and revisit once we know what
the resulting perag res code will look like. Thoughts?

Brian

> If that's the case, then I'm happy to have this patchset proceed as
> it stands under the understanding that there will be follow-up to
> make the clean, lots of space free mount case avoid reading the
> AG headers.
> 
> If it can't be made constrained, then I think we probably need to
> come up with a different approach that doesn't require reading every
> AG header on every mount...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>
Dave Chinner March 23, 2021, 10:40 p.m. UTC | #10
On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote:
> On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > perform those scans in parallel.
> > > 
> > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > are near enospc. i.e. we can take the entire reservation at mount
> > > time (which is fixed per-ag) and only take away the used from the
> > > reservation (i.e. return to the free space pool) when we actually
> > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > event, which might occur when we try to take the fixed reservation
> > > at mount time...
> > 
> > Which leaves the question about when we need to actually do the
> > accounting needed to fix the bug Brian is trying to fix. Can that be
> > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > threshold/trigger?
> > 
> 
> Technically there isn't a hard requirement to read in any AGFs at mount
> time. The tradeoff is that this leaves a gap in effectiveness until at least
> the majority of allocbt blocks have been accounted for (via perag agf
> initialization). The in-core counter simply folds into the reservation
> set aside value, so it would just remain at 0 at reservation time and
> behave as if the mechanism didn't exist in the first place. The obvious
> risk is a user can mount the fs and immediately acquire reservation
> without having populated the counter from enough AGs to prevent the
> reservation overrun problem. For that reason, I didn't really consider
> the "lazy" init approach a suitable fix and hooked onto the (mostly)
> preexisting perag res behavior to initialize the appropriate structures
> at mount time.
> 
> If that underlying mount time behavior changes, it's not totally clear
> to me how that impacts this patch. If the perag res change relies on an
> overestimated mount time reservation and a fallback to a hard scan on
> -ENOSPC, then I wonder whether the overestimated reservation might
> effectively subsume whatever the allocbt set aside might be for that AG.
> If so, and the perag init effectively transfers excess reservation back
> to free space at the same time allocbt blocks are accounted for (and set
> aside from subsequent reservations), perhaps that has a similar net
> effect as the current behavior (of initializing the allocbt count at
> mount time)..?
> 
> One problem is that this might be hard to reason about even with code in
> place, let alone right now when the targeted behavior is still
> vaporware. OTOH, I suppose that if we do know right now that the perag
> res scan will still fall back to mount time scans beyond some low free
> space threshold, perhaps it's just a matter of factoring allocbt set
> aside into the threshold somehow so that we know the counter will always
> be initialized before a user can over-reserve blocks.

Yeah, that seems reasonable to me. I don't think it's difficult to
handle - just set the setaside to maximum at mount time, then as we
read in AGFs we replace the maximum setaside for that AG with the
actual btree block usage. If we hit ENOSPC, then we can read in the
uninitialised pags to reduce the setaside from the maximum to the
actual values and return that free space back to the global pool...
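
i.e. an untested sketch of the ENOSPC path, using the pagf_init flag
and xfs_alloc_pagf_init() as they exist today, with the setaside
bookkeeping implied:

	/*
	 * Read in any AGFs we haven't seen yet so their worst case
	 * setaside collapses to actual btree usage and the excess
	 * goes back to fdblocks.
	 */
	xfs_agnumber_t		agno;
	int			error = 0;

	for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
		struct xfs_perag	*pag = xfs_perag_get(mp, agno);

		if (!pag->pagf_init)
			error = xfs_alloc_pagf_init(mp, NULL, agno, 0);
		xfs_perag_put(pag);
		if (error)
			break;
	}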

> As it is, I don't
> really have a strong opinion on whether we should try to make this fix
> now and preserve it, or otherwise table it and revisit once we know what
> the resulting perag res code will look like. Thoughts?

It sounds like we have a solid plan to address the AG header access
at mount time. Adding this code now doesn't make anything worse,
nor does it appear to prevent us from fixing the AG header access
problem in the future. So I'm happy for this fix to go ahead as it
stands.

Cheers,

Dave.
Brian Foster March 24, 2021, 2:24 p.m. UTC | #11
On Wed, Mar 24, 2021 at 09:40:36AM +1100, Dave Chinner wrote:
> On Fri, Mar 19, 2021 at 10:54:25AM -0400, Brian Foster wrote:
> > On Fri, Mar 19, 2021 at 12:43:03PM +1100, Dave Chinner wrote:
> > > On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > > > TBH I think the COW recovery and the AG block reservation pieces are
> > > > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > > > perform those scans in parallel.
> > > > 
> > > > As I mentioned on #xfs, I think we only need to do the AG read if we
> > > > are near enospc. i.e. we can take the entire reservation at mount
> > > > time (which is fixed per-ag) and only take away the used from the
> > > > reservation (i.e. return to the free space pool) when we actually
> > > > access the AGF/AGI the first time. Or when we get an ENOSPC
> > > > event, which might occur when we try to take the fixed reservation
> > > > at mount time...
> > > 
> > > Which leaves the question about when we need to actually do the
> > > accounting needed to fix the bug Brian is trying to fix. Can that be
> > > delayed until we read the AGFs or have an ENOSPC event occur? Or
> > > maybe some other "we are near ENOSPC and haven't read all AGFs yet"
> > > threshold/trigger?
> > > 
> > 
> > Technically there isn't a hard requirement to read in any AGFs at mount
> > time. The tradeoff is that this leaves a gap in effectiveness until at least
> > the majority of allocbt blocks have been accounted for (via perag agf
> > initialization). The in-core counter simply folds into the reservation
> > set aside value, so it would just remain at 0 at reservation time and
> > behave as if the mechanism didn't exist in the first place. The obvious
> > risk is a user can mount the fs and immediately acquire reservation
> > without having populated the counter from enough AGs to prevent the
> > reservation overrun problem. For that reason, I didn't really consider
> > the "lazy" init approach a suitable fix and hooked onto the (mostly)
> > preexisting perag res behavior to initialize the appropriate structures
> > at mount time.
> > 
> > If that underlying mount time behavior changes, it's not totally clear
> > to me how that impacts this patch. If the perag res change relies on an
> > overestimated mount time reservation and a fallback to a hard scan on
> > -ENOSPC, then I wonder whether the overestimated reservation might
> > effectively subsume whatever the allocbt set aside might be for that AG.
> > If so, and the perag init effectively transfers excess reservation back
> > to free space at the same time allocbt blocks are accounted for (and set
> > aside from subsequent reservations), perhaps that has a similar net
> > effect as the current behavior (of initializing the allocbt count at
> > mount time)..?
> > 
> > One problem is that this might be hard to reason about even with code in
> > place, let alone right now when the targeted behavior is still
> > vaporware. OTOH, I suppose that if we do know right now that the perag
> > res scan will still fall back to mount time scans beyond some low free
> > space threshold, perhaps it's just a matter of factoring allocbt set
> > aside into the threshold somehow so that we know the counter will always
> > be initialized before a user can over-reserve blocks.
> 
> Yeah, that seems reasonable to me. I don't think it's difficult to
> handle - just set the setaside to maximum at mount time, then as we
> read in AGFs we replace the maximum setaside for that AG with the
> actual btree block usage. If we hit ENOSPC, then we can read in the
> uninitialised pags to reduce the setaside from the maximum to the
> actual values and return that free space back to the global pool...
> 

Ack. That seems like a generic enough fallback plan if the
overestimation of perag reservation doesn't otherwise cover the gap.

> > As it is, I don't
> > really have a strong opinion on whether we should try to make this fix
> > now and preserve it, or otherwise table it and revisit once we know what
> > the resulting perag res code will look like. Thoughts?
> 
> It sounds like we have a solid plan to address the AG header access
> at mount time. Adding this code now doesn't make anything worse,
> nor does it appear to prevent us from fixing the AG header access
> problem in the future. So I'm happy for this fix to go ahead as it
> stands.
> 

Ok, so is that a Rv-b..? ;)

So far after a quick skim back through the discussion, I don't have a
reason for a v4 of this series...

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>

Patch

diff --git a/fs/xfs/libxfs/xfs_ag_resv.c b/fs/xfs/libxfs/xfs_ag_resv.c
index fdfe6dc0d307..8e454097b905 100644
--- a/fs/xfs/libxfs/xfs_ag_resv.c
+++ b/fs/xfs/libxfs/xfs_ag_resv.c
@@ -250,6 +250,7 @@  xfs_ag_resv_init(
 	xfs_extlen_t			ask;
 	xfs_extlen_t			used;
 	int				error = 0;
+	bool				has_resv = false;
 
 	/* Create the metadata reservation. */
 	if (pag->pag_meta_resv.ar_asked == 0) {
@@ -287,6 +288,8 @@  xfs_ag_resv_init(
 			if (error)
 				goto out;
 		}
+		if (ask)
+			has_resv = true;
 	}
 
 	/* Create the RMAPBT metadata reservation */
@@ -300,18 +303,19 @@  xfs_ag_resv_init(
 		error = __xfs_ag_resv_init(pag, XFS_AG_RESV_RMAPBT, ask, used);
 		if (error)
 			goto out;
+		if (ask)
+			has_resv = true;
 	}
 
-#ifdef DEBUG
-	/* need to read in the AGF for the ASSERT below to work */
-	error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0);
-	if (error)
-		return error;
-
-	ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
-	       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
-	       pag->pagf_freeblks + pag->pagf_flcount);
-#endif
+	if (has_resv) {
+		mp->m_has_agresv = true;
+		error = xfs_alloc_pagf_init(mp, tp, pag->pag_agno, 0);
+		if (error)
+			return error;
+		ASSERT(xfs_perag_resv(pag, XFS_AG_RESV_METADATA)->ar_reserved +
+		       xfs_perag_resv(pag, XFS_AG_RESV_RMAPBT)->ar_reserved <=
+		       pag->pagf_freeblks + pag->pagf_flcount);
+	}
 out:
 	return error;
 }
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 659ad95fe3e0..489d9b2c53d9 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -139,6 +139,7 @@  typedef struct xfs_mount {
 	bool			m_fail_unmount;
 	bool			m_finobt_nores; /* no per-AG finobt resv. */
 	bool			m_update_sb;	/* sb needs update in mount */
+	bool			m_has_agresv;	/* perag reservations active */
 
 	/*
 	 * Bitsets of per-fs metadata that have been checked and/or are sick.