
[RFC,2/4] xfs: defer agfl block frees when dfops is available

Message ID 20171207185810.48757-3-bfoster@redhat.com (mailing list archive)
State Superseded, archived

Commit Message

Brian Foster Dec. 7, 2017, 6:58 p.m. UTC
The AGFL fixup code executes before every block allocation/free and
rectifies the AGFL based on the current, dynamic allocation
requirements of the fs. The AGFL must hold a minimum number of
blocks to satisfy a worst case split of the free space btrees caused
by the impending allocation operation. The AGFL is also updated to
maintain the implicit requirement for a minimum number of free slots
to satisfy a worst case join of the free space btrees.

Since the AGFL caches individual blocks, AGFL reduction typically
involves multiple, single block frees. We've had reports of
transaction overrun problems during certain workloads that boil down
to AGFL reduction freeing multiple blocks and consuming more space
in the log than was reserved for the transaction.

Since the objective of freeing AGFL blocks is to ensure free AGFL
slots are available for the upcoming allocation, one way to
address this problem is to release surplus blocks from the AGFL
immediately but defer the free of those blocks (similar to how
file-mapped blocks are unmapped from the file in one transaction and
freed via a deferred operation) until the transaction is rolled.
This turns AGFL reduction into an operation with predictable log
reservation consumption.

Add the capability to defer AGFL block frees when a deferred ops
list is handed to the AGFL fixup code. Deferring AGFL frees is a
conditional behavior based on whether the caller has populated the
new dfops field of the xfs_alloc_arg structure. A bit of
customization is required to handle deferred completion processing
because AGFL blocks are accounted against a separate reservation
pool and AGFL blocks are not inserted into the extent busy list when freed
(they are inserted when used and released back to the AGFL). Reuse
the majority of the existing deferred extent free infrastructure and
customize it appropriately to handle AGFL blocks.

Note that this patch only adds infrastructure. It does not change
behavior because no callers have been updated to pass dfops into the
allocation code.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c  | 51 ++++++++++++++++++++++++++++++---
 fs/xfs/libxfs/xfs_alloc.h  |  1 +
 fs/xfs/libxfs/xfs_defer.h  |  1 +
 fs/xfs/xfs_trace.h         |  2 ++
 fs/xfs/xfs_trans_extfree.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 121 insertions(+), 4 deletions(-)
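
Since no callers are converted by this patch, a minimal caller-side sketch may
help show what opting in would look like. The surrounding caller and values are
hypothetical; the only piece that is new relative to existing xfs_alloc_arg
usage is the args.dfops assignment added by this patch:

	struct xfs_alloc_arg	args;

	memset(&args, 0, sizeof(args));
	args.tp = tp;
	args.mp = mp;
	args.dfops = dfops;	/* opt in: AGFL frees in fix_freelist are deferred */
	args.fsbno = XFS_AGB_TO_FSB(mp, agno, 0);
	args.type = XFS_ALLOCTYPE_START_BNO;
	args.minlen = 1;
	args.maxlen = len;
	args.prod = 1;
	args.total = total;

	error = xfs_alloc_vextent(&args);
	if (error)
		return error;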

Comments

Dave Chinner Dec. 7, 2017, 10:41 p.m. UTC | #1
On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> The AGFL fixup code executes before every block allocation/free and
> rectifies the AGFL based on the current, dynamic allocation
> requirements of the fs. The AGFL must hold a minimum number of
> blocks to satisfy a worst case split of the free space btrees caused
> by the impending allocation operation. The AGFL is also updated to
> maintain the implicit requirement for a minimum number of free slots
> to satisfy a worst case join of the free space btrees.
> 
> Since the AGFL caches individual blocks, AGFL reduction typically
> involves multiple, single block frees. We've had reports of
> transaction overrun problems during certain workloads that boil down
> to AGFL reduction freeing multiple blocks and consuming more space
> in the log than was reserved for the transaction.
> 
> Since the objective of freeing AGFL blocks is to ensure free AGFL
> free slots are available for the upcoming allocation, one way to
> address this problem is to release surplus blocks from the AGFL
> immediately but defer the free of those blocks (similar to how
> file-mapped blocks are unmapped from the file in one transaction and
> freed via a deferred operation) until the transaction is rolled.
> This turns AGFL reduction into an operation with predictable log
> reservation consumption.
> 
> Add the capability to defer AGFL block frees when a deferred ops
> list is handed to the AGFL fixup code. Deferring AGFL frees is a
> conditional behavior based on whether the caller has populated the
> new dfops field of the xfs_alloc_arg structure. A bit of
> customization is required to handle deferred completion processing
> because AGFL blocks are accounted against a separate reservation
> pool and AGFL are not inserted into the extent busy list when freed
> (they are inserted when used and released back to the AGFL). Reuse
> the majority of the existing deferred extent free infrastructure and
> customize it appropriately to handle AGFL blocks.

Ok, so it uses the EFI/EFD to make sure that the block freeing is
logged and replayed. So my question is:

> +/*
> + * AGFL blocks are accounted differently in the reserve pools and are not
> + * inserted into the busy extent list.
> + */
> +STATIC int
> +xfs_agfl_free_finish_item(
> +	struct xfs_trans		*tp,
> +	struct xfs_defer_ops		*dop,
> +	struct list_head		*item,
> +	void				*done_item,
> +	void				**state)
> +{

How does this function get called by log recovery when processing
the EFI as there is no flag in the EFI that says this was an AGFL
block?

That said, I haven't traced through whether this matters or not,
but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
and that avoids accounting the free to the superblock counters
because the block is already accounted as free space....

Cheers,

Dave.
Dave Chinner Dec. 7, 2017, 10:54 p.m. UTC | #2
On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> > The AGFL fixup code executes before every block allocation/free and
> > rectifies the AGFL based on the current, dynamic allocation
> > requirements of the fs. The AGFL must hold a minimum number of
> > blocks to satisfy a worst case split of the free space btrees caused
> > by the impending allocation operation. The AGFL is also updated to
> > maintain the implicit requirement for a minimum number of free slots
> > to satisfy a worst case join of the free space btrees.
> > 
> > Since the AGFL caches individual blocks, AGFL reduction typically
> > involves multiple, single block frees. We've had reports of
> > transaction overrun problems during certain workloads that boil down
> > to AGFL reduction freeing multiple blocks and consuming more space
> > in the log than was reserved for the transaction.
> > 
> > Since the objective of freeing AGFL blocks is to ensure free AGFL
> > free slots are available for the upcoming allocation, one way to
> > address this problem is to release surplus blocks from the AGFL
> > immediately but defer the free of those blocks (similar to how
> > file-mapped blocks are unmapped from the file in one transaction and
> > freed via a deferred operation) until the transaction is rolled.
> > This turns AGFL reduction into an operation with predictable log
> > reservation consumption.
> > 
> > Add the capability to defer AGFL block frees when a deferred ops
> > list is handed to the AGFL fixup code. Deferring AGFL frees is a
> > conditional behavior based on whether the caller has populated the
> > new dfops field of the xfs_alloc_arg structure. A bit of
> > customization is required to handle deferred completion processing
> > because AGFL blocks are accounted against a separate reservation
> > pool and AGFL are not inserted into the extent busy list when freed
> > (they are inserted when used and released back to the AGFL). Reuse
> > the majority of the existing deferred extent free infrastructure and
> > customize it appropriately to handle AGFL blocks.
> 
> Ok, so it uses the EFI/EFD to make sure that the block freeing is
> logged and replayed. So my question is:
> 
> > +/*
> > + * AGFL blocks are accounted differently in the reserve pools and are not
> > + * inserted into the busy extent list.
> > + */
> > +STATIC int
> > +xfs_agfl_free_finish_item(
> > +	struct xfs_trans		*tp,
> > +	struct xfs_defer_ops		*dop,
> > +	struct list_head		*item,
> > +	void				*done_item,
> > +	void				**state)
> > +{
> 
> How does this function get called by log recovery when processing
> the EFI as there is no flag in the EFI that says this was a AGFL
> block?
> 
> That said, I haven't traced through whether this matters or not,
> but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> and that avoids accounting the free to the superblock counters
> because the block is already accounted as free space....

Just had another thought on this - this is going to cause a large
number of alloc/free transactions to roll the transaction at least
one more time. That means the logcount in the alloc/free transaction
reservation should be bumped by one. i.e. so that the common case
doesn't need to block and re-reserve grant space in the log to
complete the transaction because it has rolled more times than
the reservation log count accounts for.

Cheers,

Dave.
Brian Foster Dec. 8, 2017, 2:16 p.m. UTC | #3
On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> > The AGFL fixup code executes before every block allocation/free and
> > rectifies the AGFL based on the current, dynamic allocation
> > requirements of the fs. The AGFL must hold a minimum number of
> > blocks to satisfy a worst case split of the free space btrees caused
> > by the impending allocation operation. The AGFL is also updated to
> > maintain the implicit requirement for a minimum number of free slots
> > to satisfy a worst case join of the free space btrees.
> > 
> > Since the AGFL caches individual blocks, AGFL reduction typically
> > involves multiple, single block frees. We've had reports of
> > transaction overrun problems during certain workloads that boil down
> > to AGFL reduction freeing multiple blocks and consuming more space
> > in the log than was reserved for the transaction.
> > 
> > Since the objective of freeing AGFL blocks is to ensure free AGFL
> > free slots are available for the upcoming allocation, one way to
> > address this problem is to release surplus blocks from the AGFL
> > immediately but defer the free of those blocks (similar to how
> > file-mapped blocks are unmapped from the file in one transaction and
> > freed via a deferred operation) until the transaction is rolled.
> > This turns AGFL reduction into an operation with predictable log
> > reservation consumption.
> > 
> > Add the capability to defer AGFL block frees when a deferred ops
> > list is handed to the AGFL fixup code. Deferring AGFL frees is a
> > conditional behavior based on whether the caller has populated the
> > new dfops field of the xfs_alloc_arg structure. A bit of
> > customization is required to handle deferred completion processing
> > because AGFL blocks are accounted against a separate reservation
> > pool and AGFL are not inserted into the extent busy list when freed
> > (they are inserted when used and released back to the AGFL). Reuse
> > the majority of the existing deferred extent free infrastructure and
> > customize it appropriately to handle AGFL blocks.
> 
> Ok, so it uses the EFI/EFD to make sure that the block freeing is
> logged and replayed. So my question is:
> 
> > +/*
> > + * AGFL blocks are accounted differently in the reserve pools and are not
> > + * inserted into the busy extent list.
> > + */
> > +STATIC int
> > +xfs_agfl_free_finish_item(
> > +	struct xfs_trans		*tp,
> > +	struct xfs_defer_ops		*dop,
> > +	struct list_head		*item,
> > +	void				*done_item,
> > +	void				**state)
> > +{
> 
> How does this function get called by log recovery when processing
> the EFI as there is no flag in the EFI that says this was a AGFL
> block?
> 

It doesn't...

> That said, I haven't traced through whether this matters or not,
> but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> and that avoids accounting the free to the superblock counters
> because the block is already accounted as free space....
> 

I don't think it does matter. I actually tested log recovery precisely
for this question, to see whether the traditional EFI recovery path
would disrupt accounting or anything and I didn't reproduce any problems
(well, except for that rmap record cleanup failure thing).

However, I do still need to trace through and understand why that is, to
know for sure that there aren't any problems lurking here (and if not, I
should probably document it), but I suspect the reason is that the
differences between how agfl and regular blocks are handled here only
affect in-core state of the AG reservation pools. These are all
reinitialized from zero on a subsequent mount based on the on-disk state
(... but good point, and I will try to confirm that before posting a
non-RFC variant).

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Brian Foster Dec. 8, 2017, 2:17 p.m. UTC | #4
On Fri, Dec 08, 2017 at 09:54:59AM +1100, Dave Chinner wrote:
> On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> > > The AGFL fixup code executes before every block allocation/free and
> > > rectifies the AGFL based on the current, dynamic allocation
> > > requirements of the fs. The AGFL must hold a minimum number of
> > > blocks to satisfy a worst case split of the free space btrees caused
> > > by the impending allocation operation. The AGFL is also updated to
> > > maintain the implicit requirement for a minimum number of free slots
> > > to satisfy a worst case join of the free space btrees.
> > > 
> > > Since the AGFL caches individual blocks, AGFL reduction typically
> > > involves multiple, single block frees. We've had reports of
> > > transaction overrun problems during certain workloads that boil down
> > > to AGFL reduction freeing multiple blocks and consuming more space
> > > in the log than was reserved for the transaction.
> > > 
> > > Since the objective of freeing AGFL blocks is to ensure free AGFL
> > > free slots are available for the upcoming allocation, one way to
> > > address this problem is to release surplus blocks from the AGFL
> > > immediately but defer the free of those blocks (similar to how
> > > file-mapped blocks are unmapped from the file in one transaction and
> > > freed via a deferred operation) until the transaction is rolled.
> > > This turns AGFL reduction into an operation with predictable log
> > > reservation consumption.
> > > 
> > > Add the capability to defer AGFL block frees when a deferred ops
> > > list is handed to the AGFL fixup code. Deferring AGFL frees is a
> > > conditional behavior based on whether the caller has populated the
> > > new dfops field of the xfs_alloc_arg structure. A bit of
> > > customization is required to handle deferred completion processing
> > > because AGFL blocks are accounted against a separate reservation
> > > pool and AGFL are not inserted into the extent busy list when freed
> > > (they are inserted when used and released back to the AGFL). Reuse
> > > the majority of the existing deferred extent free infrastructure and
> > > customize it appropriately to handle AGFL blocks.
> > 
> > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > logged and replayed. So my question is:
> > 
> > > +/*
> > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > + * inserted into the busy extent list.
> > > + */
> > > +STATIC int
> > > +xfs_agfl_free_finish_item(
> > > +	struct xfs_trans		*tp,
> > > +	struct xfs_defer_ops		*dop,
> > > +	struct list_head		*item,
> > > +	void				*done_item,
> > > +	void				**state)
> > > +{
> > 
> > How does this function get called by log recovery when processing
> > the EFI as there is no flag in the EFI that says this was a AGFL
> > block?
> > 
> > That said, I haven't traced through whether this matters or not,
> > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > and that avoids accounting the free to the superblock counters
> > because the block is already accounted as free space....
> 
> Just had another thought on this - this is going to cause a large
> number of alloc/free transactions to roll the transaction at least
> one more time. That means the logcount in the alloc/free transaction
> reservation should be bumped by one. i.e. so that the common case
> doesn't need to block and re-reserve grant space in the log to
> complete the transaction because it has rolled the more times than
> the reservation log count accounts for.
> 

Yeah, that is something else we need to consider. One thing that stands
out is that we don't seem to currently break down log count values into
operational units as we do for log reservation itself. We could just
bump them all, or at least whatever ones are used in contexts that are
now able to defer AGFL block frees. There aren't that many separate
values defined, but I wonder if something like:

#define XFS_BMAPFREE_LOG_COUNT 2	/* 1 deferred free + 1 AGFL free */
...
#define XFS_INACTIVE_LOG_COUNT (XFS_ALLOCFREE_LOG_COUNT + 1)
...

... would help make this more self-documenting. Hm?

I was also a little concerned about increasing the size of the
transactions again since I believe that a bump in log count increases
the initial reservation requirement (so we don't end up blocking on the
roll, as you point out). But now that I take a quick look with the above
in mind, it's probably the appropriate thing to do so long as it can be
applied selectively/accurately. I'll look more into it..
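
For reference, a sketch of where such a bump would land in the transaction
reservation setup (fs/xfs/libxfs/xfs_trans_resv.c), assuming the existing
tr_logcount plumbing is simply reused. The "+ 1" and the particular
reservations shown are illustrative only; a real patch would audit each
transaction type that can now defer AGFL frees:

	resp->tr_itruncate.tr_logres   = xfs_calc_itruncate_reservation(mp);
	resp->tr_itruncate.tr_logcount = XFS_ITRUNCATE_LOG_COUNT + 1;	/* extra roll for deferred AGFL free */
	resp->tr_itruncate.tr_logflags = XFS_TRANS_PERM_LOG_RES;

	resp->tr_write.tr_logres       = xfs_calc_write_reservation(mp);
	resp->tr_write.tr_logcount     = XFS_WRITE_LOG_COUNT + 1;	/* ditto */
	resp->tr_write.tr_logflags     = XFS_TRANS_PERM_LOG_RES;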

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Brian Foster Jan. 8, 2018, 9:56 p.m. UTC | #5
cc Darrick

On Fri, Dec 08, 2017 at 09:16:30AM -0500, Brian Foster wrote:
> On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
...
> > 
> > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > logged and replayed. So my question is:
> > 
> > > +/*
> > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > + * inserted into the busy extent list.
> > > + */
> > > +STATIC int
> > > +xfs_agfl_free_finish_item(
> > > +	struct xfs_trans		*tp,
> > > +	struct xfs_defer_ops		*dop,
> > > +	struct list_head		*item,
> > > +	void				*done_item,
> > > +	void				**state)
> > > +{
> > 
> > How does this function get called by log recovery when processing
> > the EFI as there is no flag in the EFI that says this was a AGFL
> > block?
> > 
> 
> It doesn't...
> 
> > That said, I haven't traced through whether this matters or not,
> > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > and that avoids accounting the free to the superblock counters
> > because the block is already accounted as free space....
> > 
> 
> I don't think it does matter. I actually tested log recovery precisely
> for this question, to see whether the traditional EFI recovery path
> would disrupt accounting or anything and I didn't reproduce any problems
> (well, except for that rmap record cleanup failure thing).
> 
> However, I do still need to trace through and understand why that is, to
> know for sure that there aren't any problems lurking here (and if not, I
> should probably document it), but I suspect the reason is that the
> differences between how agfl and regular blocks are handled here only
> affect in-core state of the AG reservation pools. These are all
> reinitialized from zero on a subsequent mount based on the on-disk state
> (... but good point, and I will try to confirm that before posting a
> non-RFC variant).
> 

After catching back up with this and taking a closer look at the code, I
can confirm that generic EFI recovery works fine for deferred AGFL block
frees. What happens is essentially that the slot is freed and the block
free is deferred in a particular tx. If we crash before that tx commits,
then obviously nothing changes and we're fine. If we crash after that tx
commits, EFI recovery frees the block and the AGFL reserve pool
adjustment is irrelevant as the in-core res counters are initialized
from the current state of the fs after log recovery has completed (so
even if we knew this was an agfl block, attempting reservation
adjustments at recovery time would probably be wrong).

That aside, looking through the perag res code had me a bit curious
about why we reserve all agfl blocks in the first place. IIUC, the AGFL
reserve pool actually serves the rmapbt, since that (and that alone) is
what the mount time reservation is based on. AGFL blocks can be used for
other purposes, however, and the current runtime reservation is adjusted
based on all AGFL activity. Is there a reason this reserve pool does not
specifically target rmapbt allocations? Doesn't failing to do so allow some
percentage of the rmapbt reserved blocks to be consumed by other
structures (alloc btrees) until/unless the fs is remounted? I'm
wondering if specifically there's a risk of something like the
following:

- mount fs with some number N of AGFL reserved blocks based on current
  rmapbt state. Suppose the size of the rmapbt is R.
- A bunch of agfl blocks are used over time (U). Suppose 50% of those go
  to the rmapbt and the other 50% to alloc btrees and whatnot.
  ar_reserved is reduced by U, but R only increases by U/2.
- a bunch more unrelated physical allocations occur and consume all
  non-reserved space
- the fs unmounts/mounts and the perag code looks for the remaining U/2
  blocks to reserve for the rmapbt, but not all of those blocks are
  available because we depleted the reserved pool faster than the rmapbt
  grew.

Darrick, hm? FWIW, a quick test to allocate 100% of an AG to a file,
punch out every other block for a few minutes and then remount kind of
shows what I'm wondering about:

 mount-3215  [002] ...1  1642.401846: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 259564 flcount 6 resv 3193 ask 3194 len 3194
...
 <...>-28260 [000] ...1  1974.946866: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36317 flcount 9 resv 2936 ask 3194 len 1                                                                                            
 <...>-28428 [002] ...1  1976.371830: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36483 flcount 9 resv 2935 ask 3194 len 1                                                                                            
 <...>-28490 [002] ...1  1976.898147: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2934 ask 3194 len 1                                                                                            
 <...>-28491 [002] ...1  1976.907967: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2933 ask 3194 len 1                                                                                            
umount-28664 [002] ...1  1983.335444: xfs_ag_resv_free: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 2932 ask 3194 len 0                                                                                                   
 mount-28671 [000] ...1  1991.396640: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 3038 ask 3194 len 3194                    

We consume the res as we go, unmount with some held reservation value,
immediately remount and the associated reservation has jumped by 100
blocks or so. (Granted, whether this can manifest as a tangible
problem may be another story altogether.)

Brian

> Brian
> 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
Darrick J. Wong Jan. 9, 2018, 8:43 p.m. UTC | #6
On Mon, Jan 08, 2018 at 04:56:03PM -0500, Brian Foster wrote:
> cc Darrick
> 
> On Fri, Dec 08, 2017 at 09:16:30AM -0500, Brian Foster wrote:
> > On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> ...
> > > 
> > > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > > logged and replayed. So my question is:
> > > 
> > > > +/*
> > > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > > + * inserted into the busy extent list.
> > > > + */
> > > > +STATIC int
> > > > +xfs_agfl_free_finish_item(
> > > > +	struct xfs_trans		*tp,
> > > > +	struct xfs_defer_ops		*dop,
> > > > +	struct list_head		*item,
> > > > +	void				*done_item,
> > > > +	void				**state)
> > > > +{
> > > 
> > > How does this function get called by log recovery when processing
> > > the EFI as there is no flag in the EFI that says this was a AGFL
> > > block?
> > > 
> > 
> > It doesn't...
> > 
> > > That said, I haven't traced through whether this matters or not,
> > > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > > and that avoids accounting the free to the superblock counters
> > > because the block is already accounted as free space....
> > > 
> > 
> > I don't think it does matter. I actually tested log recovery precisely
> > for this question, to see whether the traditional EFI recovery path
> > would disrupt accounting or anything and I didn't reproduce any problems
> > (well, except for that rmap record cleanup failure thing).
> > 
> > However, I do still need to trace through and understand why that is, to
> > know for sure that there aren't any problems lurking here (and if not, I
> > should probably document it), but I suspect the reason is that the
> > differences between how agfl and regular blocks are handled here only
> > affect in-core state of the AG reservation pools. These are all
> > reinitialized from zero on a subsequent mount based on the on-disk state
> > (... but good point, and I will try to confirm that before posting a
> > non-RFC variant).
> > 
> 
> After catching back up with this and taking a closer look at the code, I
> can confirm that generic EFI recovery works fine for deferred AGFL block
> frees. What happens is essentially that the slot is freed and the block
> free is deferred in a particular tx. If we crash before that tx commits,
> then obviously nothing changes and we're fine. If we crash after that tx
> commits, EFI recovery frees the block and the AGFL reserve pool
> adjustment is irrelevant as the in-core res counters are initialized
> from the current state of the fs after log recovery has completed (so
> even if we knew this was an agfl block, attempting reservation
> adjustments at recovery time would probably be wrong).
> 
> That aside, looking through the perag res code had me a bit curious
> about why we reserve all agfl blocks in the first place. IIUC, the AGFL
> reserve pool actually serves the rmapbt, since that (and that alone) is
> what the mount time reservation is based on. AGFL blocks can be used for
> other purposes, however, and the current runtime reservation is adjusted
> based on all AGFL activity. Is there a reason this reserve pool does not
> specifically target rmapbt allocations? Doesn't not doing so allow some
> percentage of the rmapbt reserved blocks to be consumed by other
> structures (alloc btrees) until/unless the fs is remounted? I'm
> wondering if specifically there's a risk of something like the
> following:
> 
> - mount fs with some number N of AGFL reserved blocks based on current
>   rmapbt state. Suppose the size of the rmapbt is R.
> - A bunch of agfl blocks are used over time (U). Suppose 50% of those go
>   to the rmapbt and the other 50% to alloc btrees and whatnot.
>   ar_reserved is reduced by U, but R only increases by U/2.
> - a bunch more unrelated physical allocations occur and consume all
>   non-reserved space
> - the fs unmounts/mounts and the perag code looks for the remaining U/2
>   blocks to reserve for the rmapbt, but not all of those blocks are
>   available because we depleted the reserved pool faster than the rmapbt
>   grew.
> 
> Darrick, hm? FWIW, a quick test to allocate 100% of an AG to a file,
> punch out every other block for a few minutes and then remount kind of
> shows what I'm wondering about:
> 
>  mount-3215  [002] ...1  1642.401846: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 259564 flcount 6 resv 3193 ask 3194 len 3194
> ...
>  <...>-28260 [000] ...1  1974.946866: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36317 flcount 9 resv 2936 ask 3194 len 1
>  <...>-28428 [002] ...1  1976.371830: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36483 flcount 9 resv 2935 ask 3194 len 1
>  <...>-28490 [002] ...1  1976.898147: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2934 ask 3194 len 1
>  <...>-28491 [002] ...1  1976.907967: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2933 ask 3194 len 1
> umount-28664 [002] ...1  1983.335444: xfs_ag_resv_free: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 2932 ask 3194 len 0
>  mount-28671 [000] ...1  1991.396640: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 3038 ask 3194 len 3194

Yep, that's a bug.  Our current agfl doesn't have much of a way to
signal to the perag reservation code that it's using blocks for the
rmapbt vs. any other use, so we use up the reservation and then pull
from the non-reserved free space after that, with the result that
sometimes we can blow the assert in xfs_ag_resv_init.  I've not been
able to come up with a convincing way to fix this problem, largely
because of:

> We consume the res as we go, unmount with some held reservation value,
> immediately remount and the associated reservation has jumped by 100
> blocks or so. (Granted, whether this can manifest into a tangible
> problem may be another story altogether.).

It's theoretically possible -- even with the perag reservation
functioning perfectly we still can run the ag totally out of blocks if
the rmap expands beyond our assumed max rmapbt size of max(1 record per
block, 1% of the ag).

Say you have agblocks = 11000, allocate 50% of the AG to a file, then
reflink that extent into 10000 other files.  Then dirty every other
block in each of the 10000 reflink copies.  Even if all the dirtied
blocks end up in some other AG, we've still expanded the number of rmap
records from 10001 to 5000 * 10000 == 50,000,000, which is much bigger
than our original estimation.

Unfortunately the options here aren't good -- we can't reserve enough
blocks to cover the maximal rmapbt when reflink is enabled, so the best
we could do is fail write_begin with ENOSPC if the shared extent's AG is
critically low on reservation, but that kinda sucks because at that
point the CoW /should/ be moving blocks (and therefore mappings) away
from the full AG.

(We already cut off reflinking when the perag reservations are
critically low, but that's only because we assume the caller will fall
back to a physical copy and that the physical copy will land in some
other AG.)

--D

> 
> Brian
> 
> > Brian
> > 
> > > Cheers,
> > > 
> > > Dave.
> > > -- 
> > > Dave Chinner
> > > david@fromorbit.com
Brian Foster Jan. 10, 2018, 12:58 p.m. UTC | #7
On Tue, Jan 09, 2018 at 12:43:15PM -0800, Darrick J. Wong wrote:
> On Mon, Jan 08, 2018 at 04:56:03PM -0500, Brian Foster wrote:
> > cc Darrick
> > 
> > On Fri, Dec 08, 2017 at 09:16:30AM -0500, Brian Foster wrote:
> > > On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > > > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> > ...
> > > > 
> > > > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > > > logged and replayed. So my question is:
> > > > 
> > > > > +/*
> > > > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > > > + * inserted into the busy extent list.
> > > > > + */
> > > > > +STATIC int
> > > > > +xfs_agfl_free_finish_item(
> > > > > +	struct xfs_trans		*tp,
> > > > > +	struct xfs_defer_ops		*dop,
> > > > > +	struct list_head		*item,
> > > > > +	void				*done_item,
> > > > > +	void				**state)
> > > > > +{
> > > > 
> > > > How does this function get called by log recovery when processing
> > > > the EFI as there is no flag in the EFI that says this was a AGFL
> > > > block?
> > > > 
> > > 
> > > It doesn't...
> > > 
> > > > That said, I haven't traced through whether this matters or not,
> > > > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > > > and that avoids accounting the free to the superblock counters
> > > > because the block is already accounted as free space....
> > > > 
> > > 
> > > I don't think it does matter. I actually tested log recovery precisely
> > > for this question, to see whether the traditional EFI recovery path
> > > would disrupt accounting or anything and I didn't reproduce any problems
> > > (well, except for that rmap record cleanup failure thing).
> > > 
> > > However, I do still need to trace through and understand why that is, to
> > > know for sure that there aren't any problems lurking here (and if not, I
> > > should probably document it), but I suspect the reason is that the
> > > differences between how agfl and regular blocks are handled here only
> > > affect in-core state of the AG reservation pools. These are all
> > > reinitialized from zero on a subsequent mount based on the on-disk state
> > > (... but good point, and I will try to confirm that before posting a
> > > non-RFC variant).
> > > 
> > 
> > After catching back up with this and taking a closer look at the code, I
> > can confirm that generic EFI recovery works fine for deferred AGFL block
> > frees. What happens is essentially that the slot is freed and the block
> > free is deferred in a particular tx. If we crash before that tx commits,
> > then obviously nothing changes and we're fine. If we crash after that tx
> > commits, EFI recovery frees the block and the AGFL reserve pool
> > adjustment is irrelevant as the in-core res counters are initialized
> > from the current state of the fs after log recovery has completed (so
> > even if we knew this was an agfl block, attempting reservation
> > adjustments at recovery time would probably be wrong).
> > 
> > That aside, looking through the perag res code had me a bit curious
> > about why we reserve all agfl blocks in the first place. IIUC, the AGFL
> > reserve pool actually serves the rmapbt, since that (and that alone) is
> > what the mount time reservation is based on. AGFL blocks can be used for
> > other purposes, however, and the current runtime reservation is adjusted
> > based on all AGFL activity. Is there a reason this reserve pool does not
> > specifically target rmapbt allocations? Doesn't not doing so allow some
> > percentage of the rmapbt reserved blocks to be consumed by other
> > structures (alloc btrees) until/unless the fs is remounted? I'm
> > wondering if specifically there's a risk of something like the
> > following:
> > 
> > - mount fs with some number N of AGFL reserved blocks based on current
> >   rmapbt state. Suppose the size of the rmapbt is R.
> > - A bunch of agfl blocks are used over time (U). Suppose 50% of those go
> >   to the rmapbt and the other 50% to alloc btrees and whatnot.
> >   ar_reserved is reduced by U, but R only increases by U/2.
> > - a bunch more unrelated physical allocations occur and consume all
> >   non-reserved space
> > - the fs unmounts/mounts and the perag code looks for the remaining U/2
> >   blocks to reserve for the rmapbt, but not all of those blocks are
> >   available because we depleted the reserved pool faster than the rmapbt
> >   grew.
> > 
> > Darrick, hm? FWIW, a quick test to allocate 100% of an AG to a file,
> > punch out every other block for a few minutes and then remount kind of
> > shows what I'm wondering about:
> > 
> >  mount-3215  [002] ...1  1642.401846: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 259564 flcount 6 resv 3193 ask 3194 len 3194
> > ...
> >  <...>-28260 [000] ...1  1974.946866: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36317 flcount 9 resv 2936 ask 3194 len 1
> >  <...>-28428 [002] ...1  1976.371830: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36483 flcount 9 resv 2935 ask 3194 len 1
> >  <...>-28490 [002] ...1  1976.898147: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2934 ask 3194 len 1
> >  <...>-28491 [002] ...1  1976.907967: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2933 ask 3194 len 1
> > umount-28664 [002] ...1  1983.335444: xfs_ag_resv_free: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 2932 ask 3194 len 0
> >  mount-28671 [000] ...1  1991.396640: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 3038 ask 3194 len 3194
> 
> Yep, that's a bug.  Our current agfl doesn't have much of a way to
> signal to the perag reservation code that it's using blocks for the
> rmapbt vs. any other use, so we use up the reservation and then pull
> from the non-reserved free space after that, with the result that
> sometimes we can blow the assert in xfs_ag_resv_init.  I've not been
> able to come up with a convincing way to fix this problem, largely
> because of:
> 

Ok..

> > We consume the res as we go, unmount with some held reservation value,
> > immediately remount and the associated reservation has jumped by 100
> > blocks or so. (Granted, whether this can manifest into a tangible
> > problem may be another story altogether.).
> 
> It's theoretically possible -- even with the perag reservation
> functioning perfectly we still can run the ag totally out of blocks if
> the rmap expands beyond our assumed max rmapbt size of max(1 record per
> block, 1% of the ag).
> 
> Say you have agblocks = 11000, allocate 50% of the AG to a file, then
> reflink that extent into 10000 other files.  Then dirty every other
> block in each of the 10000 reflink copies.  Even if all the dirtied
> blocks end up in some other AG, we've still expanded the number of rmap
> records from 10001 to 5000 * 10000 == 50,000,000, which is much bigger
> than our original estimation.
> 

Yeah, though I think the effectiveness of the maximum sized rmapbt
estimation is a separate issue from the accuracy of the reservation
accounting system. The latter makes me worry that certain, sustained
workloads could amplify the divergence between runtime accounting and
reality such that the reservation system is ineffective even in cases
that don't test the worst case estimation. That may be less likely in
the current situation where the AGFL consumers are semi-related, but
it's hard to characterize.

> Unfortunately the options here aren't good -- we can't reserve enough
> blocks to cover the maximal rmapbt when reflink is enabled, so the best
> we could do is fail write_begin with ENOSPC if the shared extent's AG is
> critically low on reservation, but that kinda sucks because at that
> point the CoW /should/ be moving blocks (and therefore mappings) away
> from the full AG.
> 

Two potential options came to mind when reading the code:

- Factor all possible AGFL consumers into the perag AGFL reservation
  calculation to address the impedance mismatch between the calculation
  and runtime accounting.
- Refactor the RESV_AGFL reservation into an RESV_RMAPBT reservation
  that sits on top of the AGFL rather than behind it.

... neither of which is fully thought out from a sanity perspective.

The former strikes me as significantly more complicated as it would
require the reservation calculation (and estimation) to account for all
possible consumers of the AGFL. That comes along with all the expected
future maintenance cost for future AGFL consumers. Add to that the fact
that the reservation is really only needed for the rmapbt at this point
in time and this seems like an undesirable option.

The latter more simply moves the accounting to where rmapbt blocks are
consumed/freed, irrespective of where the blocks are allocated from. I
_think_ that should allow for accurate reservation accounting without
losing any major capability (i.e., the reservation value is still an
estimation in the end), but I could be missing something in the bowels
of the res. code. I do wonder a bit about the reservation
over-protecting blocks in the free space btrees based on the current
AGFL population, but it's not immediately clear to me if or how much
that matters (and perhaps is something that could be addressed,
anyways). Thoughts?
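
To make the second option a bit more concrete, a rough sketch follows. The
names XFS_AG_RESV_RMAPBT and xfs_ag_resv_rmapbt_alloc() are hypothetical; the
callback shape follows the existing xfs_rmapbt_alloc_block(), with tracing,
busy-extent reuse and agbtree-delta housekeeping omitted for brevity:

	/*
	 * Sketch: charge the reservation where the rmapbt actually consumes a
	 * block, rather than in xfs_alloc_fix_freelist(). The rmapbt still
	 * pulls its blocks from the AGFL, but only rmapbt allocations would
	 * drain the (renamed) RESV_RMAPBT pool.
	 */
	STATIC int
	xfs_rmapbt_alloc_block(
		struct xfs_btree_cur	*cur,
		union xfs_btree_ptr	*start,
		union xfs_btree_ptr	*new,
		int			*stat)
	{
		struct xfs_perag	*pag;
		xfs_agblock_t		bno;
		int			error;

		/* rmapbt blocks still come off the AGFL as before */
		error = xfs_alloc_get_freelist(cur->bc_tp,
					       cur->bc_private.a.agbp, &bno, 1);
		if (error)
			return error;
		if (bno == NULLAGBLOCK) {
			*stat = 0;
			return 0;
		}

		/* ...but the reservation is accounted here, for rmapbt use only */
		pag = xfs_perag_get(cur->bc_mp, cur->bc_private.a.agno);
		xfs_ag_resv_rmapbt_alloc(pag);		/* hypothetical helper */
		xfs_perag_put(pag);

		new->s = cpu_to_be32(bno);
		*stat = 1;
		return 0;
	}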

Brian

> (We already cut off reflinking when the perag reservations are
> critically low, but that's only because we assume the caller will fall
> back to a physical copy and that the physical copy will land in some
> other AG.)
> 
> --D
> 
> > 
> > Brian
> > 
> > > Brian
> > > 
> > > > Cheers,
> > > > 
> > > > Dave.
> > > > -- 
> > > > Dave Chinner
> > > > david@fromorbit.com
Darrick J. Wong Jan. 10, 2018, 7:08 p.m. UTC | #8
On Wed, Jan 10, 2018 at 07:58:59AM -0500, Brian Foster wrote:
> On Tue, Jan 09, 2018 at 12:43:15PM -0800, Darrick J. Wong wrote:
> > On Mon, Jan 08, 2018 at 04:56:03PM -0500, Brian Foster wrote:
> > > cc Darrick
> > > 
> > > On Fri, Dec 08, 2017 at 09:16:30AM -0500, Brian Foster wrote:
> > > > On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > > > > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> > > ...
> > > > > 
> > > > > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > > > > logged and replayed. So my question is:
> > > > > 
> > > > > > +/*
> > > > > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > > > > + * inserted into the busy extent list.
> > > > > > + */
> > > > > > +STATIC int
> > > > > > +xfs_agfl_free_finish_item(
> > > > > > +	struct xfs_trans		*tp,
> > > > > > +	struct xfs_defer_ops		*dop,
> > > > > > +	struct list_head		*item,
> > > > > > +	void				*done_item,
> > > > > > +	void				**state)
> > > > > > +{
> > > > > 
> > > > > How does this function get called by log recovery when processing
> > > > > the EFI as there is no flag in the EFI that says this was a AGFL
> > > > > block?
> > > > > 
> > > > 
> > > > It doesn't...
> > > > 
> > > > > That said, I haven't traced through whether this matters or not,
> > > > > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > > > > and that avoids accounting the free to the superblock counters
> > > > > because the block is already accounted as free space....
> > > > > 
> > > > 
> > > > I don't think it does matter. I actually tested log recovery precisely
> > > > for this question, to see whether the traditional EFI recovery path
> > > > would disrupt accounting or anything and I didn't reproduce any problems
> > > > (well, except for that rmap record cleanup failure thing).
> > > > 
> > > > However, I do still need to trace through and understand why that is, to
> > > > know for sure that there aren't any problems lurking here (and if not, I
> > > > should probably document it), but I suspect the reason is that the
> > > > differences between how agfl and regular blocks are handled here only
> > > > affect in-core state of the AG reservation pools. These are all
> > > > reinitialized from zero on a subsequent mount based on the on-disk state
> > > > (... but good point, and I will try to confirm that before posting a
> > > > non-RFC variant).
> > > > 
> > > 
> > > After catching back up with this and taking a closer look at the code, I
> > > can confirm that generic EFI recovery works fine for deferred AGFL block
> > > frees. What happens is essentially that the slot is freed and the block
> > > free is deferred in a particular tx. If we crash before that tx commits,
> > > then obviously nothing changes and we're fine. If we crash after that tx
> > > commits, EFI recovery frees the block and the AGFL reserve pool
> > > adjustment is irrelevant as the in-core res counters are initialized
> > > from the current state of the fs after log recovery has completed (so
> > > even if we knew this was an agfl block, attempting reservation
> > > adjustments at recovery time would probably be wrong).
> > > 
> > > That aside, looking through the perag res code had me a bit curious
> > > about why we reserve all agfl blocks in the first place. IIUC, the AGFL
> > > reserve pool actually serves the rmapbt, since that (and that alone) is
> > > what the mount time reservation is based on. AGFL blocks can be used for
> > > other purposes, however, and the current runtime reservation is adjusted
> > > based on all AGFL activity. Is there a reason this reserve pool does not
> > > specifically target rmapbt allocations? Doesn't not doing so allow some
> > > percentage of the rmapbt reserved blocks to be consumed by other
> > > structures (alloc btrees) until/unless the fs is remounted? I'm
> > > wondering if specifically there's a risk of something like the
> > > following:
> > > 
> > > - mount fs with some number N of AGFL reserved blocks based on current
> > >   rmapbt state. Suppose the size of the rmapbt is R.
> > > - A bunch of agfl blocks are used over time (U). Suppose 50% of those go
> > >   to the rmapbt and the other 50% to alloc btrees and whatnot.
> > >   ar_reserved is reduced by U, but R only increases by U/2.
> > > - a bunch more unrelated physical allocations occur and consume all
> > >   non-reserved space
> > > - the fs unmounts/mounts and the perag code looks for the remaining U/2
> > >   blocks to reserve for the rmapbt, but not all of those blocks are
> > >   available because we depleted the reserved pool faster than the rmapbt
> > >   grew.
> > > 
> > > Darrick, hm? FWIW, a quick test to allocate 100% of an AG to a file,
> > > punch out every other block for a few minutes and then remount kind of
> > > shows what I'm wondering about:
> > > 
> > >  mount-3215  [002] ...1  1642.401846: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 259564 flcount 6 resv 3193 ask 3194 len 3194
> > > ...
> > >  <...>-28260 [000] ...1  1974.946866: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36317 flcount 9 resv 2936 ask 3194 len 1
> > >  <...>-28428 [002] ...1  1976.371830: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36483 flcount 9 resv 2935 ask 3194 len 1
> > >  <...>-28490 [002] ...1  1976.898147: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2934 ask 3194 len 1
> > >  <...>-28491 [002] ...1  1976.907967: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2933 ask 3194 len 1
> > > umount-28664 [002] ...1  1983.335444: xfs_ag_resv_free: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 2932 ask 3194 len 0
> > >  mount-28671 [000] ...1  1991.396640: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 3038 ask 3194 len 3194
> > 
> > Yep, that's a bug.  Our current agfl doesn't have much of a way to
> > signal to the perag reservation code that it's using blocks for the
> > rmapbt vs. any other use, so we use up the reservation and then pull
> > from the non-reserved free space after that, with the result that
> > sometimes we can blow the assert in xfs_ag_resv_init.  I've not been
> > able to come up with a convincing way to fix this problem, largely
> > because of:
> > 
> 
> Ok..
> 
> > > We consume the res as we go, unmount with some held reservation value,
> > > immediately remount and the associated reservation has jumped by 100
> > > blocks or so. (Granted, whether this can manifest into a tangible
> > > problem may be another story altogether.).
> > 
> > It's theoretically possible -- even with the perag reservation
> > functioning perfectly we still can run the ag totally out of blocks if
> > the rmap expands beyond our assumed max rmapbt size of max(1 record per
> > block, 1% of the ag).
> > 
> > Say you have agblocks = 11000, allocate 50% of the AG to a file, then
> > reflink that extent into 10000 other files.  Then dirty every other
> > block in each of the 10000 reflink copies.  Even if all the dirtied
> > blocks end up in some other AG, we've still expanded the number of rmap
> > records from 10001 to 5000 * 10000 == 50,000,000, which is much bigger
> > than our original estimation.
> > 
> 
> Yeah, though I think the effectiveness of the maximum sized rmapbt
> estimation is a separate issue from the accuracy of the reservation
> accounting system. The latter makes me worry that certain, sustained
> workloads could amplify the divergence between runtime accounting and
> reality such that the reservation system is ineffective even in cases
> that don't test the worst case estimation. That may be less likely in
> the current situation where the AGFL consumers are semi-related, but
> it's hard to characterize.

Agreed, on both points.

> > Unfortunately the options here aren't good -- we can't reserve enough
> > blocks to cover the maximal rmapbt when reflink is enabled, so the best
> > we could do is fail write_begin with ENOSPC if the shared extent's AG is
> > critically low on reservation, but that kinda sucks because at that
> > point the CoW /should/ be moving blocks (and therefore mappings) away
> > from the full AG.
> > 
> 
> Two potential options came to mind when reading the code:
> 
> - Factor all possible AGFL consumers into the perag AGFL reservation
>   calculation to address the impedence mismatch between the calculation
>   and runtime accounting.
> - Refactor the RESV_AGFL reservation into an RESV_RMAPBT reservation
>   that sits on top of the AGFL rather than behind it.
> 
> ... neither of which is fully thought out from a sanity perspective.
> 
> The former strikes me as significantly more complicated as it would
> require the reservation calculation (and estimation) to account for all
> possible consumers of the AGFL. That comes along with all the expected
> future maintenance cost for future AGFL consumers. Add to that the fact
> that the reservation is really only needed for the rmapbt at this point
> in time and this seems like an undesirable option.

<nod>  I also don't know that we're not going to someday add another
consumer of agfl blocks that also needs its own perag reservation, in
which case we'd have to have more accounting.  Or...

> The latter more simply moves the accounting to where rmapbt blocks are
> consumed/freed, irrespective of where the blocks are allocated from. I
> _think_ that should allow for accurate reservation accounting without
> losing any major capability (i.e., the reservation value is still an
> estimation in the end), but I could be missing something in the bowels
> of the res. code. I do wonder a bit about the reservation
> over-protecting blocks in the free space btrees based on the current
> AGFL population, but it's not immediately clear to me if or how much
> that matters (and perhaps is something that could be addressed,
> anyways). Thoughts?

Hmmm.  Right now the perag code reserves some number of blocks for
future use by the rmapbt.  The agfl (via fix_freelist) refreshes from
the perag pool; and the bnobt/rmapbt get blocks from the agfl.  IOWs, we
use the agfl reservation (sloppily) as a backstop for the agfl, and
RESV_AGFL gets charged for any agfl allocation, even if it ultimately
goes to something that isn't the rmapbt, even though the rmapbt code put
that reservation there for its own use.

You propose moving the xfs_ag_resv_{alloc,free}_extent accounting bits
to xfs_rmapbt_{alloc,free}_block so that only rmapbt allocations can
drain from RESV_AGFL (or put back to the pool).  This fixes the
accounting problem, since only those who set up RESV_AGFL reservations
actually get to account from them, i.e. we no longer lose RESV_AGFL
blocks to the bnobt.  Therefore, fix_freelist is no longer responsible
for passing RESV_AGFL into the extent allocation/free routines ... but
does the rmapbt directly allocate its own blocks now?  (I don't think
this is possible, because we'd then have to create an OWN_FS rmap record
for the new rmapbt blocks.)  Or does it still draw from the agfl, in
which case we have to figure out how to get reserved blocks to the
rmapbt if the agfl can't supply any blocks?

--D

> Brian
> 
> > (We already cut off reflinking when the perag reservations are
> > critically low, but that's only because we assume the caller will fall
> > back to a physical copy and that the physical copy will land in some
> > other AG.)
> > 
> > --D
> > 
> > > 
> > > Brian
> > > 
> > > > Brian
> > > > 
> > > > > Cheers,
> > > > > 
> > > > > Dave.
> > > > > -- 
> > > > > Dave Chinner
> > > > > david@fromorbit.com

Patch

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 8a2235ebca08..ab636181471c 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -39,6 +39,9 @@ 
 #include "xfs_buf_item.h"
 #include "xfs_log.h"
 #include "xfs_ag_resv.h"
+#include "xfs_bmap.h"
+
+extern kmem_zone_t	*xfs_bmap_free_item_zone;
 
 struct workqueue_struct *xfs_alloc_wq;
 
@@ -2065,6 +2068,40 @@  xfs_free_agfl_block(
 }
 
 /*
+ * Defer an AGFL block free. This is effectively equivalent to
+ * xfs_bmap_add_free() with some special handling particular to AGFL blocks.
+ *
+ * Deferring AGFL frees helps prevent log reservation overruns due to too many
+ * allocation operations in a transaction. AGFL frees are prone to this problem
+ * because for one they are always freed one at a time. Further, an immediate
+ * AGFL block free can cause a btree join and require another block free before
+ * the real allocation can proceed. Deferring the free disconnects freeing up
+ * the AGFL slot from freeing the block.
+ */
+STATIC void
+xfs_defer_agfl_block(
+	struct xfs_mount		*mp,
+	struct xfs_defer_ops		*dfops,
+	xfs_agnumber_t			agno,
+	xfs_fsblock_t			agbno,
+	struct xfs_owner_info		*oinfo)
+{
+	struct xfs_extent_free_item	*new;		/* new element */
+
+	ASSERT(xfs_bmap_free_item_zone != NULL);
+	ASSERT(oinfo != NULL);
+
+	new = kmem_zone_alloc(xfs_bmap_free_item_zone, KM_SLEEP);
+	new->xefi_startblock = XFS_AGB_TO_FSB(mp, agno, agbno);
+	new->xefi_blockcount = 1;
+	new->xefi_oinfo = *oinfo;
+
+	trace_xfs_agfl_free_defer(mp, agno, 0, agbno, 1);
+
+	xfs_defer_add(dfops, XFS_DEFER_OPS_TYPE_AGFL_FREE, &new->xefi_list);
+}
+
+/*
  * Decide whether to use this allocation group for this allocation.
  * If so, fix up the btree freelist's size.
  */
@@ -2164,10 +2201,16 @@  xfs_alloc_fix_freelist(
 		if (error)
 			goto out_agbp_relse;
 
-		error = xfs_free_agfl_block(tp, args->agno, bno, agbp,
-					    &targs.oinfo);
-		if (error)
-			goto out_agbp_relse;
+		/* defer agfl frees if dfops is provided */
+		if (args->dfops) {
+			xfs_defer_agfl_block(mp, args->dfops, args->agno, bno,
+						     &targs.oinfo);
+		} else {
+			error = xfs_free_agfl_block(tp, args->agno, bno, agbp,
+						    &targs.oinfo);
+			if (error)
+				goto out_agbp_relse;
+		}
 	}
 
 	targs.tp = tp;
diff --git a/fs/xfs/libxfs/xfs_alloc.h b/fs/xfs/libxfs/xfs_alloc.h
index d3a150180b1d..559568806265 100644
--- a/fs/xfs/libxfs/xfs_alloc.h
+++ b/fs/xfs/libxfs/xfs_alloc.h
@@ -61,6 +61,7 @@  typedef unsigned int xfs_alloctype_t;
  */
 typedef struct xfs_alloc_arg {
 	struct xfs_trans *tp;		/* transaction pointer */
+	struct xfs_defer_ops	*dfops;	/* deferred ops (for agfl) */
 	struct xfs_mount *mp;		/* file system mount point */
 	struct xfs_buf	*agbp;		/* buffer for a.g. freelist header */
 	struct xfs_perag *pag;		/* per-ag struct for this agno */
diff --git a/fs/xfs/libxfs/xfs_defer.h b/fs/xfs/libxfs/xfs_defer.h
index d4f046dd44bd..29c6b550f49b 100644
--- a/fs/xfs/libxfs/xfs_defer.h
+++ b/fs/xfs/libxfs/xfs_defer.h
@@ -55,6 +55,7 @@  enum xfs_defer_ops_type {
 	XFS_DEFER_OPS_TYPE_REFCOUNT,
 	XFS_DEFER_OPS_TYPE_RMAP,
 	XFS_DEFER_OPS_TYPE_FREE,
+	XFS_DEFER_OPS_TYPE_AGFL_FREE,
 	XFS_DEFER_OPS_TYPE_MAX,
 };
 
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index d718a10c2271..3c5d9f8cbb9d 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -2426,6 +2426,8 @@  DEFINE_DEFER_PENDING_EVENT(xfs_defer_pending_abort);
 #define DEFINE_BMAP_FREE_DEFERRED_EVENT DEFINE_PHYS_EXTENT_DEFERRED_EVENT
 DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_defer);
 DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_bmap_free_deferred);
+DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_agfl_free_defer);
+DEFINE_BMAP_FREE_DEFERRED_EVENT(xfs_agfl_free_deferred);
 
 /* rmap tracepoints */
 DECLARE_EVENT_CLASS(xfs_rmap_class,
diff --git a/fs/xfs/xfs_trans_extfree.c b/fs/xfs/xfs_trans_extfree.c
index ab438647592a..f5620796ae25 100644
--- a/fs/xfs/xfs_trans_extfree.c
+++ b/fs/xfs/xfs_trans_extfree.c
@@ -231,9 +231,79 @@  static const struct xfs_defer_op_type xfs_extent_free_defer_type = {
 	.cancel_item	= xfs_extent_free_cancel_item,
 };
 
+/*
+ * AGFL blocks are accounted differently in the reserve pools and are not
+ * inserted into the busy extent list.
+ */
+STATIC int
+xfs_agfl_free_finish_item(
+	struct xfs_trans		*tp,
+	struct xfs_defer_ops		*dop,
+	struct list_head		*item,
+	void				*done_item,
+	void				**state)
+{
+	struct xfs_mount		*mp = tp->t_mountp;
+	struct xfs_efd_log_item		*efdp = done_item;
+	struct xfs_extent_free_item	*free;
+	struct xfs_extent		*extp;
+	struct xfs_buf			*agbp;
+	int				error;
+	xfs_agnumber_t			agno;
+	xfs_agblock_t			agbno;
+	uint				next_extent;
+
+	free = container_of(item, struct xfs_extent_free_item, xefi_list);
+	ASSERT(free->xefi_blockcount == 1);
+	agno = XFS_FSB_TO_AGNO(mp, free->xefi_startblock);
+	agbno = XFS_FSB_TO_AGBNO(mp, free->xefi_startblock);
+
+	trace_xfs_agfl_free_deferred(mp, agno, 0, agbno, free->xefi_blockcount);
+
+	error = xfs_alloc_read_agf(mp, tp, agno, 0, &agbp);
+	if (!error)
+		error = xfs_free_agfl_block(tp, agno, agbno, agbp,
+					    &free->xefi_oinfo);
+
+	/*
+	 * Mark the transaction dirty, even on error. This ensures the
+	 * transaction is aborted, which:
+	 *
+	 * 1.) releases the EFI and frees the EFD
+	 * 2.) shuts down the filesystem
+	 */
+	tp->t_flags |= XFS_TRANS_DIRTY;
+	efdp->efd_item.li_desc->lid_flags |= XFS_LID_DIRTY;
+
+	next_extent = efdp->efd_next_extent;
+	ASSERT(next_extent < efdp->efd_format.efd_nextents);
+	extp = &(efdp->efd_format.efd_extents[next_extent]);
+	extp->ext_start = free->xefi_startblock;
+	extp->ext_len = free->xefi_blockcount;
+	efdp->efd_next_extent++;
+
+	kmem_free(free);
+	return error;
+}
+
+
+/* sub-type with special handling for AGFL deferred frees */
+static const struct xfs_defer_op_type xfs_agfl_free_defer_type = {
+	.type		= XFS_DEFER_OPS_TYPE_AGFL_FREE,
+	.max_items	= XFS_EFI_MAX_FAST_EXTENTS,
+	.diff_items	= xfs_extent_free_diff_items,
+	.create_intent	= xfs_extent_free_create_intent,
+	.abort_intent	= xfs_extent_free_abort_intent,
+	.log_item	= xfs_extent_free_log_item,
+	.create_done	= xfs_extent_free_create_done,
+	.finish_item	= xfs_agfl_free_finish_item,
+	.cancel_item	= xfs_extent_free_cancel_item,
+};
+
 /* Register the deferred op type. */
 void
 xfs_extent_free_init_defer_op(void)
 {
 	xfs_defer_init_op_type(&xfs_extent_free_defer_type);
+	xfs_defer_init_op_type(&xfs_agfl_free_defer_type);
 }