Message ID | 20190927171802.45582-2-bfoster@redhat.com (mailing list archive) |
---|---|
State | Accepted, archived |
Series | xfs: rework near mode extent allocation |
On Fri, Sep 27, 2019 at 01:17:52PM -0400, Brian Foster wrote:
> The upcoming allocation algorithm update searches multiple
> allocation btree cursors concurrently. As such, it requires an
> active state to track when a particular cursor should continue
> searching. While active state will be modified based on higher level
> logic, we can define base functionality based on the result of
> allocation btree lookups.
>
> Define an active flag in the private area of the btree cursor.
> Update it based on the result of lookups in the existing allocation
> btree helpers. Finally, provide a new helper to query the current
> state.

I vaguely remember having the discussion before, but why isn't the
active flag in the generic part of xfs_btree_cur and just tracked
for all types? That would seem both simpler and more useful in
the long run.
On Mon, Sep 30, 2019 at 01:11:38AM -0700, Christoph Hellwig wrote:
> On Fri, Sep 27, 2019 at 01:17:52PM -0400, Brian Foster wrote:
> > The upcoming allocation algorithm update searches multiple
> > allocation btree cursors concurrently. As such, it requires an
> > active state to track when a particular cursor should continue
> > searching. While active state will be modified based on higher level
> > logic, we can define base functionality based on the result of
> > allocation btree lookups.
> >
> > Define an active flag in the private area of the btree cursor.
> > Update it based on the result of lookups in the existing allocation
> > btree helpers. Finally, provide a new helper to query the current
> > state.
>
> I vaguely remember having the discussion before, but why isn't the
> active flag in the generic part of xfs_btree_cur and just tracked
> for all types? That would seem both simpler and more useful in
> the long run.

The active flag was in the allocation cursor originally and was moved to
the private portion of the btree cursor simply because IIRC that's where
you suggested to put it. FWIW, that seems like the appropriate place to
me because 1.) as of right now I don't have any other use case in mind
outside of allocbt cursors, 2.) flag state is similarly managed in the
allocation btree helpers, and 3.) the flag is not necessarily used as a
generic btree cursor state (it is more accurately a superset of the
generic btree state, where the allocation algorithm can also make higher
level changes). The latter bit is why it was originally put in the
allocation tracking structure, FWIW.

I've no fundamental objection to moving some or all of this to more
generic code down the road, but I'd prefer not to do that until there's
another user so the above can be rectified against an actual use case. I
can include the reasoning for the current placement in the commit log
description if that is useful.

Brian
On Fri, Sep 27, 2019 at 01:17:52PM -0400, Brian Foster wrote:
> The upcoming allocation algorithm update searches multiple
> allocation btree cursors concurrently. As such, it requires an
> active state to track when a particular cursor should continue
> searching. While active state will be modified based on higher level
> logic, we can define base functionality based on the result of
> allocation btree lookups.
>
> Define an active flag in the private area of the btree cursor.
> Update it based on the result of lookups in the existing allocation
> btree helpers. Finally, provide a new helper to query the current
> state.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>

Looks good to me,
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

> ---
>  fs/xfs/libxfs/xfs_alloc.c       | 24 +++++++++++++++++++++---
>  fs/xfs/libxfs/xfs_alloc_btree.c |  1 +
>  fs/xfs/libxfs/xfs_btree.h       |  3 +++
>  3 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 533b04aaf6f6..0ecc142c833b 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -146,9 +146,13 @@ xfs_alloc_lookup_eq(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
> +
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
> +	cur->bc_private.a.priv.abt.active = (*stat == 1);
> +	return error;
>  }
>
>  /*
> @@ -162,9 +166,13 @@ xfs_alloc_lookup_ge(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
> +
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
> +	cur->bc_private.a.priv.abt.active = (*stat == 1);
> +	return error;
>  }
>
>  /*
> @@ -178,9 +186,19 @@ xfs_alloc_lookup_le(
>  	xfs_extlen_t		len,	/* length of extent */
>  	int			*stat)	/* success/failure */
>  {
> +	int			error;
>  	cur->bc_rec.a.ar_startblock = bno;
>  	cur->bc_rec.a.ar_blockcount = len;
> -	return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> +	error = xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
> +	cur->bc_private.a.priv.abt.active = (*stat == 1);
> +	return error;
> +}
> +
> +static inline bool
> +xfs_alloc_cur_active(
> +	struct xfs_btree_cur	*cur)
> +{
> +	return cur && cur->bc_private.a.priv.abt.active;
>  }
>
>  /*
> diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
> index 2a94543857a1..279694d73e4e 100644
> --- a/fs/xfs/libxfs/xfs_alloc_btree.c
> +++ b/fs/xfs/libxfs/xfs_alloc_btree.c
> @@ -507,6 +507,7 @@ xfs_allocbt_init_cursor(
>
>  	cur->bc_private.a.agbp = agbp;
>  	cur->bc_private.a.agno = agno;
> +	cur->bc_private.a.priv.abt.active = false;
>
>  	if (xfs_sb_version_hascrc(&mp->m_sb))
>  		cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
> diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
> index ced1e65d1483..b4e3ec1d7ff9 100644
> --- a/fs/xfs/libxfs/xfs_btree.h
> +++ b/fs/xfs/libxfs/xfs_btree.h
> @@ -183,6 +183,9 @@ union xfs_btree_cur_private {
>  		unsigned long	nr_ops;		/* # record updates */
>  		int		shape_changes;	/* # of extent splits */
>  	} refc;
> +	struct {
> +		bool		active;		/* allocation cursor state */
> +	} abt;
>  };
>
>  /*
> --
> 2.20.1
>
On Mon, Sep 30, 2019 at 08:17:01AM -0400, Brian Foster wrote:
> The active flag was in the allocation cursor originally and was moved to
> the private portion of the btree cursor simply because IIRC that's where
> you suggested to put it.

My memory starts fading, but IIRC you had a separate containing
structure and I asked to move it into xfs_btree_cur itself.

> FWIW, that seems like the appropriate place to
> me because 1.) as of right now I don't have any other use case in mind
> outside of allocbt cursors, 2.) flag state is similarly managed in the
> allocation btree helpers, and 3.) the flag is not necessarily used as a
> generic btree cursor state (it is more accurately a superset of the
> generic btree state, where the allocation algorithm can also make higher
> level changes). The latter bit is why it was originally put in the
> allocation tracking structure, FWIW.

Ok, sounds fine with me for now. It just feels like doing it in the
generic code would actually be simpler than updating all the wrappers.
On Mon, Sep 30, 2019 at 11:36:34PM -0700, Christoph Hellwig wrote:
> On Mon, Sep 30, 2019 at 08:17:01AM -0400, Brian Foster wrote:
> > The active flag was in the allocation cursor originally and was moved to
> > the private portion of the btree cursor simply because IIRC that's where
> > you suggested to put it.
>
> My memory starts fading, but IIRC you had a separate containing
> structure and I asked to move it into xfs_btree_cur itself.

Right, that's the "allocation cursor" structure. I'd eventually like to
fold that into or with the existing allocation arg structure, but that's
something for after the other allocation modes are converted. Anyways,
this was all buried in a single patch as well, which makes it harder to
dig out. For reference, the original feedback was here:

https://marc.info/?l=linux-xfs&m=155750947225047&w=2

> > FWIW, that seems like the appropriate place to
> > me because 1.) as of right now I don't have any other use case in mind
> > outside of allocbt cursors, 2.) flag state is similarly managed in the
> > allocation btree helpers, and 3.) the flag is not necessarily used as a
> > generic btree cursor state (it is more accurately a superset of the
> > generic btree state, where the allocation algorithm can also make higher
> > level changes). The latter bit is why it was originally put in the
> > allocation tracking structure, FWIW.
>
> Ok, sounds fine with me for now. It just feels like doing it in the
> generic code would actually be simpler than updating all the wrappers.

Ok. It's not quite as simple due to the semantics described above. I'm
not totally convinced the generic "active" state would exactly match the
semantics used by the block allocation code. I'd hate to bury it in
there as is and have it end up being a landmine or wart if it is never
reused outside of extent allocation (or replaced with something cleaner,
ideally).

Brian
diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 533b04aaf6f6..0ecc142c833b 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -146,9 +146,13 @@ xfs_alloc_lookup_eq(
 	xfs_extlen_t		len,	/* length of extent */
 	int			*stat)	/* success/failure */
 {
+	int			error;
+
 	cur->bc_rec.a.ar_startblock = bno;
 	cur->bc_rec.a.ar_blockcount = len;
-	return xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
+	error = xfs_btree_lookup(cur, XFS_LOOKUP_EQ, stat);
+	cur->bc_private.a.priv.abt.active = (*stat == 1);
+	return error;
 }

 /*
@@ -162,9 +166,13 @@ xfs_alloc_lookup_ge(
 	xfs_extlen_t		len,	/* length of extent */
 	int			*stat)	/* success/failure */
 {
+	int			error;
+
 	cur->bc_rec.a.ar_startblock = bno;
 	cur->bc_rec.a.ar_blockcount = len;
-	return xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
+	error = xfs_btree_lookup(cur, XFS_LOOKUP_GE, stat);
+	cur->bc_private.a.priv.abt.active = (*stat == 1);
+	return error;
 }

 /*
@@ -178,9 +186,19 @@ xfs_alloc_lookup_le(
 	xfs_extlen_t		len,	/* length of extent */
 	int			*stat)	/* success/failure */
 {
+	int			error;
 	cur->bc_rec.a.ar_startblock = bno;
 	cur->bc_rec.a.ar_blockcount = len;
-	return xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
+	error = xfs_btree_lookup(cur, XFS_LOOKUP_LE, stat);
+	cur->bc_private.a.priv.abt.active = (*stat == 1);
+	return error;
+}
+
+static inline bool
+xfs_alloc_cur_active(
+	struct xfs_btree_cur	*cur)
+{
+	return cur && cur->bc_private.a.priv.abt.active;
 }

 /*
diff --git a/fs/xfs/libxfs/xfs_alloc_btree.c b/fs/xfs/libxfs/xfs_alloc_btree.c
index 2a94543857a1..279694d73e4e 100644
--- a/fs/xfs/libxfs/xfs_alloc_btree.c
+++ b/fs/xfs/libxfs/xfs_alloc_btree.c
@@ -507,6 +507,7 @@ xfs_allocbt_init_cursor(

 	cur->bc_private.a.agbp = agbp;
 	cur->bc_private.a.agno = agno;
+	cur->bc_private.a.priv.abt.active = false;

 	if (xfs_sb_version_hascrc(&mp->m_sb))
 		cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
index ced1e65d1483..b4e3ec1d7ff9 100644
--- a/fs/xfs/libxfs/xfs_btree.h
+++ b/fs/xfs/libxfs/xfs_btree.h
@@ -183,6 +183,9 @@ union xfs_btree_cur_private {
 		unsigned long	nr_ops;		/* # record updates */
 		int		shape_changes;	/* # of extent splits */
 	} refc;
+	struct {
+		bool		active;		/* allocation cursor state */
+	} abt;
 };

 /*
The upcoming allocation algorithm update searches multiple
allocation btree cursors concurrently. As such, it requires an
active state to track when a particular cursor should continue
searching. While active state will be modified based on higher level
logic, we can define base functionality based on the result of
allocation btree lookups.

Define an active flag in the private area of the btree cursor.
Update it based on the result of lookups in the existing allocation
btree helpers. Finally, provide a new helper to query the current
state.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c       | 24 +++++++++++++++++++++---
 fs/xfs/libxfs/xfs_alloc_btree.c |  1 +
 fs/xfs/libxfs/xfs_btree.h       |  3 +++
 3 files changed, 25 insertions(+), 3 deletions(-)