
[v3,3/4] xfs: randomly fall back to near mode lookup algorithm in debug mode

Message ID 20190815125538.49570-4-bfoster@redhat.com (mailing list archive)
State Deferred, archived
Series: xfs: rework near mode extent allocation

Commit Message

Brian Foster Aug. 15, 2019, 12:55 p.m. UTC
The last block scan is the dominant near mode allocation algorithm
for a newer filesystem with fewer, large free extents. Add debug
mode logic to randomly fall back to lookup mode to improve
regression test coverage.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/libxfs/xfs_alloc.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Darrick J. Wong Aug. 17, 2019, 1:37 a.m. UTC | #1
On Thu, Aug 15, 2019 at 08:55:37AM -0400, Brian Foster wrote:
> The last block scan is the dominant near mode allocation algorithm
> for a newer filesystem with fewer, large free extents. Add debug
> mode logic to randomly fall back to lookup mode to improve
> regression test coverage.

How about just using an errortag since the new sysfs interface lets
testcases / admins control the frequency?

--D

> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/xfs/libxfs/xfs_alloc.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> index 7753b61ba532..d550aa5597bf 100644
> --- a/fs/xfs/libxfs/xfs_alloc.c
> +++ b/fs/xfs/libxfs/xfs_alloc.c
> @@ -1266,6 +1266,7 @@ xfs_alloc_ag_vextent_near(
>  	int			i;
>  	xfs_agblock_t		bno;
>  	xfs_extlen_t		len;
> +	bool			lastblock;
>  
>  	/* handle unitialized agbno range so caller doesn't have to */
>  	if (!args->min_agbno && !args->max_agbno)
> @@ -1291,7 +1292,12 @@ xfs_alloc_ag_vextent_near(
>  	 * Otherwise run the optimized lookup search algorithm from the current
>  	 * location to the end of the tree.
>  	 */
> -	if (xfs_btree_islastblock(acur.cnt, 0)) {
> +	lastblock = xfs_btree_islastblock(acur.cnt, 0);
> +#ifdef DEBUG
> +	if (lastblock)
> +		lastblock = prandom_u32() & 1;
> +#endif
> +	if (lastblock) {
>  		int	j;
>  
>  		trace_xfs_alloc_cur_lastblock(args);
> -- 
> 2.20.1
>
Brian Foster Aug. 19, 2019, 6:19 p.m. UTC | #2
On Fri, Aug 16, 2019 at 06:37:03PM -0700, Darrick J. Wong wrote:
> On Thu, Aug 15, 2019 at 08:55:37AM -0400, Brian Foster wrote:
> > The last block scan is the dominant near mode allocation algorithm
> > for a newer filesystem with fewer, large free extents. Add debug
> > mode logic to randomly fall back to lookup mode to improve
> > regression test coverage.
> 
> How about just using an errortag since the new sysfs interface lets
> testcases / admins control the frequency?
> 

We could do that, but my understanding of the equivalent logic in the
current algorithm is that we want broad coverage of both near mode
sub-algorithms across the entire suite of tests. Hence we randomly drop
allocations into either algorithm when DEBUG mode is enabled. IIRC, we
do something similar with sparse inodes (i.e., randomly allocate sparse
inode chunks even when unnecessary) so the functionality isn't only
covered by targeted tests.

Do we have the ability to make error tags always-on as such? I thought
we had default frequency values for each tag, but that they still had
to be explicitly enabled. If that's the case, I'm sure we could come
up with such an on-by-default mechanism and perhaps switch over these
remaining DEBUG mode hacks, but that's a follow-up thing IMO..

Brian

> --D
> 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/xfs/libxfs/xfs_alloc.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
> > index 7753b61ba532..d550aa5597bf 100644
> > --- a/fs/xfs/libxfs/xfs_alloc.c
> > +++ b/fs/xfs/libxfs/xfs_alloc.c
> > @@ -1266,6 +1266,7 @@ xfs_alloc_ag_vextent_near(
> >  	int			i;
> >  	xfs_agblock_t		bno;
> >  	xfs_extlen_t		len;
> > +	bool			lastblock;
> >  
> >  	/* handle unitialized agbno range so caller doesn't have to */
> >  	if (!args->min_agbno && !args->max_agbno)
> > @@ -1291,7 +1292,12 @@ xfs_alloc_ag_vextent_near(
> >  	 * Otherwise run the optimized lookup search algorithm from the current
> >  	 * location to the end of the tree.
> >  	 */
> > -	if (xfs_btree_islastblock(acur.cnt, 0)) {
> > +	lastblock = xfs_btree_islastblock(acur.cnt, 0);
> > +#ifdef DEBUG
> > +	if (lastblock)
> > +		lastblock = prandom_u32() & 1;
> > +#endif
> > +	if (lastblock) {
> >  		int	j;
> >  
> >  		trace_xfs_alloc_cur_lastblock(args);
> > -- 
> > 2.20.1
> >

Patch

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 7753b61ba532..d550aa5597bf 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -1266,6 +1267,7 @@ xfs_alloc_ag_vextent_near(
 	int			i;
 	xfs_agblock_t		bno;
 	xfs_extlen_t		len;
+	bool			lastblock;
 
 	/* handle unitialized agbno range so caller doesn't have to */
 	if (!args->min_agbno && !args->max_agbno)
@@ -1291,7 +1292,12 @@ xfs_alloc_ag_vextent_near(
 	 * Otherwise run the optimized lookup search algorithm from the current
 	 * location to the end of the tree.
 	 */
-	if (xfs_btree_islastblock(acur.cnt, 0)) {
+	lastblock = xfs_btree_islastblock(acur.cnt, 0);
+#ifdef DEBUG
+	if (lastblock)
+		lastblock = prandom_u32() & 1;
+#endif
+	if (lastblock) {
 		int	j;
 
 		trace_xfs_alloc_cur_lastblock(args);