
[v3,1/3] btrfs: defrag: don't try to merge regular extents with preallocated extents

Message ID 20220126005850.14729-1-wqu@suse.com (mailing list archive)
State New, archived
Series [v3,1/3] btrfs: defrag: don't try to merge regular extents with preallocated extents

Commit Message

Qu Wenruo Jan. 26, 2022, 12:58 a.m. UTC
[BUG]
With older kernels (before v5.16), btrfs will defrag preallocated extents.
With newer kernels (v5.16 and newer) btrfs will not defrag preallocated
extents, but it will still defrag the extent just before the preallocated
extent, even if it's just a single sector.

This can be exposed by the following small script:

	mkfs.btrfs -f $dev > /dev/null

	mount $dev $mnt
	xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
	xfs_io -c "fiemap -v" $mnt/file
	btrfs fi defrag $mnt/file
	sync
	xfs_io -c "fiemap -v" $mnt/file

The output looks like this on older kernels:

/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26624..26631         8   0x0
   1: [8..39]:         26632..26663        32 0x801
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..39]:         26664..26703        40   0x1

The old kernel defrags the single sector along with the preallocated
extent, and replaces them with a regular extent at a new location (caused
by data COW); in the fiemap output, flag 0x800 marks an unwritten
(preallocated) range and 0x1 marks the last extent.
This wastes most of the data IO just on the preallocated range.

On the other hand, v5.16 is slightly better:

/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26624..26631         8   0x0
   1: [8..39]:         26632..26663        32 0x801
/mnt/btrfs/file:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..7]:          26664..26671         8   0x0
   1: [8..39]:         26632..26663        32 0x801

The preallocated range is not defragged, but the sector before it still
gets defragged, which is unnecessary.

[CAUSE]
One of the functions reused by both the old and the new code is
defrag_check_next_extent(), which determines whether we should defrag the
current extent by checking the next one.

It only checks whether the next extent is a hole or an inline extent, but
it doesn't check whether it's preallocated.

On the other hand, outside of this function, both old and new kernels
reject preallocated extents.

This inconsistency causes the behavior described above.

[FIX]
- Also check whether the next extent is preallocated.
  If so, don't defrag the current extent.

- Add a comment to each branch explaining why we reject the extent.

This will reduce the IO caused by the defrag ioctl and autodefrag.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
Changelog:
v2:
- Use @extent_thresh from the caller to replace the hard-coded threshold
  Now the caller has full control over the extent threshold value.

- Remove the old ambiguous check based on physical address
  The original check is too specific: it only rejects extents which are
  physically adjacent AND too large.
  Since we now have a correct size check, and the physically-adjacent check
  is not always a win, remove the old check completely.

v3:
- Split the @extent_thresh and physically-adjacent checks into separate
  patches

- Simplify the comments
---
 fs/btrfs/ioctl.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

Comments

Filipe Manana Jan. 26, 2022, 11:26 a.m. UTC | #1
On Wed, Jan 26, 2022 at 08:58:48AM +0800, Qu Wenruo wrote:
> [BUG]
> With older kernels (before v5.16), btrfs will defrag preallocated extents.
> While with newer kernels (v5.16 and newer) btrfs will not defrag
> preallocated extents, but it will defrag the extent just before the
> preallocated extent, even it's just a single sector.

In that case, isn't a Fixes: tag missing?

> 
> This can be exposed by the following small script:
> 
> 	mkfs.btrfs -f $dev > /dev/null
> 
> 	mount $dev $mnt
> 	xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
> 	xfs_io -c "fiemap -v" $mnt/file
> 	btrfs fi defrag $mnt/file
> 	sync
> 	xfs_io -c "fiemap -v" $mnt/file
> 
> The output looks like this on older kernels:
> 
> /mnt/btrfs/file:
>  EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>    0: [0..7]:          26624..26631         8   0x0
>    1: [8..39]:         26632..26663        32 0x801
> /mnt/btrfs/file:
>  EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>    0: [0..39]:         26664..26703        40   0x1
> 
> Which defrags the single sector along with the preallocated extent, and
> replace them with an regular extent into a new location (caused by data
> COW).
> This wastes most of the data IO just for the preallocated range.
> 
> On the other hand, v5.16 is slightly better:
> 
> /mnt/btrfs/file:
>  EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>    0: [0..7]:          26624..26631         8   0x0
>    1: [8..39]:         26632..26663        32 0x801
> /mnt/btrfs/file:
>  EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>    0: [0..7]:          26664..26671         8   0x0
>    1: [8..39]:         26632..26663        32 0x801
> 
> The preallocated range is not defragged, but the sector before it still
> gets defragged, which has no need for it.
> 
> [CAUSE]
> One of the function reused by the old and new behavior is
> defrag_check_next_extent(), it will determine if we should defrag
> current extent by checking the next one.
> 
> It only checks if the next extent is a hole or inlined, but it doesn't
> check if it's preallocated.
> 
> On the other hand, out of the function, both old and new kernel will
> reject preallocated extents.
> 
> Such inconsistent behavior causes above behavior.
> 
> [FIX]
> - Also check if next extent is preallocated
>   If so, don't defrag current extent.
> 
> - Add comments for each branch why we reject the extent
> 
> This will reduce the IO caused by defrag ioctl and autodefrag.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> Changelog:
> v2:
> - Use @extent_thresh from caller to replace the harded coded threshold
>   Now caller has full control over the extent threshold value.
> 
> - Remove the old ambiguous check based on physical address
>   The original check is too specific, only reject extents which are
>   physically adjacent, AND too large.
>   Since we have correct size check now, and the physically adjacent check
>   is not always a win.
>   So remove the old check completely.
> 
> v3:
> - Split the @extent_thresh and physicall adjacent check into other
>   patches
> 
> - Simplify the comment
> ---
>  fs/btrfs/ioctl.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index 91ba2efe9792..0d8bfc716e6b 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1053,19 +1053,25 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
>  				     bool locked)
>  {
>  	struct extent_map *next;
> -	bool ret = true;
> +	bool ret = false;
>  
>  	/* this is the last extent */
>  	if (em->start + em->len >= i_size_read(inode))
> -		return false;
> +		return ret;
>  
>  	next = defrag_lookup_extent(inode, em->start + em->len, locked);
> +	/* No more em or hole */
>  	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
> -		ret = false;
> -	else if ((em->block_start + em->block_len == next->block_start) &&
> -		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
> -		ret = false;
> -
> +		goto out;
> +	/* Preallocated */
> +	if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))

The comment is superfluous, the name of the flag is pretty informative that
we are checking for a preallocated extent. You don't need to send a new
version just to remove the comment however.

For the Fixes: tag, you can just comment and David will pick it.

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Thanks.

> +		goto out;
> +	/* Physically adjacent and large enough */
> +	if ((em->block_start + em->block_len == next->block_start) &&
> +	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
> +		goto out;
> +	ret = true;
> +out:
>  	free_extent_map(next);
>  	return ret;
>  }
> -- 
> 2.34.1
>
Qu Wenruo Jan. 26, 2022, 11:33 a.m. UTC | #2
On 2022/1/26 19:26, Filipe Manana wrote:
> On Wed, Jan 26, 2022 at 08:58:48AM +0800, Qu Wenruo wrote:
>> [BUG]
>> With older kernels (before v5.16), btrfs will defrag preallocated extents.
>> While with newer kernels (v5.16 and newer) btrfs will not defrag
>> preallocated extents, but it will defrag the extent just before the
>> preallocated extent, even it's just a single sector.
>
> In that case, isn't a Fixes: tag missing?

The function defrag_check_next_extent() is reused from older kernel
code, dating back to 2012.
(And even that 2012 commit is not really the root cause.)
Thus it has been missing the preallocated check from the very beginning,
unfortunately.

Does it still make sense to include such an old commit?

It would make more sense to CC this to the v5.x stable branches, though.

Any advice on this situation?

Thanks,
Qu
>
>>
>> This can be exposed by the following small script:
>>
>> 	mkfs.btrfs -f $dev > /dev/null
>>
>> 	mount $dev $mnt
>> 	xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
>> 	xfs_io -c "fiemap -v" $mnt/file
>> 	btrfs fi defrag $mnt/file
>> 	sync
>> 	xfs_io -c "fiemap -v" $mnt/file
>>
>> The output looks like this on older kernels:
>>
>> /mnt/btrfs/file:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>>     0: [0..7]:          26624..26631         8   0x0
>>     1: [8..39]:         26632..26663        32 0x801
>> /mnt/btrfs/file:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>>     0: [0..39]:         26664..26703        40   0x1
>>
>> Which defrags the single sector along with the preallocated extent, and
>> replace them with an regular extent into a new location (caused by data
>> COW).
>> This wastes most of the data IO just for the preallocated range.
>>
>> On the other hand, v5.16 is slightly better:
>>
>> /mnt/btrfs/file:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>>     0: [0..7]:          26624..26631         8   0x0
>>     1: [8..39]:         26632..26663        32 0x801
>> /mnt/btrfs/file:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>>     0: [0..7]:          26664..26671         8   0x0
>>     1: [8..39]:         26632..26663        32 0x801
>>
>> The preallocated range is not defragged, but the sector before it still
>> gets defragged, which has no need for it.
>>
>> [CAUSE]
>> One of the function reused by the old and new behavior is
>> defrag_check_next_extent(), it will determine if we should defrag
>> current extent by checking the next one.
>>
>> It only checks if the next extent is a hole or inlined, but it doesn't
>> check if it's preallocated.
>>
>> On the other hand, out of the function, both old and new kernel will
>> reject preallocated extents.
>>
>> Such inconsistent behavior causes above behavior.
>>
>> [FIX]
>> - Also check if next extent is preallocated
>>    If so, don't defrag current extent.
>>
>> - Add comments for each branch why we reject the extent
>>
>> This will reduce the IO caused by defrag ioctl and autodefrag.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>> Changelog:
>> v2:
>> - Use @extent_thresh from caller to replace the harded coded threshold
>>    Now caller has full control over the extent threshold value.
>>
>> - Remove the old ambiguous check based on physical address
>>    The original check is too specific, only reject extents which are
>>    physically adjacent, AND too large.
>>    Since we have correct size check now, and the physically adjacent check
>>    is not always a win.
>>    So remove the old check completely.
>>
>> v3:
>> - Split the @extent_thresh and physicall adjacent check into other
>>    patches
>>
>> - Simplify the comment
>> ---
>>   fs/btrfs/ioctl.c | 20 +++++++++++++-------
>>   1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
>> index 91ba2efe9792..0d8bfc716e6b 100644
>> --- a/fs/btrfs/ioctl.c
>> +++ b/fs/btrfs/ioctl.c
>> @@ -1053,19 +1053,25 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
>>   				     bool locked)
>>   {
>>   	struct extent_map *next;
>> -	bool ret = true;
>> +	bool ret = false;
>>
>>   	/* this is the last extent */
>>   	if (em->start + em->len >= i_size_read(inode))
>> -		return false;
>> +		return ret;
>>
>>   	next = defrag_lookup_extent(inode, em->start + em->len, locked);
>> +	/* No more em or hole */
>>   	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
>> -		ret = false;
>> -	else if ((em->block_start + em->block_len == next->block_start) &&
>> -		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
>> -		ret = false;
>> -
>> +		goto out;
>> +	/* Preallocated */
>> +	if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
>
> The comment is superfluous, the name of the flag is pretty informative that
> we are checking for a preallocated extent. You don't need to send a new
> version just to remove the comment however.
>
> For the Fixes: tag, you can just comment and David will pick it.
>
> Reviewed-by: Filipe Manana <fdmanana@suse.com>
>
> Thanks.
>
>> +		goto out;
>> +	/* Physically adjacent and large enough */
>> +	if ((em->block_start + em->block_len == next->block_start) &&
>> +	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
>> +		goto out;
>> +	ret = true;
>> +out:
>>   	free_extent_map(next);
>>   	return ret;
>>   }
>> --
>> 2.34.1
>>
Filipe Manana Jan. 26, 2022, 11:47 a.m. UTC | #3
On Wed, Jan 26, 2022 at 11:33 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2022/1/26 19:26, Filipe Manana wrote:
> > On Wed, Jan 26, 2022 at 08:58:48AM +0800, Qu Wenruo wrote:
> >> [BUG]
> >> With older kernels (before v5.16), btrfs will defrag preallocated extents.
> >> While with newer kernels (v5.16 and newer) btrfs will not defrag
> >> preallocated extents, but it will defrag the extent just before the
> >> preallocated extent, even it's just a single sector.
> >
> > In that case, isn't a Fixes: tag missing?
>
> The function defrag_check_next_extent() is reused from older kernel
> code, dating back to 2012.
> (And even that 2012 commit is not really the root cause).
> Thus it's missing preallocated check from the very beginning unfortunately.
>
> Does it still make sense to include such an old commit?
>
> It would make more sense to CC this to v5.x stable branch, though.
>
> Any advice on this situation?

Hum, ok, then I think it's fine without the tag.
Thanks.

>
> Thanks,
> Qu
> >
> >>
> >> This can be exposed by the following small script:
> >>
> >>      mkfs.btrfs -f $dev > /dev/null
> >>
> >>      mount $dev $mnt
> >>      xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
> >>      xfs_io -c "fiemap -v" $mnt/file
> >>      btrfs fi defrag $mnt/file
> >>      sync
> >>      xfs_io -c "fiemap -v" $mnt/file
> >>
> >> The output looks like this on older kernels:
> >>
> >> /mnt/btrfs/file:
> >>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
> >>     0: [0..7]:          26624..26631         8   0x0
> >>     1: [8..39]:         26632..26663        32 0x801
> >> /mnt/btrfs/file:
> >>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
> >>     0: [0..39]:         26664..26703        40   0x1
> >>
> >> Which defrags the single sector along with the preallocated extent, and
> >> replace them with an regular extent into a new location (caused by data
> >> COW).
> >> This wastes most of the data IO just for the preallocated range.
> >>
> >> On the other hand, v5.16 is slightly better:
> >>
> >> /mnt/btrfs/file:
> >>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
> >>     0: [0..7]:          26624..26631         8   0x0
> >>     1: [8..39]:         26632..26663        32 0x801
> >> /mnt/btrfs/file:
> >>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
> >>     0: [0..7]:          26664..26671         8   0x0
> >>     1: [8..39]:         26632..26663        32 0x801
> >>
> >> The preallocated range is not defragged, but the sector before it still
> >> gets defragged, which has no need for it.
> >>
> >> [CAUSE]
> >> One of the function reused by the old and new behavior is
> >> defrag_check_next_extent(), it will determine if we should defrag
> >> current extent by checking the next one.
> >>
> >> It only checks if the next extent is a hole or inlined, but it doesn't
> >> check if it's preallocated.
> >>
> >> On the other hand, out of the function, both old and new kernel will
> >> reject preallocated extents.
> >>
> >> Such inconsistent behavior causes above behavior.
> >>
> >> [FIX]
> >> - Also check if next extent is preallocated
> >>    If so, don't defrag current extent.
> >>
> >> - Add comments for each branch why we reject the extent
> >>
> >> This will reduce the IO caused by defrag ioctl and autodefrag.
> >>
> >> Signed-off-by: Qu Wenruo <wqu@suse.com>
> >> ---
> >> Changelog:
> >> v2:
> >> - Use @extent_thresh from caller to replace the harded coded threshold
> >>    Now caller has full control over the extent threshold value.
> >>
> >> - Remove the old ambiguous check based on physical address
> >>    The original check is too specific, only reject extents which are
> >>    physically adjacent, AND too large.
> >>    Since we have correct size check now, and the physically adjacent check
> >>    is not always a win.
> >>    So remove the old check completely.
> >>
> >> v3:
> >> - Split the @extent_thresh and physicall adjacent check into other
> >>    patches
> >>
> >> - Simplify the comment
> >> ---
> >>   fs/btrfs/ioctl.c | 20 +++++++++++++-------
> >>   1 file changed, 13 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> >> index 91ba2efe9792..0d8bfc716e6b 100644
> >> --- a/fs/btrfs/ioctl.c
> >> +++ b/fs/btrfs/ioctl.c
> >> @@ -1053,19 +1053,25 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
> >>                                   bool locked)
> >>   {
> >>      struct extent_map *next;
> >> -    bool ret = true;
> >> +    bool ret = false;
> >>
> >>      /* this is the last extent */
> >>      if (em->start + em->len >= i_size_read(inode))
> >> -            return false;
> >> +            return ret;
> >>
> >>      next = defrag_lookup_extent(inode, em->start + em->len, locked);
> >> +    /* No more em or hole */
> >>      if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
> >> -            ret = false;
> >> -    else if ((em->block_start + em->block_len == next->block_start) &&
> >> -             (em->block_len > SZ_128K && next->block_len > SZ_128K))
> >> -            ret = false;
> >> -
> >> +            goto out;
> >> +    /* Preallocated */
> >> +    if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
> >
> > The comment is superfluous, the name of the flag is pretty informative that
> > we are checking for a preallocated extent. You don't need to send a new
> > version just to remove the comment however.
> >
> > For the Fixes: tag, you can just comment and David will pick it.
> >
> > Reviewed-by: Filipe Manana <fdmanana@suse.com>
> >
> > Thanks.
> >
> >> +            goto out;
> >> +    /* Physically adjacent and large enough */
> >> +    if ((em->block_start + em->block_len == next->block_start) &&
> >> +        (em->block_len > SZ_128K && next->block_len > SZ_128K))
> >> +            goto out;
> >> +    ret = true;
> >> +out:
> >>      free_extent_map(next);
> >>      return ret;
> >>   }
> >> --
> >> 2.34.1
> >>
Qu Wenruo Jan. 28, 2022, 6:31 a.m. UTC | #4
On 2022/1/26 08:58, Qu Wenruo wrote:
> [BUG]
> With older kernels (before v5.16), btrfs will defrag preallocated extents.
> While with newer kernels (v5.16 and newer) btrfs will not defrag
> preallocated extents, but it will defrag the extent just before the
> preallocated extent, even it's just a single sector.
> 
> This can be exposed by the following small script:
> 
> 	mkfs.btrfs -f $dev > /dev/null
> 
> 	mount $dev $mnt
> 	xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
> 	xfs_io -c "fiemap -v" $mnt/file
> 	btrfs fi defrag $mnt/file
> 	sync
> 	xfs_io -c "fiemap -v" $mnt/file
> 
> The output looks like this on older kernels:
> 
> /mnt/btrfs/file:
>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>     0: [0..7]:          26624..26631         8   0x0
>     1: [8..39]:         26632..26663        32 0x801
> /mnt/btrfs/file:
>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>     0: [0..39]:         26664..26703        40   0x1
> 
> Which defrags the single sector along with the preallocated extent, and
> replace them with an regular extent into a new location (caused by data
> COW).
> This wastes most of the data IO just for the preallocated range.
> 
> On the other hand, v5.16 is slightly better:
> 
> /mnt/btrfs/file:
>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>     0: [0..7]:          26624..26631         8   0x0
>     1: [8..39]:         26632..26663        32 0x801
> /mnt/btrfs/file:
>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>     0: [0..7]:          26664..26671         8   0x0
>     1: [8..39]:         26632..26663        32 0x801
> 
> The preallocated range is not defragged, but the sector before it still
> gets defragged, which has no need for it.
> 
> [CAUSE]
> One of the function reused by the old and new behavior is
> defrag_check_next_extent(), it will determine if we should defrag
> current extent by checking the next one.
> 
> It only checks if the next extent is a hole or inlined, but it doesn't
> check if it's preallocated.
> 
> On the other hand, out of the function, both old and new kernel will
> reject preallocated extents.
> 
> Such inconsistent behavior causes above behavior.
> 
> [FIX]
> - Also check if next extent is preallocated
>    If so, don't defrag current extent.
> 
> - Add comments for each branch why we reject the extent
> 
> This will reduce the IO caused by defrag ioctl and autodefrag.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> Changelog:
> v2:
> - Use @extent_thresh from caller to replace the harded coded threshold
>    Now caller has full control over the extent threshold value.
> 
> - Remove the old ambiguous check based on physical address
>    The original check is too specific, only reject extents which are
>    physically adjacent, AND too large.
>    Since we have correct size check now, and the physically adjacent check
>    is not always a win.
>    So remove the old check completely.
> 
> v3:
> - Split the @extent_thresh and physicall adjacent check into other
>    patches
> 
> - Simplify the comment
> ---
>   fs/btrfs/ioctl.c | 20 +++++++++++++-------
>   1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index 91ba2efe9792..0d8bfc716e6b 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1053,19 +1053,25 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
>   				     bool locked)
>   {
>   	struct extent_map *next;
> -	bool ret = true;
> +	bool ret = false;
>   
>   	/* this is the last extent */
>   	if (em->start + em->len >= i_size_read(inode))
> -		return false;
> +		return ret;
>   
>   	next = defrag_lookup_extent(inode, em->start + em->len, locked);
> +	/* No more em or hole */
>   	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
> -		ret = false;
> -	else if ((em->block_start + em->block_len == next->block_start) &&
> -		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
> -		ret = false;
> -
> +		goto out;
> +	/* Preallocated */
> +	if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))

Nope, during extra tests for unrelated work I exposed a bug here: we
should check @next, not @em.

I need to update this again...

Thanks,
Qu
> +		goto out;
> +	/* Physically adjacent and large enough */
> +	if ((em->block_start + em->block_len == next->block_start) &&
> +	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
> +		goto out;
> +	ret = true;
> +out:
>   	free_extent_map(next);
>   	return ret;
>   }

Patch

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 91ba2efe9792..0d8bfc716e6b 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1053,19 +1053,25 @@  static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
 				     bool locked)
 {
 	struct extent_map *next;
-	bool ret = true;
+	bool ret = false;
 
 	/* this is the last extent */
 	if (em->start + em->len >= i_size_read(inode))
-		return false;
+		return ret;
 
 	next = defrag_lookup_extent(inode, em->start + em->len, locked);
+	/* No more em or hole */
 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
-		ret = false;
-	else if ((em->block_start + em->block_len == next->block_start) &&
-		 (em->block_len > SZ_128K && next->block_len > SZ_128K))
-		ret = false;
-
+		goto out;
+	/* Preallocated */
+	if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
+		goto out;
+	/* Physically adjacent and large enough */
+	if ((em->block_start + em->block_len == next->block_start) &&
+	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
+		goto out;
+	ret = true;
+out:
 	free_extent_map(next);
 	return ret;
 }
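
For reference, below is a minimal sketch of the check with the correction
Qu points out in comment #4 applied: the EXTENT_FLAG_PREALLOC test is done
on @next rather than @em. Function, field and helper names
(defrag_lookup_extent(), free_extent_map(), SZ_128K) are taken from the
diff above; this is an illustrative sketch of the intended logic, not the
final committed version.

static bool defrag_check_next_extent(struct inode *inode,
				     struct extent_map *em, bool locked)
{
	struct extent_map *next;
	bool ret = false;

	/* This is the last extent */
	if (em->start + em->len >= i_size_read(inode))
		return false;

	next = defrag_lookup_extent(inode, em->start + em->len, locked);
	/* No more extent maps, or the next extent is a hole/inline extent */
	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
		goto out;
	/*
	 * Preallocated extents are never defragged, so do not merge the
	 * current extent with a preallocated neighbour (note: @next is
	 * checked here, per the follow-up in comment #4).
	 */
	if (test_bit(EXTENT_FLAG_PREALLOC, &next->flags))
		goto out;
	/* Physically adjacent and both already large enough */
	if ((em->block_start + em->block_len == next->block_start) &&
	    (em->block_len > SZ_128K && next->block_len > SZ_128K))
		goto out;
	ret = true;
out:
	free_extent_map(next);
	return ret;
}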