
[2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases

Message ID d66c922e591b3a57a230ca357b9085fe6ae53812.1744344865.git.wqu@suse.com (mailing list archive)
State New
Headers show
Series btrfs: fix corner cases for subpage generic/363 failures | expand

Commit Message

Qu Wenruo April 11, 2025, 5:14 a.m. UTC
[BUG]
For the following fsx -e 1 run, btrfs still fails the run on 64K
page size with 4K fs block size:

READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
OFFSET      GOOD    BAD     RANGE
0x26b3a     0x0000  0x15b4  0x0
operation# (mod 256) for the bad data may be 21
[...]
LOG DUMP (28 total operations):
1(  1 mod 256): SKIPPED (no operation)
2(  2 mod 256): SKIPPED (no operation)
3(  3 mod 256): SKIPPED (no operation)
4(  4 mod 256): SKIPPED (no operation)
5(  5 mod 256): WRITE    0x1ea90 thru 0x285e0	(0x9b51 bytes) HOLE
6(  6 mod 256): ZERO     0x1b1a8 thru 0x20bd4	(0x5a2d bytes)
7(  7 mod 256): FALLOC   0x22b1a thru 0x272fa	(0x47e0 bytes) INTERIOR
8(  8 mod 256): WRITE    0x741d thru 0x13522	(0xc106 bytes)
9(  9 mod 256): MAPWRITE 0x73ee thru 0xdeeb	(0x6afe bytes)
10( 10 mod 256): FALLOC   0xb719 thru 0xb994	(0x27b bytes) INTERIOR
11( 11 mod 256): COPY 0x15ed8 thru 0x18be1	(0x2d0a bytes) to 0x25f6e thru 0x28c77
12( 12 mod 256): ZERO     0x1615e thru 0x1770e	(0x15b1 bytes)
13( 13 mod 256): SKIPPED (no operation)
14( 14 mod 256): DEDUPE 0x20000 thru 0x27fff	(0x8000 bytes) to 0x1000 thru 0x8fff
15( 15 mod 256): SKIPPED (no operation)
16( 16 mod 256): CLONE 0xa000 thru 0xffff	(0x6000 bytes) to 0x36000 thru 0x3bfff
17( 17 mod 256): ZERO     0x14adc thru 0x1b78a	(0x6caf bytes)
18( 18 mod 256): TRUNCATE DOWN	from 0x3c000 to 0x1e2e3	******WWWW
19( 19 mod 256): CLONE 0x4000 thru 0x11fff	(0xe000 bytes) to 0x16000 thru 0x23fff
20( 20 mod 256): FALLOC   0x311e1 thru 0x3681b	(0x563a bytes) PAST_EOF
21( 21 mod 256): FALLOC   0x351c5 thru 0x40000	(0xae3b bytes) EXTENDING
22( 22 mod 256): WRITE    0x920 thru 0x7e51	(0x7532 bytes)
23( 23 mod 256): COPY 0x2b58 thru 0xc508	(0x99b1 bytes) to 0x117b1 thru 0x1b161
24( 24 mod 256): TRUNCATE DOWN	from 0x40000 to 0x3c9a5
25( 25 mod 256): SKIPPED (no operation)
26( 26 mod 256): MAPWRITE 0x25020 thru 0x26b06	(0x1ae7 bytes)
27( 27 mod 256): SKIPPED (no operation)
28( 28 mod 256): READ     0x26b3a thru 0x36633	(0xfafa bytes)	***RRRR***

[CAUSE]
The involved operations are:

 fallocating to largest ever: 0x40000
 21 pollute_eof	0x24000 thru	0x2ffff	(0xc000 bytes)
 21 falloc	from 0x351c5 to 0x40000 (0xae3b bytes)
 28 read	0x26b3a thru	0x36633	(0xfafa bytes)

At operation #21 a pollute_eof is done by a memory-mapped write into the
range [0x24000, 0x2ffff].
At this stage the inode size is 0x24000, which is block aligned.

Then fallocate happens, and since it's expanding the inode, it will call
btrfs_truncate_block() to truncate any unaligned range.

But since the inode size is already block aligned,
btrfs_truncate_block() does nothing and exits.

However, remember that the folio at 0x20000 already has part of its
range polluted. Although that data will never be written back to disk,
it still affects the page cache, causing the later operation #28 to
read back the polluted values.
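The arithmetic behind this corner case can be checked standalone with the
numbers from the log (a minimal userspace sketch; the constants and helper
names are illustrative, not btrfs code): the inode size at operation #21 is
block aligned, yet the polluted bytes up to 0x2ffff sit in the same 64K folio
that contains EOF.

```c
#include <assert.h>
#include <stdint.h>

/* The failing setup above: 64K pages/folios, 4K fs blocks. */
#define SUBPAGE_FOLIO_SIZE 0x10000ULL
#define SUBPAGE_BLOCKSIZE  0x1000ULL

/* Round @pos down to the start of the folio containing it. */
static uint64_t folio_start(uint64_t pos)
{
	return pos & ~(SUBPAGE_FOLIO_SIZE - 1);
}

/* True if @pos sits exactly on an fs block boundary. */
static int block_aligned(uint64_t pos)
{
	return (pos & (SUBPAGE_BLOCKSIZE - 1)) == 0;
}
```

So the old "range already aligned, nothing to do" check sees an aligned
i_size and bails out, while the EOF folio still carries the mmap garbage.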

[FIX]
Instead of exiting early from btrfs_truncate_block() when the range is
already block aligned, do extra folio zeroing if the fs block size is
smaller than the page size.

This is to address exactly the above case where memory mapped write can
still leave some garbage beyond EOF.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/inode.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 1 deletion(-)
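The clamp-and-round range math performed by the new zero_range_folio() helper
(see the patch at the bottom) can be modeled standalone. This is a simplified
userspace sketch with a hardcoded 4K block size, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

struct zero_range {
	uint64_t clamp_start;
	uint64_t clamp_end;
	uint64_t block_start;
	uint64_t block_end;
};

static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }
static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }

/* Clamp [orig_start, orig_end] to the folio, then expand the clamped
 * range outward to fs block boundaries (both ends inclusive). */
static struct zero_range compute_zero_range(uint64_t folio_pos,
					    uint64_t folio_size,
					    uint64_t orig_start,
					    uint64_t orig_end)
{
	const uint64_t blocksize = 0x1000;
	struct zero_range r;

	r.clamp_start = max_u64(folio_pos, orig_start);
	r.clamp_end = min_u64(folio_pos + folio_size - 1, orig_end);
	/* round_down(clamp_start, blocksize) */
	r.block_start = r.clamp_start & ~(blocksize - 1);
	/* round_up(clamp_end + 1, blocksize) - 1 */
	r.block_end = ((r.clamp_end + blocksize) & ~(blocksize - 1)) - 1;
	return r;
}
```

For the failing case above (folio at 0x20000, zeroing [0x24000, 0x2ffff]),
the clamped and block-expanded ranges coincide, covering exactly the blocks
polluted by the mmap write.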

Comments

kernel test robot April 12, 2025, 5:12 a.m. UTC | #1
Hi Qu,

kernel test robot noticed the following build warnings:

[auto build test WARNING on kdave/for-next]
[also build test WARNING on linus/master v6.15-rc1 next-20250411]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Qu-Wenruo/btrfs-make-btrfs_truncate_block-to-zero-involved-blocks-in-a-folio/20250411-131525
base:   https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link:    https://lore.kernel.org/r/d66c922e591b3a57a230ca357b9085fe6ae53812.1744344865.git.wqu%40suse.com
patch subject: [PATCH 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases
config: arm-randconfig-002-20250412 (https://download.01.org/0day-ci/archive/20250412/202504121224.JBBIEqsn-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 7.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250412/202504121224.JBBIEqsn-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504121224.JBBIEqsn-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from include/linux/kernel.h:27:0,
                    from include/linux/cpumask.h:11,
                    from include/linux/smp.h:13,
                    from include/linux/lockdep.h:14,
                    from include/linux/spinlock.h:63,
                    from include/linux/swait.h:7,
                    from include/linux/completion.h:12,
                    from include/linux/crypto.h:15,
                    from include/crypto/hash.h:12,
                    from fs/btrfs/inode.c:6:
   fs/btrfs/inode.c: In function 'zero_range_folio':
>> include/linux/math.h:15:29: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    #define __round_mask(x, y) ((__typeof__(x))((y)-1))
                                ^
   include/linux/math.h:35:34: note: in expansion of macro '__round_mask'
    #define round_down(x, y) ((x) & ~__round_mask(x, y))
                                     ^~~~~~~~~~~~
   fs/btrfs/inode.c:4843:16: note: in expansion of macro 'round_down'
     block_start = round_down(clamp_start, block_size);
                   ^~~~~~~~~~
>> include/linux/math.h:15:29: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    #define __round_mask(x, y) ((__typeof__(x))((y)-1))
                                ^
   include/linux/math.h:25:36: note: in expansion of macro '__round_mask'
    #define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1)
                                       ^~~~~~~~~~~~
   fs/btrfs/inode.c:4844:14: note: in expansion of macro 'round_up'
     block_end = round_up(clamp_end + 1, block_size) - 1;
                 ^~~~~~~~


vim +15 include/linux/math.h

aa6159ab99a9ab Andy Shevchenko 2020-12-15   8  
aa6159ab99a9ab Andy Shevchenko 2020-12-15   9  /*
aa6159ab99a9ab Andy Shevchenko 2020-12-15  10   * This looks more complex than it should be. But we need to
aa6159ab99a9ab Andy Shevchenko 2020-12-15  11   * get the type for the ~ right in round_down (it needs to be
aa6159ab99a9ab Andy Shevchenko 2020-12-15  12   * as wide as the result!), and we want to evaluate the macro
aa6159ab99a9ab Andy Shevchenko 2020-12-15  13   * arguments just once each.
aa6159ab99a9ab Andy Shevchenko 2020-12-15  14   */
aa6159ab99a9ab Andy Shevchenko 2020-12-15 @15  #define __round_mask(x, y) ((__typeof__(x))((y)-1))
aa6159ab99a9ab Andy Shevchenko 2020-12-15  16
kernel test robot April 12, 2025, 5:54 a.m. UTC | #2
Hi Qu,

kernel test robot noticed the following build warnings:

[auto build test WARNING on kdave/for-next]
[also build test WARNING on linus/master v6.15-rc1 next-20250411]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Qu-Wenruo/btrfs-make-btrfs_truncate_block-to-zero-involved-blocks-in-a-folio/20250411-131525
base:   https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link:    https://lore.kernel.org/r/d66c922e591b3a57a230ca357b9085fe6ae53812.1744344865.git.wqu%40suse.com
patch subject: [PATCH 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases
config: x86_64-buildonly-randconfig-001-20250412 (https://download.01.org/0day-ci/archive/20250412/202504121352.3gzerLun-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250412/202504121352.3gzerLun-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504121352.3gzerLun-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> vmlinux.o: warning: objtool: zero_range_folio+0x21f: relocation to !ENDBR: bd_may_claim+0x112
Andy Shevchenko April 12, 2025, 6:35 p.m. UTC | #3
Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:
> [BUG]
> For the following fsx -e 1 run, the btrfs still fails the run on 64K
> page size with 4K fs block size:
> 
> READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
> OFFSET      GOOD    BAD     RANGE
> 0x26b3a     0x0000  0x15b4  0x0
> operation# (mod 256) for the bad data may be 21

[...]

> +static int zero_range_folio(struct btrfs_inode *inode, loff_t from, loff_t end,
> +			    u64 orig_start, u64 orig_end,
> +			    enum btrfs_truncate_where where)
> +{
> +	const u32 blocksize = inode->root->fs_info->sectorsize;
> +	struct address_space *mapping = inode->vfs_inode.i_mapping;
> +	struct extent_io_tree *io_tree = &inode->io_tree;
> +	struct extent_state *cached_state = NULL;
> +	struct btrfs_ordered_extent *ordered;
> +	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
> +			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);

You want to use PFN_*() macros from the pfn.h perhaps?

> +	struct folio *folio;
> +	u64 block_start;
> +	u64 block_end;
> +	u64 clamp_start;
> +	u64 clamp_end;
> +	int ret = 0;
> +
> +	/*
> +	 * The target head/tail block is already block aligned.
> +	 * If block size >= PAGE_SIZE, meaning it's impossible to mmap a
> +	 * page containing anything other than the target block.
> +	 */
> +	if (blocksize >= PAGE_SIZE)
> +		return 0;
> +again:
> +	folio = filemap_lock_folio(mapping, index);
> +	/* No folio present. */
> +	if (IS_ERR(folio))
> +		return 0;
> +
> +	if (!folio_test_uptodate(folio)) {
> +		ret = btrfs_read_folio(NULL, folio);
> +		folio_lock(folio);
> +		if (folio->mapping != mapping) {
> +			folio_unlock(folio);
> +			folio_put(folio);
> +			goto again;
> +		}
> +		if (!folio_test_uptodate(folio)) {
> +			ret = -EIO;
> +			goto out_unlock;
> +		}
> +	}
> +	folio_wait_writeback(folio);
> +
> +	clamp_start = max_t(u64, folio_pos(folio), orig_start);
> +	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
> +			  orig_end);

You probably wanted clamp() ?

> +	block_start = round_down(clamp_start, block_size);
> +	block_end = round_up(clamp_end + 1, block_size) - 1;

LKP rightfully complains, I believe you want to use ALIGN*() macros instead.

> +	lock_extent(io_tree, block_start, block_end, &cached_state);
> +	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_end);
> +	if (ordered) {
> +		unlock_extent(io_tree, block_start, block_end, &cached_state);
> +		folio_unlock(folio);
> +		folio_put(folio);
> +		btrfs_start_ordered_extent(ordered);
> +		btrfs_put_ordered_extent(ordered);
> +		goto again;
> +	}
> +	folio_zero_range(folio, clamp_start - folio_pos(folio),
> +			 clamp_end - clamp_start + 1);
> +	unlock_extent(io_tree, block_start, block_end, &cached_state);
> +
> +out_unlock:
> +	folio_unlock(folio);
> +	folio_put(folio);
> +	return ret;
> +}
Qu Wenruo April 14, 2025, 1:20 a.m. UTC | #4
On 2025/4/13 04:05, Andy Shevchenko wrote:
> Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:
>> [BUG]
>> For the following fsx -e 1 run, the btrfs still fails the run on 64K
>> page size with 4K fs block size:
>>
>> READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
>> OFFSET      GOOD    BAD     RANGE
>> 0x26b3a     0x0000  0x15b4  0x0
>> operation# (mod 256) for the bad data may be 21
>
> [...]
>
>> +static int zero_range_folio(struct btrfs_inode *inode, loff_t from, loff_t end,
>> +			    u64 orig_start, u64 orig_end,
>> +			    enum btrfs_truncate_where where)
>> +{
>> +	const u32 blocksize = inode->root->fs_info->sectorsize;
>> +	struct address_space *mapping = inode->vfs_inode.i_mapping;
>> +	struct extent_io_tree *io_tree = &inode->io_tree;
>> +	struct extent_state *cached_state = NULL;
>> +	struct btrfs_ordered_extent *ordered;
>> +	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
>> +			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
>
> You want to use PFN_*() macros from the pfn.h perhaps?
>
>> +	struct folio *folio;
>> +	u64 block_start;
>> +	u64 block_end;
>> +	u64 clamp_start;
>> +	u64 clamp_end;
>> +	int ret = 0;
>> +
>> +	/*
>> +	 * The target head/tail block is already block aligned.
>> +	 * If block size >= PAGE_SIZE, meaning it's impossible to mmap a
>> +	 * page containing anything other than the target block.
>> +	 */
>> +	if (blocksize >= PAGE_SIZE)
>> +		return 0;
>> +again:
>> +	folio = filemap_lock_folio(mapping, index);
>> +	/* No folio present. */
>> +	if (IS_ERR(folio))
>> +		return 0;
>> +
>> +	if (!folio_test_uptodate(folio)) {
>> +		ret = btrfs_read_folio(NULL, folio);
>> +		folio_lock(folio);
>> +		if (folio->mapping != mapping) {
>> +			folio_unlock(folio);
>> +			folio_put(folio);
>> +			goto again;
>> +		}
>> +		if (!folio_test_uptodate(folio)) {
>> +			ret = -EIO;
>> +			goto out_unlock;
>> +		}
>> +	}
>> +	folio_wait_writeback(folio);
>> +
>> +	clamp_start = max_t(u64, folio_pos(folio), orig_start);
>> +	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
>> +			  orig_end);
>
> You probably wanted clamp() ?

Thanks a lot for the help!

It's way more readable than the open-coded one.

>
>> +	block_start = round_down(clamp_start, block_size);
>> +	block_end = round_up(clamp_end + 1, block_size) - 1;
>
> LKP rightfully complains, I believe you want to use ALIGN*() macros instead.

Personally speaking I really want to explicitly show whether it's
rounding up or down.

And unfortunately the ALIGN() itself doesn't show that (meanwhile the
ALIGN_DOWN() is pretty fine).

Can I just do a forced conversion on the @blocksize to fix the warning?

Thanks,
Qu

>
>> +	lock_extent(io_tree, block_start, block_end, &cached_state);
>> +	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_end);
>> +	if (ordered) {
>> +		unlock_extent(io_tree, block_start, block_end, &cached_state);
>> +		folio_unlock(folio);
>> +		folio_put(folio);
>> +		btrfs_start_ordered_extent(ordered);
>> +		btrfs_put_ordered_extent(ordered);
>> +		goto again;
>> +	}
>> +	folio_zero_range(folio, clamp_start - folio_pos(folio),
>> +			 clamp_end - clamp_start + 1);
>> +	unlock_extent(io_tree, block_start, block_end, &cached_state);
>> +
>> +out_unlock:
>> +	folio_unlock(folio);
>> +	folio_put(folio);
>> +	return ret;
>> +}
>
Qu Wenruo April 14, 2025, 3:37 a.m. UTC | #5
On 2025/4/14 10:50, Qu Wenruo wrote:
>
>
> On 2025/4/13 04:05, Andy Shevchenko wrote:
>> Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:
>>> [BUG]
>>> For the following fsx -e 1 run, the btrfs still fails the run on 64K
>>> page size with 4K fs block size:
>>>
>>> READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
>>> OFFSET      GOOD    BAD     RANGE
>>> 0x26b3a     0x0000  0x15b4  0x0
>>> operation# (mod 256) for the bad data may be 21
>>
>> [...]
>>
>>> +static int zero_range_folio(struct btrfs_inode *inode, loff_t from,
>>> loff_t end,
>>> +                u64 orig_start, u64 orig_end,
>>> +                enum btrfs_truncate_where where)
>>> +{
>>> +    const u32 blocksize = inode->root->fs_info->sectorsize;
>>> +    struct address_space *mapping = inode->vfs_inode.i_mapping;
>>> +    struct extent_io_tree *io_tree = &inode->io_tree;
>>> +    struct extent_state *cached_state = NULL;
>>> +    struct btrfs_ordered_extent *ordered;
>>> +    pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
>>> +            (from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
>>
>> You want to use PFN_*() macros from the pfn.h perhaps?
>>
>>> +    struct folio *folio;
>>> +    u64 block_start;
>>> +    u64 block_end;
>>> +    u64 clamp_start;
>>> +    u64 clamp_end;
>>> +    int ret = 0;
>>> +
>>> +    /*
>>> +     * The target head/tail block is already block aligned.
>>> +     * If block size >= PAGE_SIZE, meaning it's impossible to mmap a
>>> +     * page containing anything other than the target block.
>>> +     */
>>> +    if (blocksize >= PAGE_SIZE)
>>> +        return 0;
>>> +again:
>>> +    folio = filemap_lock_folio(mapping, index);
>>> +    /* No folio present. */
>>> +    if (IS_ERR(folio))
>>> +        return 0;
>>> +
>>> +    if (!folio_test_uptodate(folio)) {
>>> +        ret = btrfs_read_folio(NULL, folio);
>>> +        folio_lock(folio);
>>> +        if (folio->mapping != mapping) {
>>> +            folio_unlock(folio);
>>> +            folio_put(folio);
>>> +            goto again;
>>> +        }
>>> +        if (!folio_test_uptodate(folio)) {
>>> +            ret = -EIO;
>>> +            goto out_unlock;
>>> +        }
>>> +    }
>>> +    folio_wait_writeback(folio);
>>> +
>>> +    clamp_start = max_t(u64, folio_pos(folio), orig_start);
>>> +    clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
>>> +              orig_end);
>>
>> You probably wanted clamp() ?
>
> Thanks a lot for the help!
>
> It's way more readable than the open-coded one.
>
>>
>>> +    block_start = round_down(clamp_start, block_size);
>>> +    block_end = round_up(clamp_end + 1, block_size) - 1;
>>
>> LKP rightfully complains, I believe you want to use ALIGN*() macros
>> instead.
>
> Personally speaking I really want to explicitly show whether it's
> rounding up or down.
>
> And unfortunately the ALIGN() itself doesn't show that (meanwhile the
> ALIGN_DOWN() is pretty fine).
>
> Can I just do a forced conversion on the @blocksize to fix the warning?

Never mind, it's a typo: it should be "blocksize", not "block_size"; the
latter is a different function defined in blkdev.h.

Thanks,
Qu

>
> Thanks,
> Qu
>
>>
>>> +    lock_extent(io_tree, block_start, block_end, &cached_state);
>>> +    ordered = btrfs_lookup_ordered_range(inode, block_start,
>>> block_end + 1 - block_end);
>>> +    if (ordered) {
>>> +        unlock_extent(io_tree, block_start, block_end, &cached_state);
>>> +        folio_unlock(folio);
>>> +        folio_put(folio);
>>> +        btrfs_start_ordered_extent(ordered);
>>> +        btrfs_put_ordered_extent(ordered);
>>> +        goto again;
>>> +    }
>>> +    folio_zero_range(folio, clamp_start - folio_pos(folio),
>>> +             clamp_end - clamp_start + 1);
>>> +    unlock_extent(io_tree, block_start, block_end, &cached_state);
>>> +
>>> +out_unlock:
>>> +    folio_unlock(folio);
>>> +    folio_put(folio);
>>> +    return ret;
>>> +}
>>
>
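The typo also explains the two LKP warnings above: without the fix,
`block_size` names the block_size() helper from blkdev.h, so `__round_mask()`
ends up doing arithmetic on a function pointer and casting it to u64, which
differs in width from a 32-bit arm pointer. With the intended integer
`blocksize`, the kernel's round helpers reduce to plain mask arithmetic; a
simplified userspace sketch, fixed to u64 instead of the kernel's
`__typeof__` machinery:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace equivalents of the kernel's round_down()/round_up() for
 * power-of-two alignments. With integer arguments these are pure mask
 * operations and no pointer-to-integer cast can arise. */
static uint64_t round_down_u64(uint64_t x, uint64_t y)
{
	return x & ~(y - 1);
}

static uint64_t round_up_u64(uint64_t x, uint64_t y)
{
	return ((x - 1) | (y - 1)) + 1;
}
```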
Andy Shevchenko April 14, 2025, 10:40 a.m. UTC | #6
On Mon, Apr 14, 2025 at 4:20 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> On 2025/4/13 04:05, Andy Shevchenko wrote:
> > Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:

[...]

> >> +    block_start = round_down(clamp_start, block_size);
> >> +    block_end = round_up(clamp_end + 1, block_size) - 1;
> >
> > LKP rightfully complains, I believe you want to use ALIGN*() macros instead.
>
> Personally speaking I really want to explicitly show whether it's
> rounding up or down.
>
> And unfortunately the ALIGN() itself doesn't show that (meanwhile the
> ALIGN_DOWN() is pretty fine).
>
> Can I just do a forced conversion on the @blocksize to fix the warning?

ALIGN*() are for pointers, the round_*() are for integers. So, please
use ALIGN*().
David Sterba April 15, 2025, 6:18 p.m. UTC | #7
On Mon, Apr 14, 2025 at 01:40:11PM +0300, Andy Shevchenko wrote:
> On Mon, Apr 14, 2025 at 4:20 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> > On 2025/4/13 04:05, Andy Shevchenko wrote:
> > > Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:
> 
> [...]
> 
> > >> +    block_start = round_down(clamp_start, block_size);
> > >> +    block_end = round_up(clamp_end + 1, block_size) - 1;
> > >
> > > LKP rightfully complains, I believe you want to use ALIGN*() macros instead.
> >
> > Personally speaking I really want to explicitly show whether it's
> > rounding up or down.
> >
> > And unfortunately the ALIGN() itself doesn't show that (meanwhile the
> > ALIGN_DOWN() is pretty fine).
> >
> > Can I just do a forced conversion on the @blocksize to fix the warning?
> 
> ALIGN*() are for pointers, the round_*() are for integers. So, please
> use ALIGN*().

clamp_start and blocksize are integers and there's a lot of use of ALIGN
with integers too. There's no documentation saying it should be used for
pointers; I can see PTR_ALIGN, which does the explicit cast to unsigned
long and then passes it to ALIGN (as an integer).

Historically in the btrfs code the use of ALIGN and round_* is basically
50/50 so we don't have a consistent style, although we'd like to. As the
round_up and round_down are clear I'd rather keep using them in new
code.
Andy Shevchenko April 15, 2025, 6:21 p.m. UTC | #8
On Tue, Apr 15, 2025 at 9:18 PM David Sterba <dsterba@suse.cz> wrote:
> On Mon, Apr 14, 2025 at 01:40:11PM +0300, Andy Shevchenko wrote:
> > On Mon, Apr 14, 2025 at 4:20 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> > > On 2025/4/13 04:05, Andy Shevchenko wrote:
> > > > Fri, Apr 11, 2025 at 02:44:01PM +0930, Qu Wenruo wrote:

[...]

> > > >> +    block_start = round_down(clamp_start, block_size);
> > > >> +    block_end = round_up(clamp_end + 1, block_size) - 1;
> > > >
> > > > LKP rightfully complains, I believe you want to use ALIGN*() macros instead.
> > >
> > > Personally speaking I really want to explicitly show whether it's
> > > rounding up or down.
> > >
> > > And unfortunately the ALIGN() itself doesn't show that (meanwhile the
> > > ALIGN_DOWN() is pretty fine).
> > >
> > > Can I just do a forced conversion on the @blocksize to fix the warning?
> >
> > ALIGN*() are for pointers, the round_*() are for integers. So, please
> > use ALIGN*().
>
> clamp_start and blocksize are integers and there's a lot of use of ALIGN
> with integers too. There's no documentation saying it should be used for
> pointers, I can see PTR_ALIGN that does the explicit cast to unsigned
> logn and then passes it to ALIGN (as integer).

Yes, because unsigned long is the natural holder for addresses, and
since some APIs use it instead of pointers (for whatever reason),
PTR_ALIGN() does that. But do you see the difference? round_*() expects
_the same_ types for its arguments, while ALIGN*() does not. That is
what makes the distinction.

> Historically in the btrfs code the use of ALIGN and round_* is basically
> 50/50 so we don't have a consistent style, although we'd like to. As the
> round_up and round_down are clear I'd rather keep using them in new
> code.

And how do you suggest avoiding the warning, please?
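For integer operands the two families are numerically identical, which is
one way to see David's point that either works fine on integers once the
function-pointer typo is out of the picture. A userspace sketch comparing
the ALIGN()-style add-and-mask form with the round_up()/round_down() form
(simplified, power-of-two alignments only, without the kernel's type
machinery):

```c
#include <assert.h>
#include <stdint.h>

/* ALIGN()/ALIGN_DOWN()-style: add the mask, then clear it. */
static uint64_t align_u64(uint64_t x, uint64_t a)
{
	return (x + a - 1) & ~(a - 1);
}

static uint64_t align_down_u64(uint64_t x, uint64_t a)
{
	return x & ~(a - 1);
}

/* round_up()/round_down()-style: OR in the mask, then step past it. */
static uint64_t round_up_u64(uint64_t x, uint64_t y)
{
	return ((x - 1) | (y - 1)) + 1;
}

static uint64_t round_down_u64(uint64_t x, uint64_t y)
{
	return x & ~(y - 1);
}
```

So the remaining disagreement is about which name reads better, not about
what the macros compute.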

Patch

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0700a161b80e..2dc9b565f1f1 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4777,6 +4777,87 @@  static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
 	return ret;
 }
 
+/*
+ * A helper to zero out all blocks inside range [@orig_start, @orig_end) of
+ * the target folio.
+ * The target folio is the one containing the head or tail block of the range
+ * [@from, @end].
+ *
+ * This is a special case for fs block size < page size, where even if the range
+ * [from, end] is already block aligned, we can still have blocks beyond EOF being
+ * polluted by memory mapped write.
+ */
+static int zero_range_folio(struct btrfs_inode *inode, loff_t from, loff_t end,
+			    u64 orig_start, u64 orig_end,
+			    enum btrfs_truncate_where where)
+{
+	const u32 blocksize = inode->root->fs_info->sectorsize;
+	struct address_space *mapping = inode->vfs_inode.i_mapping;
+	struct extent_io_tree *io_tree = &inode->io_tree;
+	struct extent_state *cached_state = NULL;
+	struct btrfs_ordered_extent *ordered;
+	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
+			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
+	struct folio *folio;
+	u64 block_start;
+	u64 block_end;
+	u64 clamp_start;
+	u64 clamp_end;
+	int ret = 0;
+
+	/*
+	 * The target head/tail block is already block aligned.
+	 * If block size >= PAGE_SIZE, meaning it's impossible to mmap a
+	 * page containing anything other than the target block.
+	 */
+	if (blocksize >= PAGE_SIZE)
+		return 0;
+again:
+	folio = filemap_lock_folio(mapping, index);
+	/* No folio present. */
+	if (IS_ERR(folio))
+		return 0;
+
+	if (!folio_test_uptodate(folio)) {
+		ret = btrfs_read_folio(NULL, folio);
+		folio_lock(folio);
+		if (folio->mapping != mapping) {
+			folio_unlock(folio);
+			folio_put(folio);
+			goto again;
+		}
+		if (!folio_test_uptodate(folio)) {
+			ret = -EIO;
+			goto out_unlock;
+		}
+	}
+	folio_wait_writeback(folio);
+
+	clamp_start = max_t(u64, folio_pos(folio), orig_start);
+	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
+			  orig_end);
+	block_start = round_down(clamp_start, block_size);
+	block_end = round_up(clamp_end + 1, block_size) - 1;
+	lock_extent(io_tree, block_start, block_end, &cached_state);
+	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_end);
+	if (ordered) {
+		unlock_extent(io_tree, block_start, block_end, &cached_state);
+		folio_unlock(folio);
+		folio_put(folio);
+		btrfs_start_ordered_extent(ordered);
+		btrfs_put_ordered_extent(ordered);
+		goto again;
+	}
+	folio_zero_range(folio, clamp_start - folio_pos(folio),
+			 clamp_end - clamp_start + 1);
+	unlock_extent(io_tree, block_start, block_end, &cached_state);
+
+out_unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return ret;
+}
+
 /*
  * Read, zero a chunk and write a block.
  *
@@ -4819,7 +4900,7 @@  int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
 		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
 
 	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
-		goto out;
+		return zero_range_folio(inode, from, end, orig_start, orig_end, where);
 
 	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
 		block_start = round_down(from, blocksize);