| Message ID | bcbfce0ff7e21bbfed2484b1457e560edf78020d.1642436805.git.fdmanana@suse.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | btrfs: fix too long loop when defragging a 1 byte file |
On Mon, Jan 17, 2022 at 04:28:29PM +0000, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> When attempting to defrag a file with a single byte, we can end up in a
> too long loop, which is nearly infinite because at btrfs_defrag_file()
> we end up with the variable last_byte assigned with a value of
> 18446744073709551615 (which is (u64)-1). The problem comes from the fact
> we end up doing:
>
>     last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
>
> So if last_byte was assigned 0, which is i_size - 1, we underflow and
> end up with the value 18446744073709551615.
>
> This is trivial to reproduce and the following script triggers it:
>
>     $ cat test.sh
>     #!/bin/bash
>
>     DEV=/dev/sdj
>     MNT=/mnt/sdj
>
>     mkfs.btrfs -f $DEV
>     mount $DEV $MNT
>
>     echo -n "X" > $MNT/foobar
>
>     btrfs filesystem defragment $MNT/foobar
>
>     umount $MNT
>
> So fix this by not decrementing last_byte by 1 before doing the sector
> size round up. Also, to make it easier to follow, make the round up right
> after computing last_byte.
>
> Fixes: 7b508037d4cac3 ("btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file()")
> Reported-by: Anthony Ruhier <aruhier@mailbox.org>
> Link: https://lore.kernel.org/linux-btrfs/0a269612-e43f-da22-c5bc-b34b1b56ebe8@mailbox.org/
> Signed-off-by: Filipe Manana <fdmanana@suse.com>

Thank you very much, I'll try to get it out ASAP so it can be released in next week's stable update.
On 2022/1/18 00:28, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> When attempting to defrag a file with a single byte, we can end up in a
> too long loop, which is nearly infinite because at btrfs_defrag_file()
> we end up with the variable last_byte assigned with a value of
> 18446744073709551615 (which is (u64)-1). The problem comes from the fact
> we end up doing:
>
>     last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
>
> So if last_byte was assigned 0, which is i_size - 1, we underflow and
> end up with the value 18446744073709551615.
>
> This is trivial to reproduce and the following script triggers it:
>
>     $ cat test.sh
>     #!/bin/bash
>
>     DEV=/dev/sdj
>     MNT=/mnt/sdj
>
>     mkfs.btrfs -f $DEV
>     mount $DEV $MNT
>
>     echo -n "X" > $MNT/foobar
>
>     btrfs filesystem defragment $MNT/foobar
>
>     umount $MNT
>
> So fix this by not decrementing last_byte by 1 before doing the sector
> size round up. Also, to make it easier to follow, make the round up right
> after computing last_byte.
>
> Fixes: 7b508037d4cac3 ("btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file()")
> Reported-by: Anthony Ruhier <aruhier@mailbox.org>
> Link: https://lore.kernel.org/linux-btrfs/0a269612-e43f-da22-c5bc-b34b1b56ebe8@mailbox.org/
> Signed-off-by: Filipe Manana <fdmanana@suse.com>

Reviewed-by: Qu Wenruo <wqu@suse.com>

Thanks,
Qu

> ---
>  fs/btrfs/ioctl.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index a5bd6926f7ff..6ad2bc2e5af3 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1518,12 +1518,16 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
>
>  	if (range->start + range->len > range->start) {
>  		/* Got a specific range */
> -		last_byte = min(isize, range->start + range->len) - 1;
> +		last_byte = min(isize, range->start + range->len);
>  	} else {
>  		/* Defrag until file end */
> -		last_byte = isize - 1;
> +		last_byte = isize;
>  	}
>
> +	/* Align the range */
> +	cur = round_down(range->start, fs_info->sectorsize);
> +	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
> +
>  	/*
>  	 * If we were not given a ra, allocate a readahead context. As
>  	 * readahead is just an optimization, defrag will work without it so
> @@ -1536,10 +1540,6 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
>  		file_ra_state_init(ra, inode->i_mapping);
>  	}
>
> -	/* Align the range */
> -	cur = round_down(range->start, fs_info->sectorsize);
> -	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
> -
>  	while (cur < last_byte) {
>  		u64 cluster_end;
>
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index a5bd6926f7ff..6ad2bc2e5af3 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1518,12 +1518,16 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,

 	if (range->start + range->len > range->start) {
 		/* Got a specific range */
-		last_byte = min(isize, range->start + range->len) - 1;
+		last_byte = min(isize, range->start + range->len);
 	} else {
 		/* Defrag until file end */
-		last_byte = isize - 1;
+		last_byte = isize;
 	}

+	/* Align the range */
+	cur = round_down(range->start, fs_info->sectorsize);
+	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
+
 	/*
 	 * If we were not given a ra, allocate a readahead context. As
 	 * readahead is just an optimization, defrag will work without it so
@@ -1536,10 +1540,6 @@ int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
 		file_ra_state_init(ra, inode->i_mapping);
 	}

-	/* Align the range */
-	cur = round_down(range->start, fs_info->sectorsize);
-	last_byte = round_up(last_byte, fs_info->sectorsize) - 1;
-
 	while (cur < last_byte) {
 		u64 cluster_end;