[RFC,0/1] adapting btrfs/237 to work with the new reclaim algorithm

Message ID: 20220819115337.35681-1-p.raghav@samsung.com

Message

Pankaj Raghav Aug. 19, 2022, 11:53 a.m. UTC
Hi,
Since commit 3687fcb0752a ("btrfs: zoned: make auto-reclaim less aggressive"),
the reclaim algorithm has been changed to trigger auto-reclaim only once the
fs used size is above a certain threshold. This change breaks the btrfs/237
test.

I tried to adapt the test by doing the following:
- Write a small file first
- Write a big file that increases the disk usage to be more than the
  reclaim threshold
- Delete the big file to trigger reclaim
- Ensure the small file is relocated and the space used by the big file
  is reclaimed.

My test case works properly for small ZNS drives but not for bigger
sized drives in QEMU. When I use a drive with a size of 100G, not all
zones that were used by the big file are correctly reclaimed.
Either I am not setting up the test correctly or there is something
wrong with how reclaim works for zoned devices.

I created a simple script to reproduce the scenario instead of running
the test. Please adapt $DEV and $big_file_size to the drive size. As I am
setting bg_reclaim_threshold to 51, $big_file_size should be at least 51%
of the drive size.

```
DEV=nvme0n3
DEV_PATH=/dev/$DEV
big_file_size=2500M

# Zoned btrfs needs a zone-aware I/O scheduler on the device.
echo "mq-deadline" > /sys/block/$DEV/queue/scheduler

# Start from a clean state: reset all zones and make a fresh filesystem.
umount /mnt/scratch
blkzone reset $DEV_PATH
mkfs.btrfs -f -d single -m single $DEV_PATH > /dev/null
mount -t btrfs $DEV_PATH /mnt/scratch
uuid=$(btrfs fi show $DEV_PATH | grep 'uuid' | awk '{print $NF}')

# Lower the reclaim threshold so the big file below is enough to cross it.
echo "51" > /sys/fs/btrfs/$uuid/bg_reclaim_threshold

# Small file that should survive and be relocated during reclaim.
fio --filename=/mnt/scratch/test2 --size=1M --rw=write --bs=4k \
	--name=btrfs_zoned > /dev/null
btrfs fi sync /mnt/scratch

echo "Open zones before big file transfer:"
blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l

# Big file that pushes the used space above the reclaim threshold.
fio --filename=/mnt/scratch/test1 --size=$big_file_size --rw=write --bs=4k \
	--ioengine=io_uring --name=btrfs_zoned > /dev/null
btrfs fi sync /mnt/scratch

echo "Open zones before removing the file:"
blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l

# Deleting the big file should trigger auto-reclaim of its block groups.
rm /mnt/scratch/test1
btrfs fi sync /mnt/scratch

echo "Going to sleep. Removed the file"
sleep 30

echo "Open zones after reclaim:"
blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
```

I am getting the following output in QEMU:

- 5GB ZNS drive with 128MB zone size (and cap) and it is working as
  expected:

```
Open zones before big file transfer:
4
Open zones before removing the file:
23
Going to sleep. Removed the file
Open zones after reclaim:
4
```

- 100GB ZNS drive with 128MB zone size (and cap) and it is **not
  working** as expected:

```
Open zones before big file transfer:
4
Open zones before removing the file:
455
Going to sleep. Removed the file
Open zones after reclaim:
411
```

Only partial reclaim happens for bigger drives. The problem with that is, if
I do another fio transfer, the drive returns ENOSPC before its actual
capacity is reached, as most of the zones have not been reclaimed and are
basically in an unusable state.
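To see what reclaim left behind, it helps to break the zone report down by
condition; a quick sketch (the "zcond: N(xx)" format and the two-letter
condition tags are assumptions about blkzone's report output):

```
# Count zones per condition (e.g. em = empty, fu = full) to see how many
# zones remain unreclaimed after the big file is deleted.
blkzone report /dev/nvme0n3 | \
	awk -F'zcond:' 'NF > 1 {
		split($2, a, "("); split(a[2], b, ")"); count[b[1]]++
	} END { for (c in count) print c, count[c] }'
```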

Is there a limit on how many bgs can be reclaimed?

Let me know if I am doing something wrong in the test or if it is an
actual issue.

Pankaj Raghav (1):
  btrfs/237: adapt the test to work with the new reclaim algorithm

 tests/btrfs/237 | 80 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 57 insertions(+), 23 deletions(-)

Comments

Johannes Thumshirn Aug. 22, 2022, 2:29 p.m. UTC | #1
On 19.08.22 13:53, Pankaj Raghav wrote:
> Hi,
> Since 3687fcb0752a ("btrfs: zoned: make auto-reclaim less aggressive")
> commit, reclaim algorithm has been changed to trigger auto-reclaim once
> the fs used size is more than a certain threshold. This change breaks
> 237 test.
> 
> I tried to adapt the test by doing the following:
> - Write a small file first
> - Write a big file that increases the disk usage to be more than the
>   reclaim threshold
> - Delete the big file to trigger threshold
> - Ensure the small file is relocated and the space used by the big file
>   is reclaimed.
> 
> My test case works properly for small ZNS drives but not for bigger
> sized drives in QEMU. When I use a drive with a size of 100G, not all
> zones that were used by the big file are correctly reclaimed.
> Either I am not setting up the test correctly or there is something
> wrong on how reclaim works for zoned devices.
> 
> I created a simple script to reproduce the scenario instead of running
> the test. Please adapt the $DEV and $big_file_size based on the drive
> size. As I am setting the bg_reclaim_threshold to be 51, $big_file_size
> should be at least 51% of the drive size.
> 
> ```
> DEV=nvme0n3
> DEV_PATH=/dev/$DEV
> big_file_size=2500M
> 
> echo "mq-deadline" > /sys/block/$DEV/queue/scheduler
> umount /mnt/scratch
> blkzone reset $DEV_PATH
> mkfs.btrfs -f -d single -m single $DEV_PATH > /dev/null;  mount -t btrfs $DEV_PATH \
> 	/mnt/scratch
> uuid=$(btrfs fi show $DEV_PATH | grep 'uuid' | awk '{print $NF}')
> 
> echo "51" > /sys/fs/btrfs/$uuid/bg_reclaim_threshold
> 
> fio --filename=/mnt/scratch/test2 --size=1M --rw=write --bs=4k \
> 	--name=btrfs_zoned > /dev/null
> btrfs fi sync /mnt/scratch
> 
> echo "Open zones before big file trasfer:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> 
> fio --filename=/mnt/scratch/test1 --size=$big_file_size --rw=write --bs=4k \
> 	--ioengine=io_uring --name=btrfs_zoned > /dev/null
> btrfs fi sync /mnt/scratch
> 
> echo "Open zones before removing the file:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> rm /mnt/scratch/test1
> btrfs fi sync /mnt/scratch
> 
> echo "Going to sleep. Removed the file"
> sleep 30
> 
> echo "Open zones after reclaim:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> ```
> 
> I am getting the following output in QEMU:
> 
> - 5GB ZNS drive with 128MB zone size (and cap) and it is working as
>   expected:
> 
> ```
> Open zones before big file transfer:
> 4
> Open zones before removing the file:
> 23
> Going to sleep. Removed the file
> Open zones after reclaim:
> 4
> ```
> 
> - 100GB ZNS drive with 128MB zone size (and cap) and it is **not
>   working** as expected:
> 
> ```
> Open zones before big file transfer:
> 4
> Open zones before removing the file:
> 455
> Going to sleep. Removed the file
> Open zones after reclaim:
> 411
> ```
> 
> Only partial reclaim is happening for bigger sized drives. The issue
> with that is, if I do another FIO transfer, the drive spits out ENOSPC
> before its actual capacity is reached as most of the zones have not been
> reclaimed back and are basically in an unusable state.
> 
> Is there a limit on how many bgs can be reclaimed?
> 
> Let me know if I am doing something wrong in the test or if it is an
> actual issue.

Can you try setting max_active_zones to 0? I have the feeling it's yet
another (or perhaps already known, Naohiro should know that) issue with
MAZ (max active zones) handling.
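For reference, the limits the block layer reports for the device can be
checked via sysfs (a quick sketch; the attribute paths are assumptions):

```
# Limits as seen by the kernel for the zoned device; 0 means no limit.
cat /sys/block/nvme0n3/queue/max_active_zones
cat /sys/block/nvme0n3/queue/max_open_zones
```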
Pankaj Raghav Aug. 23, 2022, 11:46 a.m. UTC | #2
On 2022-08-22 16:29, Johannes Thumshirn wrote:
>>
>> Only partial reclaim is happening for bigger sized drives. The issue
>> with that is, if I do another FIO transfer, the drive spits out ENOSPC
>> before its actual capacity is reached as most of the zones have not been
>> reclaimed back and are basically in an unusable state.
>>
>> Is there a limit on how many bgs can be reclaimed?
>>
>> Let me know if I am doing something wrong in the test or if it is an
>> actual issue.
> 
> Can you try setting max_active_zones to 0? I have the feeling it's yet 
> another (or perhaps already known, Naohiro should know that) issue with
> MAZ handling.

max_active_zones is already set to 0 (the QEMU default). I also changed the
QEMU backing image format from qcow to raw, and I still see the same partial
reclaim issue with a 100G drive.
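For reference, a minimal sketch of how such a zoned namespace can be set up
in QEMU (the zoned.* property names are assumptions based on QEMU's NVMe
zoned emulation; adjust paths and sizes to your setup):

```
# Hypothetical QEMU fragment: 100G zoned namespace, 128M zone size/capacity,
# no active/open zone limits.
qemu-system-x86_64 ... \
	-drive file=zns-100g.raw,id=nvme2,format=raw,if=none \
	-device nvme,id=nvme-ctrl0,serial=deadbeef \
	-device nvme-ns,drive=nvme2,bus=nvme-ctrl0,nsid=1,zoned=true,zoned.zone_size=128M,zoned.zone_capacity=128M,zoned.max_active=0,zoned.max_open=0
```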

I tried the same test on a 100G drive with a 1G zone size, and it works as
expected:

```
root@zns-btrfs-simple-zns:/data# ./reclaim_script.sh
Open zones before big file transfer:
4
Open zones before removing the file:
59
Going to sleep. Removed the file
Open zones after reclaim:
4
```

I am not 100% sure what is causing this issue of partial reclaim when the
number of zones is higher.
Johannes Thumshirn Dec. 5, 2022, 7:56 a.m. UTC | #3
On 19.08.22 13:53, Pankaj Raghav wrote:
> Hi,
> Since 3687fcb0752a ("btrfs: zoned: make auto-reclaim less aggressive")
> commit, reclaim algorithm has been changed to trigger auto-reclaim once
> the fs used size is more than a certain threshold. This change breaks
> 237 test.
> 
> I tried to adapt the test by doing the following:
> - Write a small file first
> - Write a big file that increases the disk usage to be more than the
>   reclaim threshold
> - Delete the big file to trigger threshold
> - Ensure the small file is relocated and the space used by the big file
>   is reclaimed.
> 
> My test case works properly for small ZNS drives but not for bigger
> sized drives in QEMU. When I use a drive with a size of 100G, not all
> zones that were used by the big file are correctly reclaimed.
> Either I am not setting up the test correctly or there is something
> wrong on how reclaim works for zoned devices.
> 
> I created a simple script to reproduce the scenario instead of running
> the test. Please adapt the $DEV and $big_file_size based on the drive
> size. As I am setting the bg_reclaim_threshold to be 51, $big_file_size
> should be at least 51% of the drive size.
> 
> ```
> DEV=nvme0n3
> DEV_PATH=/dev/$DEV
> big_file_size=2500M
> 
> echo "mq-deadline" > /sys/block/$DEV/queue/scheduler
> umount /mnt/scratch
> blkzone reset $DEV_PATH
> mkfs.btrfs -f -d single -m single $DEV_PATH > /dev/null;  mount -t btrfs $DEV_PATH \
> 	/mnt/scratch
> uuid=$(btrfs fi show $DEV_PATH | grep 'uuid' | awk '{print $NF}')
> 
> echo "51" > /sys/fs/btrfs/$uuid/bg_reclaim_threshold
> 
> fio --filename=/mnt/scratch/test2 --size=1M --rw=write --bs=4k \
> 	--name=btrfs_zoned > /dev/null
> btrfs fi sync /mnt/scratch
> 
> echo "Open zones before big file trasfer:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> 
> fio --filename=/mnt/scratch/test1 --size=$big_file_size --rw=write --bs=4k \
> 	--ioengine=io_uring --name=btrfs_zoned > /dev/null
> btrfs fi sync /mnt/scratch
> 
> echo "Open zones before removing the file:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> rm /mnt/scratch/test1
> btrfs fi sync /mnt/scratch
> 
> echo "Going to sleep. Removed the file"
> sleep 30
> 
> echo "Open zones after reclaim:"
> blkzone report $DEV_PATH | grep -v -e em -e nw | wc -l
> ```
> 
> I am getting the following output in QEMU:
> 
> - 5GB ZNS drive with 128MB zone size (and cap) and it is working as
>   expected:
> 
> ```
> Open zones before big file transfer:
> 4
> Open zones before removing the file:
> 23
> Going to sleep. Removed the file
> Open zones after reclaim:
> 4
> ```
> 
> - 100GB ZNS drive with 128MB zone size (and cap) and it is **not
>   working** as expected:
> 
> ```
> Open zones before big file transfer:
> 4
> Open zones before removing the file:
> 455
> Going to sleep. Removed the file
> Open zones after reclaim:
> 411
> ```
> 
> Only partial reclaim is happening for bigger sized drives. The issue
> with that is, if I do another FIO transfer, the drive spits out ENOSPC
> before its actual capacity is reached as most of the zones have not been
> reclaimed back and are basically in an unusable state.
> 
> Is there a limit on how many bgs can be reclaimed?
> 
> Let me know if I am doing something wrong in the test or if it is an
> actual issue.
> 
> Pankaj Raghav (1):
>   btrfs/237: adapt the test to work with the new reclaim algorithm
> 
>  tests/btrfs/237 | 80 +++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 57 insertions(+), 23 deletions(-)
> 

Btw, whatever happened to this patch?
Pankaj Raghav Dec. 5, 2022, 2:53 p.m. UTC | #4
Hi Johannes,

> Btw, whatever happened to this patch?

As I said before, I had trouble getting reclaim to fully work for a 100G
drive size, and I asked if you could reproduce the same on your end. I did
not get any reply to that.

I wanted to discuss what I was seeing with you during ALPSS, but we never
got around to it!

Regards,
Pankaj

On 2022-08-23 13:46, Pankaj Raghav wrote:
> On 2022-08-22 16:29, Johannes Thumshirn wrote:
>>>
>>> Only partial reclaim is happening for bigger sized drives. The issue
>>> with that is, if I do another FIO transfer, the drive spits out ENOSPC
>>> before its actual capacity is reached as most of the zones have not been
>>> reclaimed back and are basically in an unusable state.
>>>
>>> Is there a limit on how many bgs can be reclaimed?
>>>
>>> Let me know if I am doing something wrong in the test or if it is an
>>> actual issue.
>>
>> Can you try setting max_active_zones to 0? I have the feeling it's yet 
>> another (or perhaps already known, Naohiro should know that) issue with
>> MAZ handling.
> 
> The Max active zones is set to 0 (QEMU defaults to 0). I also changed the
> backing image format of QEMU from qcow to raw, and still the same issue of
> partial reclaim for a drive size of 100G.
> 
> I tried the same test in a 100G drive with 1G zone size, and it is working
> as expected.
> 
> root@zns-btrfs-simple-zns:/data# ./reclaim_script.sh
> Open zones before big file transfer:
> 4
> Open zones before removing the file:
> 59
> Going to sleep. Removed the file
> Open zones after reclaim:
> 4
> 
> I am not 100% sure what is causing this issue of partial reclaim when the
> number of zones is higher.
Johannes Thumshirn Dec. 5, 2022, 4:04 p.m. UTC | #5
On 05.12.22 15:53, Pankaj Raghav wrote:
> Hi Johannes,
> 
>> Btw, whatever happened to this patch?
> 
> As I said before, I had trouble reproducing reclaim for 100G drive size,
> and asked if you could reproduce the same on your end. I did not get any
> reply to that.
> 
> I wanted to discuss with you what I was seeing during ALPSS, but we never
> got around to it!

Ah right! I'll try to reproduce it on my end as well.

But even with that one open problem, the change makes the test pass again on
my other setups.

So I think it's still an improvement over the status quo. Can you maybe
resend it, so it's on Zorro's list again?

Thanks,
	Johannes
Pankaj Raghav Dec. 7, 2022, 4:01 p.m. UTC | #6
On Mon, Dec 05, 2022 at 04:04:08PM +0000, Johannes Thumshirn wrote:
> On 05.12.22 15:53, Pankaj Raghav wrote:
> > Hi Johannes,
> > 
> >> Btw, whatever happened to this patch?
> > 
> > As I said before, I had trouble reproducing reclaim for 100G drive size,
> > and asked if you could reproduce the same on your end. I did not get any
> > reply to that.
> > 
> > I wanted to discuss with you what I was seeing during ALPSS, but we never
> > got around to it!
> 
> Ah right! I'll try to reproduce it on my end as well.
> 
> But even with that one problem it makes the test pass again on my other setups.
> 
> So I think it's still an improvement to the status quo. Can you maybe resend it,
> so it's again on Zorro's list?

That sounds good. I will test it again and resend it next week. Thanks!
Pankaj Raghav Dec. 13, 2022, 1:35 p.m. UTC | #7
Hi Johannes,
  I gave my test another try, and now it fails in all cases. This commit by
Boris changed the reclaim behavior to be less aggressive:

https://lore.kernel.org/linux-btrfs/977bdffbf57cca3ee6541efa1563167d4d282b08.1665701210.git.boris@bur.io/

It looks like I need to change the test to cater to the current behavior.

The current reclaim algorithm is consistent across all sizes, unlike before.

I will change the test to do the following:
- Write a small file
- Write a big file that crosses the reclaim limit
- Delete the big file
- Check that **only** the block group that contained the small file is
reclaimed, and the small file is relocated to a new block group.

Let me know if the flow of the test case is correct.
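A rough sketch of what the relocation check could look like (the fiemap
parsing here is an assumption; the actual test would use the fstests
helpers):

```
# Record the small file's physical location before and after reclaim and
# verify that it moved, i.e. it was relocated to a new block group.
phys_before=$(xfs_io -c "fiemap -v" /mnt/scratch/test2 | awk 'NR == 3 { print $3 }')

rm /mnt/scratch/test1
btrfs fi sync /mnt/scratch
sleep 30

phys_after=$(xfs_io -c "fiemap -v" /mnt/scratch/test2 | awk 'NR == 3 { print $3 }')
if [ "$phys_before" != "$phys_after" ]; then
	echo "small file was relocated"
fi
```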

On 2022-12-05 17:04, Johannes Thumshirn wrote:
> On 05.12.22 15:53, Pankaj Raghav wrote:
>> Hi Johannes,
>>
>>> Btw, whatever happened to this patch?
>>
>> As I said before, I had trouble reproducing reclaim for 100G drive size,
>> and asked if you could reproduce the same on your end. I did not get any
>> reply to that.
>>
>> I wanted to discuss with you what I was seeing during ALPSS, but we never
>> got around to it!
> 
> Ah right! I'll try to reproduce it on my end as well.
> 
> But even with that one problem it makes the test pass again on my other setups.
> 
> So I think it's still an improvement to the status quo. Can you maybe resend it,
> so it's again on Zorro's list?
> 
> Thanks,
> 	Johannes
> 
>