[RFC,3/3] btrfs: zoned: kick cleaner kthread if low on space

Message ID 20240328-hans-v1-3-4cd558959407@kernel.org (mailing list archive)
State New, archived
Series: btrfs: zoned: reclaim block-groups more aggressively

Commit Message

Johannes Thumshirn March 28, 2024, 1:56 p.m. UTC
From: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Kick the cleaner kthread on chunk allocation if we're slowly running out
of usable space on a zoned filesystem.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 fs/btrfs/zoned.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Damien Le Moal March 28, 2024, 11:06 p.m. UTC | #1
On 3/28/24 22:56, Johannes Thumshirn wrote:
> From: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> 
> Kick the cleaner kthread on chunk allocation if we're slowly running out
> of usable space on a zoned filesystem.
> 
> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> ---
>  fs/btrfs/zoned.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index fb8707f4cab5..25c1a17873db 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -1040,6 +1040,7 @@ int btrfs_reset_sb_log_zones(struct block_device *bdev, int mirror)
>  u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
>  				 u64 hole_end, u64 num_bytes)
>  {
> +	struct btrfs_fs_info *fs_info = device->fs_info;
>  	struct btrfs_zoned_device_info *zinfo = device->zone_info;
>  	const u8 shift = zinfo->zone_size_shift;
>  	u64 nzones = num_bytes >> shift;
> @@ -1051,6 +1052,11 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
>  	ASSERT(IS_ALIGNED(hole_start, zinfo->zone_size));
>  	ASSERT(IS_ALIGNED(num_bytes, zinfo->zone_size));
>  
> +	if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags) &&
> +	    btrfs_zoned_should_reclaim(fs_info)) {
> +		wake_up_process(fs_info->cleaner_kthread);
> +	}

Nit: no need for the curly brackets.
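
I.e. with the braces dropped, the hunk would read (untested, just
illustrating the kernel style for single-statement bodies):

	if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags) &&
	    btrfs_zoned_should_reclaim(fs_info))
		wake_up_process(fs_info->cleaner_kthread);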

> +
>  	while (pos < hole_end) {
>  		begin = pos >> shift;
>  		end = begin + nzones;
>
Boris Burkov April 2, 2024, 5:09 p.m. UTC | #2
On Thu, Mar 28, 2024 at 02:56:33PM +0100, Johannes Thumshirn wrote:
> From: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> 
> Kick the cleaner kthread on chunk allocation if we're slowly running out
> of usable space on a zoned filesystem.

I'm really excited about this and think it probably makes sense on
not-zoned as well.

Have you found that this is fast enough to help real allocations that
are in trouble? I'd be worried that it's a lot faster than waiting 30
seconds but still not urgent enough to save the day for someone trying
to allocate right now.

Not a blocking request for this patch, which I think is a definite
improvement, but we could do something like add a pass to find_free_extent
(before allocating a fresh BG?!?) that blocks on reclaim running if we
have kicked it (or errors out in a way that signals to the outer loop
that we can wait for reclaim, if needed for locking or whatever).
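
Very roughly, something like this (completely untested sketch; the
waitqueue below doesn't exist today and is invented purely for
illustration, though fs_info->reclaim_bgs is the existing list of
block groups queued for reclaim):

	/*
	 * Hypothetical: kick the cleaner and block until it has drained
	 * the queued block groups, instead of immediately falling back
	 * to allocating a fresh block group.
	 */
	wake_up_process(fs_info->cleaner_kthread);
	wait_event(fs_info->reclaim_bgs_done_wq,	/* invented waitqueue */
		   list_empty_careful(&fs_info->reclaim_bgs));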

David Sterba April 4, 2024, 7:45 p.m. UTC | #3
On Thu, Mar 28, 2024 at 02:56:33PM +0100, Johannes Thumshirn wrote:
> From: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> 
> Kick the cleaner kthread on chunk allocation if we're slowly running out
> of usable space on a zoned filesystem.

For zoned mode it makes more sense; as Boris noted, it could be done for
non-zoned too, but maybe based on additional criteria, as reusing the
space is easier there.
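
Something along these lines as a criterion, maybe (untested; both the
helper and the threshold are made up for illustration):

	/*
	 * Hypothetical non-zoned criterion: only wake the cleaner once
	 * unallocated space falls below a threshold.
	 */
	if (btrfs_unallocated_bytes(fs_info) < SZ_1G)
		wake_up_process(fs_info->cleaner_kthread);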

Also, the cleaner does several things, so it may start doing subvolume
cleaning or defragmentation at the worst time.

> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> ---
>  fs/btrfs/zoned.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index fb8707f4cab5..25c1a17873db 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -1040,6 +1040,7 @@ int btrfs_reset_sb_log_zones(struct block_device *bdev, int mirror)
>  u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
>  				 u64 hole_end, u64 num_bytes)
>  {
> +	struct btrfs_fs_info *fs_info = device->fs_info;
>  	struct btrfs_zoned_device_info *zinfo = device->zone_info;
>  	const u8 shift = zinfo->zone_size_shift;
>  	u64 nzones = num_bytes >> shift;
> @@ -1051,6 +1052,11 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
>  	ASSERT(IS_ALIGNED(hole_start, zinfo->zone_size));
>  	ASSERT(IS_ALIGNED(num_bytes, zinfo->zone_size));
>  
> +	if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags) &&
> +	    btrfs_zoned_should_reclaim(fs_info)) {
> +		wake_up_process(fs_info->cleaner_kthread);

You can drop the BTRFS_FS_CLEANER_RUNNING condition; this is not a
performance-sensitive path where the wake-up of an already running
process would hurt.
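
I.e. the hunk would then simply be (untested):

	if (btrfs_zoned_should_reclaim(fs_info))
		wake_up_process(fs_info->cleaner_kthread);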

> +	}
> +
>  	while (pos < hole_end) {
>  		begin = pos >> shift;
>  		end = begin + nzones;

Patch

diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index fb8707f4cab5..25c1a17873db 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1040,6 +1040,7 @@ int btrfs_reset_sb_log_zones(struct block_device *bdev, int mirror)
 u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
 				 u64 hole_end, u64 num_bytes)
 {
+	struct btrfs_fs_info *fs_info = device->fs_info;
 	struct btrfs_zoned_device_info *zinfo = device->zone_info;
 	const u8 shift = zinfo->zone_size_shift;
 	u64 nzones = num_bytes >> shift;
@@ -1051,6 +1052,11 @@ u64 btrfs_find_allocatable_zones(struct btrfs_device *device, u64 hole_start,
 	ASSERT(IS_ALIGNED(hole_start, zinfo->zone_size));
 	ASSERT(IS_ALIGNED(num_bytes, zinfo->zone_size));
 
+	if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags) &&
+	    btrfs_zoned_should_reclaim(fs_info)) {
+		wake_up_process(fs_info->cleaner_kthread);
+	}
+
 	while (pos < hole_end) {
 		begin = pos >> shift;
 		end = begin + nzones;