
[v2] btrfs: wait for actual caching progress during allocation

Message ID 5b2f035fed89ad01183916d606e5735ffc2cf9c1.1689970027.git.josef@toxicpanda.com (mailing list archive)
State New, archived
Series [v2] btrfs: wait for actual caching progress during allocation

Commit Message

Josef Bacik July 21, 2023, 8:09 p.m. UTC
Recently we've been having mysterious hangs while running generic/475 on
the CI system.  This turned out to be something like this

Task 1
dmsetup suspend --nolockfs
-> __dm_suspend
 -> dm_wait_for_completion
  -> dm_wait_for_bios_completion
   -> Unable to complete because of IOs on a plug in Task 2

Task 2
wb_workfn
-> wb_writeback
 -> blk_start_plug
  -> writeback_sb_inodes
   -> Infinite loop unable to make an allocation

Task 3
cache_block_group
->read_extent_buffer_pages
 ->Waiting for IO to complete that can't be submitted because Task 1
   suspended the DM device

The problem here is that we need Task 2 to actually go to sleep
(schedule) for the blk plug to flush.  Normally this would happen: we
wait for the block group caching to finish (Task 3), and that sleep
would result in the block plug flushing.
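
For reference, this is the plugging pattern in play (an illustrative
fragment, not code from this patch): bios submitted inside a plugged
section sit on the submitting task's plug list and are flushed either by
blk_finish_plug() or implicitly when the task goes to sleep in
schedule().  The submit_some_bios() helper below is hypothetical.

	struct blk_plug plug;

	blk_start_plug(&plug);
	submit_some_bios();	/* hypothetical; bios queue up on the plug */
	blk_finish_plug(&plug);	/* explicit flush point; sleeping in
				 * schedule() also flushes the plug */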

However, if there's enough free space available from the current caching
to satisfy the allocation, we won't actually wait for the caching to
complete.  The problem is that this check only verifies that we have
enough space in total, not that we can actually make the allocation.  In
this particular case we were trying to allocate 9MiB, and we had 10MiB
of free space, but we didn't have 9MiB of contiguous space to allocate,
so the allocation failed and we looped.
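
To make the contiguity point concrete, here is a standalone toy example
(userspace C, not btrfs code; the individual extent sizes are made up,
only the 9MiB requested / 10MiB free totals come from the case above):

#include <stdbool.h>
#include <stdio.h>

#define MIB (1024ULL * 1024ULL)

/* Toy model of a block group's free space: discontiguous free extents. */
static const unsigned long long free_extents[] = { 4 * MIB, 3 * MIB, 3 * MIB };

int main(void)
{
	unsigned long long total = 0, largest = 0, want = 9 * MIB;
	size_t i;

	for (i = 0; i < sizeof(free_extents) / sizeof(free_extents[0]); i++) {
		total += free_extents[i];
		if (free_extents[i] > largest)
			largest = free_extents[i];
	}

	/* What the old wait condition effectively checked ... */
	bool enough_total = total >= want;
	/* ... versus what the allocator actually needs: one contiguous chunk. */
	bool can_allocate = largest >= want;

	printf("total %lluMiB, largest extent %lluMiB, want %lluMiB\n",
	       total / MIB, largest / MIB, want / MIB);
	printf("total >= want: %s, contiguous fit: %s\n",
	       enough_total ? "yes" : "no", can_allocate ? "yes" : "no");
	return 0;
}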

We specifically don't advance through the FFE loop stages while we're
still finding caching block groups, because we don't want to allocate
new block groups just because caching is in progress, so we short
circuit the normal loop once we hit LOOP_CACHING_WAIT and have found a
caching block group.

This is normally fine, except in this particular case where the caching
thread can't make progress because the dm device has been suspended.

Fix this by waiting not only for the free space to be >= the amount of
space we want to allocate, but also for some caching progress to have
been made since we started waiting.  This keeps us from busy looping
when the caching is taking a while but the block group still
theoretically has enough space for us to allocate from, and it fixes
this particular case by forcing us to actually sleep and wait for
forward progress, which will flush the plug.
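
In code terms (condensed from the patch below): the waiter snapshots a
progress counter before sleeping, and the caching thread bumps that
counter on every wakeup tick, so unless caching has finished entirely
the wait can only complete after the caching thread has made at least
one more pass:

	/* waiter: btrfs_wait_block_group_cache_progress() */
	progress = atomic_read(&caching_ctl->progress);

	wait_event(caching_ctl->wait, btrfs_block_group_done(cache) ||
		   (progress != atomic_read(&caching_ctl->progress) &&
		    (cache->free_space_ctl->free_space >= num_bytes)));

	/* caching thread: load_extent_tree_free() wakeup tick */
	if (wakeup) {
		atomic_inc(&caching_ctl->progress);
		wake_up(&caching_ctl->wait);
	}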

With this fix we're no longer hanging with generic/475.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
v1->v2:
- Reworked the fix to just make sure we're waiting for forward progress on each
  run through btrfs_wait_block_group_cache_progress() instead of further
  complicating the free extent finding code.
- Dropped the comments patch and the RAID patch.

 fs/btrfs/block-group.c | 17 +++++++++++++++--
 fs/btrfs/block-group.h |  1 +
 2 files changed, 16 insertions(+), 2 deletions(-)

Comments

Boris Burkov July 21, 2023, 8:28 p.m. UTC | #1
On Fri, Jul 21, 2023 at 04:09:43PM -0400, Josef Bacik wrote:
> Recently we've been having mysterious hangs while running generic/475 on
> the CI system.  This turned out to be something like this
> 
> Task 1
> dmsetup suspend --nolockfs
> -> __dm_suspend
>  -> dm_wait_for_completion
>   -> dm_wait_for_bios_completion
>    -> Unable to complete because of IOs on a plug in Task 2
> 
> Task 2
> wb_workfn
> -> wb_writeback
>  -> blk_start_plug
>   -> writeback_sb_inodes
>    -> Infinite loop unable to make an allocation
> 
> Task 3
> cache_block_group
> ->read_extent_buffer_pages
>  ->Waiting for IO to complete that can't be submitted because Task 1
>    suspended the DM device
> 
> The problem here is that we need Task 2 to actually go to sleep
> (schedule) for the blk plug to flush.  Normally this would happen: we
> wait for the block group caching to finish (Task 3), and that sleep
> would result in the block plug flushing.
> 
> However, if there's enough free space available from the current caching
> to satisfy the allocation, we won't actually wait for the caching to
> complete.  The problem is that this check only verifies that we have
> enough space in total, not that we can actually make the allocation.  In
> this particular case we were trying to allocate 9MiB, and we had 10MiB
> of free space, but we didn't have 9MiB of contiguous space to allocate,
> so the allocation failed and we looped.
> 
> We specifically don't advance through the FFE loop stages while we're
> still finding caching block groups, because we don't want to allocate
> new block groups just because caching is in progress, so we short
> circuit the normal loop once we hit LOOP_CACHING_WAIT and have found a
> caching block group.
> 
> This is normally fine, except in this particular case where the caching
> thread can't make progress because the dm device has been suspended.
> 
> Fix this by waiting not only for the free space to be >= the amount of
> space we want to allocate, but also for some caching progress to have
> been made since we started waiting.  This keeps us from busy looping
> when the caching is taking a while but the block group still
> theoretically has enough space for us to allocate from, and it fixes
> this particular case by forcing us to actually sleep and wait for
> forward progress, which will flush the plug.
> 
> With this fix we're no longer hanging with generic/475.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>

Reviewed-by: Boris Burkov <boris@bur.io>

I like this fix a lot! I had missed that we already block on a form of
caching progress.

One question, though:
is the logic in release_block_group() to clear the retry bools enough to
make sure we keep waiting on this through multiple passes so that we
don't just fix the most common case while failing to guarantee that we
truly always wait for forward progress?

> ---
> v1->v2:
> - Reworked the fix to just make sure we're waiting for forward progress on each
>   run through btrfs_wait_block_group_cache_progress() instead of further
>   complicating the free extent finding code.
> - Dropped the comments patch and the RAID patch.
> 
>  fs/btrfs/block-group.c | 17 +++++++++++++++--
>  fs/btrfs/block-group.h |  1 +
>  2 files changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 91d38f38c1e7..a127865f49f9 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -441,13 +441,23 @@ void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
>  					   u64 num_bytes)
>  {
>  	struct btrfs_caching_control *caching_ctl;
> +	int progress;
>  
>  	caching_ctl = btrfs_get_caching_control(cache);
>  	if (!caching_ctl)
>  		return;
>  
> +	/*
> +	 * We've already failed to allocate from this block group, so even if
> +	 * there's enough space in the block group it isn't contiguous enough to
> +	 * allow for an allocation, so wait for at least the next wakeup tick,
> +	 * or for the thing to be done.
> +	 */
> +	progress = atomic_read(&caching_ctl->progress);
> +
>  	wait_event(caching_ctl->wait, btrfs_block_group_done(cache) ||
> -		   (cache->free_space_ctl->free_space >= num_bytes));
> +		   (progress != atomic_read(&caching_ctl->progress) &&
> +		    (cache->free_space_ctl->free_space >= num_bytes)));
>  
>  	btrfs_put_caching_control(caching_ctl);
>  }
> @@ -808,8 +818,10 @@ static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
>  
>  			if (total_found > CACHING_CTL_WAKE_UP) {
>  				total_found = 0;
> -				if (wakeup)
> +				if (wakeup) {
> +					atomic_inc(&caching_ctl->progress);
>  					wake_up(&caching_ctl->wait);
> +				}
>  			}
>  		}
>  		path->slots[0]++;
> @@ -922,6 +934,7 @@ int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
>  	init_waitqueue_head(&caching_ctl->wait);
>  	caching_ctl->block_group = cache;
>  	refcount_set(&caching_ctl->count, 2);
> +	atomic_set(&caching_ctl->progress, 0);
>  	btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
>  
>  	spin_lock(&cache->lock);
> diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
> index cdf674a5dbad..ae1cf4429cca 100644
> --- a/fs/btrfs/block-group.h
> +++ b/fs/btrfs/block-group.h
> @@ -90,6 +90,7 @@ struct btrfs_caching_control {
>  	wait_queue_head_t wait;
>  	struct btrfs_work work;
>  	struct btrfs_block_group *block_group;
> +	atomic_t progress;
>  	refcount_t count;
>  };
>  
> -- 
> 2.41.0
>
Josef Bacik July 25, 2023, 1:16 p.m. UTC | #2
On Fri, Jul 21, 2023 at 01:28:17PM -0700, Boris Burkov wrote:
> On Fri, Jul 21, 2023 at 04:09:43PM -0400, Josef Bacik wrote:
> > Recently we've been having mysterious hangs while running generic/475 on
> > the CI system.  This turned out to be something like this
> > 
> > Task 1
> > dmsetup suspend --nolockfs
> > -> __dm_suspend
> >  -> dm_wait_for_completion
> >   -> dm_wait_for_bios_completion
> >    -> Unable to complete because of IOs on a plug in Task 2
> > 
> > Task 2
> > wb_workfn
> > -> wb_writeback
> >  -> blk_start_plug
> >   -> writeback_sb_inodes
> >    -> Infinite loop unable to make an allocation
> > 
> > Task 3
> > cache_block_group
> > ->read_extent_buffer_pages
> >  ->Waiting for IO to complete that can't be submitted because Task 1
> >    suspended the DM device
> > 
> > The problem here is that we need Task 2 to actually go to sleep
> > (schedule) for the blk plug to flush.  Normally this would happen: we
> > wait for the block group caching to finish (Task 3), and that sleep
> > would result in the block plug flushing.
> > 
> > However, if there's enough free space available from the current caching
> > to satisfy the allocation, we won't actually wait for the caching to
> > complete.  The problem is that this check only verifies that we have
> > enough space in total, not that we can actually make the allocation.  In
> > this particular case we were trying to allocate 9MiB, and we had 10MiB
> > of free space, but we didn't have 9MiB of contiguous space to allocate,
> > so the allocation failed and we looped.
> > 
> > We specifically don't advance through the FFE loop stages while we're
> > still finding caching block groups, because we don't want to allocate
> > new block groups just because caching is in progress, so we short
> > circuit the normal loop once we hit LOOP_CACHING_WAIT and have found a
> > caching block group.
> > 
> > This is normally fine, except in this particular case where the caching
> > thread can't make progress because the dm device has been suspended.
> > 
> > Fix this by waiting not only for the free space to be >= the amount of
> > space we want to allocate, but also for some caching progress to have
> > been made since we started waiting.  This keeps us from busy looping
> > when the caching is taking a while but the block group still
> > theoretically has enough space for us to allocate from, and it fixes
> > this particular case by forcing us to actually sleep and wait for
> > forward progress, which will flush the plug.
> > 
> > With this fix we're no longer hanging with generic/475.
> > 
> > Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> 
> Reviewed-by: Boris Burkov <boris@bur.io>
> 
> I like this fix a lot! I had missed we already block on a form of
> caching progress.
> 
> One question, though:
> is the logic in release_block_group() to clear the retry bools enough to
> make sure we keep waiting on this through multiple passes so that we
> don't just fix the most common case while failing to guarantee that we
> truly always wait for forward progress?

Yes, because of the bit in the update loop where, if we find a block group
that was caching, we return and keep searching, so we won't actually
progress the loop until after all of the block groups have been
completely cached.
Thanks,

Josef
David Sterba July 26, 2023, 5:35 p.m. UTC | #3
On Fri, Jul 21, 2023 at 04:09:43PM -0400, Josef Bacik wrote:
> Recently we've been having mysterious hangs while running generic/475 on
> the CI system.  This turned out to be something like this
> 
> Task 1
> dmsetup suspend --nolockfs
> -> __dm_suspend
>  -> dm_wait_for_completion
>   -> dm_wait_for_bios_completion
>    -> Unable to complete because of IOs on a plug in Task 2
> 
> Task 2
> wb_workfn
> -> wb_writeback
>  -> blk_start_plug
>   -> writeback_sb_inodes
>    -> Infinite loop unable to make an allocation
> 
> Task 3
> cache_block_group
> ->read_extent_buffer_pages
>  ->Waiting for IO to complete that can't be submitted because Task 1
>    suspended the DM device
> 
> The problem here is that we need Task 2 to actually go to sleep
> (schedule) for the blk plug to flush.  Normally this would happen: we
> wait for the block group caching to finish (Task 3), and that sleep
> would result in the block plug flushing.
> 
> However, if there's enough free space available from the current caching
> to satisfy the allocation, we won't actually wait for the caching to
> complete.  The problem is that this check only verifies that we have
> enough space in total, not that we can actually make the allocation.  In
> this particular case we were trying to allocate 9MiB, and we had 10MiB
> of free space, but we didn't have 9MiB of contiguous space to allocate,
> so the allocation failed and we looped.
> 
> We specifically don't advance through the FFE loop stages while we're
> still finding caching block groups, because we don't want to allocate
> new block groups just because caching is in progress, so we short
> circuit the normal loop once we hit LOOP_CACHING_WAIT and have found a
> caching block group.
> 
> This is normally fine, except in this particular case where the caching
> thread can't make progress because the dm device has been suspended.
> 
> Fix this by waiting not only for the free space to be >= the amount of
> space we want to allocate, but also for some caching progress to have
> been made since we started waiting.  This keeps us from busy looping
> when the caching is taking a while but the block group still
> theoretically has enough space for us to allocate from, and it fixes
> this particular case by forcing us to actually sleep and wait for
> forward progress, which will flush the plug.
> 
> With this fix we're no longer hanging with generic/475.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
> v1->v2:
> - Reworked the fix to just make sure we're waiting for forward progress on each
>   run through btrfs_wait_block_group_cache_progress() instead of further
>   complicating the free extent finding code.
> - Dropped the comments patch and the RAID patch.

Added to misc-next, thanks.

Patch

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 91d38f38c1e7..a127865f49f9 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -441,13 +441,23 @@  void btrfs_wait_block_group_cache_progress(struct btrfs_block_group *cache,
 					   u64 num_bytes)
 {
 	struct btrfs_caching_control *caching_ctl;
+	int progress;
 
 	caching_ctl = btrfs_get_caching_control(cache);
 	if (!caching_ctl)
 		return;
 
+	/*
+	 * We've already failed to allocate from this block group, so even if
+	 * there's enough space in the block group it isn't contiguous enough to
+	 * allow for an allocation, so wait for at least the next wakeup tick,
+	 * or for the thing to be done.
+	 */
+	progress = atomic_read(&caching_ctl->progress);
+
 	wait_event(caching_ctl->wait, btrfs_block_group_done(cache) ||
-		   (cache->free_space_ctl->free_space >= num_bytes));
+		   (progress != atomic_read(&caching_ctl->progress) &&
+		    (cache->free_space_ctl->free_space >= num_bytes)));
 
 	btrfs_put_caching_control(caching_ctl);
 }
@@ -808,8 +818,10 @@  static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
 
 			if (total_found > CACHING_CTL_WAKE_UP) {
 				total_found = 0;
-				if (wakeup)
+				if (wakeup) {
+					atomic_inc(&caching_ctl->progress);
 					wake_up(&caching_ctl->wait);
+				}
 			}
 		}
 		path->slots[0]++;
@@ -922,6 +934,7 @@  int btrfs_cache_block_group(struct btrfs_block_group *cache, bool wait)
 	init_waitqueue_head(&caching_ctl->wait);
 	caching_ctl->block_group = cache;
 	refcount_set(&caching_ctl->count, 2);
+	atomic_set(&caching_ctl->progress, 0);
 	btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
 
 	spin_lock(&cache->lock);
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index cdf674a5dbad..ae1cf4429cca 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -90,6 +90,7 @@  struct btrfs_caching_control {
 	wait_queue_head_t wait;
 	struct btrfs_work work;
 	struct btrfs_block_group *block_group;
+	atomic_t progress;
 	refcount_t count;
 };