
[2/5] btrfs: delayed refs pre-flushing should only run the heads we have

Message ID 20200313211220.148772-3-josef@toxicpanda.com (mailing list archive)
State New, archived
Series Fix up some stupid delayed ref flushing behaviors

Commit Message

Josef Bacik March 13, 2020, 9:12 p.m. UTC
Previously our delayed ref running used the total number of items as the
count of items to run.  However we changed that to the number of heads to
run with the delayed_refs_rsv, as generally we want to run all of the
operations for one bytenr.

But with btrfs_run_delayed_refs(trans, 0) we set our count to 2x the
number of items that we have.  This is generally fine, but if we have
some operation generating loads of delayed refs while we're doing this
pre-flushing in the transaction commit, we'll just spin forever doing
delayed refs.

Fix this by simply using the number of delayed ref heads we currently
have ready; that way we do not end up doing a lot of extra work that is
being generated by other threads.
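
To illustrate the arithmetic, here is a minimal userspace sketch of the
old vs. new budget (this is not kernel code, and the numbers below are
invented for the example):

	#include <stdio.h>

	int main(void)
	{
		/* Suppose 100 heads are ready, each carrying ~4 ref ops. */
		unsigned long num_heads_ready = 100;
		unsigned long num_entries = num_heads_ready * 4;

		/*
		 * Old budget: counted in items, but the run loop consumes
		 * one unit per *head*, so this allows up to 800 heads --
		 * far more than currently exist, leaving room to keep
		 * chewing through refs other threads generate meanwhile.
		 */
		unsigned long old_count = num_entries * 2;

		/* New budget: exactly the heads that exist right now. */
		unsigned long new_count = num_heads_ready;

		printf("heads ready: %lu, old budget: %lu, new budget: %lu\n",
		       num_heads_ready, old_count, new_count);
		return 0;
	}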

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/extent-tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Nikolay Borisov April 3, 2020, 2:31 p.m. UTC | #1
On 13.03.20, 23:12, Josef Bacik wrote:
> Previously our delayed ref running used the total number of items as the
> count of items to run.  However we changed that to the number of heads to
> run with the delayed_refs_rsv, as generally we want to run all of the
> operations for one bytenr.
> 
> But with btrfs_run_delayed_refs(trans, 0) we set our count to 2x the
> number of items that we have.  This is generally fine, but if we have
> some operation generating loads of delayed refs while we're doing this
> pre-flushing in the transaction commit, we'll just spin forever doing
> delayed refs.
> 
> Fix this by simply using the number of delayed ref heads we currently
> have ready; that way we do not end up doing a lot of extra work that is
> being generated by other threads.

Indeed there is a mismatch between delayed_refs->num_entries and what we
account in __btrfs_run_delayed_refs. In that function we count at
per-head granularity (and a head can include multiple delayed ref
operations), not per-ref-per-head. So this fix makes sense.
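
As a rough sketch of that granularity (a simplified model, not the
actual __btrfs_run_delayed_refs() loop):

	/* A head groups all pending ref ops for one bytenr. */
	struct ref_head {
		int nr_ref_ops;           /* can be several ops per head */
		struct ref_head *next;
	};

	static unsigned long run_heads(struct ref_head *head,
				       unsigned long count)
	{
		unsigned long heads_run = 0;

		while (head && count--) {
			/*
			 * All nr_ref_ops operations for this bytenr are
			 * processed here, yet only one unit of the budget
			 * is consumed -- so a budget expressed in
			 * num_entries over-counts by the ops-per-head
			 * factor.
			 */
			heads_run++;
			head = head->next;
		}
		return heads_run;
	}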

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/extent-tree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 8e5b49baad98..2925b3ad77a1 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -2196,7 +2196,7 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
>  
>  	delayed_refs = &trans->transaction->delayed_refs;
>  	if (count == 0)
> -		count = atomic_read(&delayed_refs->num_entries) * 2;
> +		count = delayed_refs->num_heads_ready;
>  
>  again:
>  #ifdef SCRAMBLE_DELAYED_REFS
>

Patch

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 8e5b49baad98..2925b3ad77a1 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2196,7 +2196,7 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 
 	delayed_refs = &trans->transaction->delayed_refs;
 	if (count == 0)
-		count = atomic_read(&delayed_refs->num_entries) * 2;
+		count = delayed_refs->num_heads_ready;
 
 again:
 #ifdef SCRAMBLE_DELAYED_REFS