Message ID | 4ff8bc928e3770828b2e272e817bc649bf98ab6d.1602863482.git.josef@toxicpanda.com
---|---
State | New, archived
Series | A variety of lock contention fixes
On 16.10.20 18:52, Josef Bacik wrote:
> Previously our delayed ref running used the total number of items as the
> count of items to run. However, we changed that to the number of heads to
> run with the delayed_refs_rsv, as generally we want to run all of the
> operations for one bytenr.
>
> But with btrfs_run_delayed_refs(trans, 0) we set our count to 2x the
> number of items that we have. This is generally fine, but if some
> operation generates loads of delayed refs while we're doing this
> pre-flushing in the transaction commit, we'll just spin forever doing
> delayed refs.
>
> Fix this by simply picking the number of delayed ref heads we currently
> have, so that we do not end up doing a lot of extra work that is being
> generated in other threads.
>
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> Reviewed-by: Nikolay Borisov <nborisov@suse.com>

Turns out this patch regresses on generic/371, so refrain from merging it for the time being.

<snip>
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 5fd60b13f4f8..a7f0a1480cd9 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2180,7 +2180,7 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,

 	delayed_refs = &trans->transaction->delayed_refs;
 	if (count == 0)
-		count = atomic_read(&delayed_refs->num_entries) * 2;
+		count = delayed_refs->num_heads_ready;

 again:
 #ifdef SCRAMBLE_DELAYED_REFS