Message ID | 20181203152038.21388-5-josef@toxicpanda.com (mailing list archive)
---|---
State | New, archived
Series | Delayed refs rsv
On 3.12.18 at 17:20, Josef Bacik wrote:
> From: Josef Bacik <jbacik@fb.com>
>
> We use this number to figure out how many delayed refs to run, but
> __btrfs_run_delayed_refs really only checks every time we need a new
> delayed ref head, so we always run at least one ref head completely no
> matter what the number of items on it. Fix the accounting to only be
> adjusted when we add/remove a ref head.

David,

I think this also warrants a forward-looking sentence stating that the
number is also going to be used to calculate the required number of
bytes in the delayed refs rsv. Something along the lines of:

In addition to using this number to limit the number of delayed refs
run, a future patch is also going to use it to calculate the amount of
space required for delayed refs space reservation.

>
> Reviewed-by: Nikolay Borisov <nborisov@suse.com>
> Signed-off-by: Josef Bacik <jbacik@fb.com>
> ---
>  fs/btrfs/delayed-ref.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
> index b3e4c9fcb664..48725fa757a3 100644
> --- a/fs/btrfs/delayed-ref.c
> +++ b/fs/btrfs/delayed-ref.c
> @@ -251,8 +251,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
>  	ref->in_tree = 0;
>  	btrfs_put_delayed_ref(ref);
>  	atomic_dec(&delayed_refs->num_entries);
> -	if (trans->delayed_ref_updates)
> -		trans->delayed_ref_updates--;
>  }
>
>  static bool merge_ref(struct btrfs_trans_handle *trans,
> @@ -467,7 +465,6 @@ static int insert_delayed_ref(struct btrfs_trans_handle *trans,
>  	if (ref->action == BTRFS_ADD_DELAYED_REF)
>  		list_add_tail(&ref->add_list, &href->ref_add_list);
>  	atomic_inc(&root->num_entries);
> -	trans->delayed_ref_updates++;
>  	spin_unlock(&href->lock);
>  	return ret;
> }
On Fri, Dec 07, 2018 at 03:01:45PM +0200, Nikolay Borisov wrote:
>
> On 3.12.18 at 17:20, Josef Bacik wrote:
> > From: Josef Bacik <jbacik@fb.com>
> >
> > We use this number to figure out how many delayed refs to run, but
> > __btrfs_run_delayed_refs really only checks every time we need a new
> > delayed ref head, so we always run at least one ref head completely no
> > matter what the number of items on it. Fix the accounting to only be
> > adjusted when we add/remove a ref head.
>
> David,
>
> I think this also warrants a forward-looking sentence stating that the
> number is also going to be used to calculate the required number of
> bytes in the delayed refs rsv. Something along the lines of:
>
> In addition to using this number to limit the number of delayed refs
> run, a future patch is also going to use it to calculate the amount of
> space required for delayed refs space reservation.

Added to the changelog, thanks.
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index b3e4c9fcb664..48725fa757a3 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -251,8 +251,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
 	ref->in_tree = 0;
 	btrfs_put_delayed_ref(ref);
 	atomic_dec(&delayed_refs->num_entries);
-	if (trans->delayed_ref_updates)
-		trans->delayed_ref_updates--;
 }
 
 static bool merge_ref(struct btrfs_trans_handle *trans,
@@ -467,7 +465,6 @@ static int insert_delayed_ref(struct btrfs_trans_handle *trans,
 	if (ref->action == BTRFS_ADD_DELAYED_REF)
 		list_add_tail(&ref->add_list, &href->ref_add_list);
 	atomic_inc(&root->num_entries);
-	trans->delayed_ref_updates++;
 	spin_unlock(&href->lock);
 	return ret;
 }