Message ID | 9a99c2204e431bc7a40bd1fb7c26ebb5fa741910.1621526221.git.dsterba@suse.com (mailing list archive)
---|---
State | New, archived
Series | Support resize and device delete cancel ops
On 5/21/21 8:06 AM, David Sterba wrote:
> Add try-lock for exclusive operation start to allow callers to do more
> checks. The same operation must already be running. The try-lock and
> unlock must pair and are a substitute for btrfs_exclop_start, thus it
> must also pair with btrfs_exclop_finish to release the exclop context.
>
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef
On 21/05/2021 20:06, David Sterba wrote:
> Add try-lock for exclusive operation start to allow callers to do more
> checks. The same operation must already be running. The try-lock and
> unlock must pair and are a substitute for btrfs_exclop_start, thus it
> must also pair with btrfs_exclop_finish to release the exclop context.
>
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/ctree.h |  3 +++
>  fs/btrfs/ioctl.c | 26 ++++++++++++++++++++++++++
>  2 files changed, 29 insertions(+)
>
> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
> index 3dfc32a3ebab..0dffc06b5ad4 100644
> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -3231,6 +3231,9 @@ void btrfs_update_ioctl_balance_args(struct btrfs_fs_info *fs_info,
>  			       struct btrfs_ioctl_balance_args *bargs);
>  bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
>  			enum btrfs_exclusive_operation type);
> +bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
> +				 enum btrfs_exclusive_operation type);
> +void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info);
>  void btrfs_exclop_finish(struct btrfs_fs_info *fs_info);
>
>  /* file.c */
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index c4e710ea08ba..cacd6ee17d8e 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -371,6 +371,32 @@ bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
>  	return ret;
>  }
>
> +/*
> + * Conditionally allow to enter the exclusive operation in case it's compatible
> + * with the running one. This must be paired with btrfs_exclop_start_unlock and
> + * btrfs_exclop_finish.
> + *
> + * Compatibility:
> + * - the same type is already running
> + * - not BTRFS_EXCLOP_NONE - this is intentionally incompatible and the caller
> + *   must check the condition first that would allow none -> @type
> + */
> +bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
> +				 enum btrfs_exclusive_operation type)
> +{
> +	spin_lock(&fs_info->super_lock);
> +	if (fs_info->exclusive_operation == type)
> +		return true;
> +
> +	spin_unlock(&fs_info->super_lock);
> +	return false;
> +}
> +

Sorry for the late comment.

Nit:
This function implements a conditional lock. But the function name
misleads to some operation similar to spin_trylock() or
mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?

Otherwise, looks good.

Reviewed-by: Anand Jain <anand.jain@oracle.com>

Thanks.

> +void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info)
> +{
> +	spin_unlock(&fs_info->super_lock);
> +}
> +
>  void btrfs_exclop_finish(struct btrfs_fs_info *fs_info)
>  {
>  	spin_lock(&fs_info->super_lock);
On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
> On 21/05/2021 20:06, David Sterba wrote:
> Nit:
> This function implements a conditional lock. But the function name
> misleads to some operation similar to spin_trylock() or
> mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?

The semantics is the same as spin_trylock so it's named like it. Try
lock is a conditional lock so I would not want to cause confusion by
using another naming scheme.

https://elixir.bootlin.com/linux/latest/source/include/linux/spinlock_api_smp.h#L86

static inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
	preempt_disable();
	if (do_raw_spin_trylock(lock)) {
		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
		return 1;
	}
	preempt_enable();
	return 0;
}

bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
				 enum btrfs_exclusive_operation type)
{
	spin_lock(&fs_info->super_lock);
	if (fs_info->exclusive_operation == type)
		return true;

	spin_unlock(&fs_info->super_lock);
	return false;
}

The code flow is the same.
On 28/05/2021 20:30, David Sterba wrote:
> On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
>> On 21/05/2021 20:06, David Sterba wrote:
>> Nit:
>> This function implements a conditional lock. But the function name
>> misleads to some operation similar to spin_trylock() or
>> mutex_trylock(). How about btrfs_exclop_start_cond_lock() instead?
>
> The semantics is same as spin_trylock so it's named like it. Try lock is
> a conditional lock so I would not want to cause confusion by using
> another naming scheme.
>
> https://elixir.bootlin.com/linux/latest/source/include/linux/spinlock_api_smp.h#L86
>
> static inline int __raw_spin_trylock(raw_spinlock_t *lock)
> {
> 	preempt_disable();
> 	if (do_raw_spin_trylock(lock)) {
> 		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
> 		return 1;
> 	}
> 	preempt_enable();
> 	return 0;
> }
>
> bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
> 				 enum btrfs_exclusive_operation type)
> {
> 	spin_lock(&fs_info->super_lock);
> 	if (fs_info->exclusive_operation == type)
> 		return true;
>
> 	spin_unlock(&fs_info->super_lock);
> 	return false;
> }
>
> The code flow is the same.

Oh. Ok, now I understand your POV.

I looked up the document to check what it says, and it matched with
my understanding too, as below.

https://www.kernel.org/doc/htmldocs/kernel-locking/trylock-functions.html
-----
::
They can be used if you need no access to the data protected with the
lock when some other thread is holding the lock.
::
----

Mainly ...trylocks are non-blocking/non-waiting locks.

However, btrfs_exclop_start_try_lock() can be blocking. But as this is
trivial it should be ok.

Thanks for clarifying.

- Anand
On Sat, May 29, 2021 at 09:48:46PM +0800, Anand Jain wrote:
> On 28/05/2021 20:30, David Sterba wrote:
> > On Thu, May 27, 2021 at 03:43:46PM +0800, Anand Jain wrote:
> >> On 21/05/2021 20:06, David Sterba wrote:
> > The code flow is the same.
>
> Oh. Ok, now I understand your POV.
>
> I looked up the document to check what it says, and it matched with
> my understanding too, as below.
>
> https://www.kernel.org/doc/htmldocs/kernel-locking/trylock-functions.html
> -----
> ::
> They can be used if you need no access to the data protected with the
> lock when some other thread is holding the lock.
> ::
> ----
>
> Mainly ...trylocks are non-blocking/non-waiting locks.
>
> However, btrfs_exclop_start_try_lock() can be blocking.

I see, the blocking is there. It should be unnoticeable in practice as
it's locking only a few instructions. The semantics is slightly
different than a plain lock as it needs to check if the exclusive
operation is locked, not the lock itself.

If it had to be a true nonblocking trylock, it would have to check the
exclusive_operation value unlocked and either bail if it's different or
recheck it again under lock. This sounds more complicated than we need.
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 3dfc32a3ebab..0dffc06b5ad4 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3231,6 +3231,9 @@ void btrfs_update_ioctl_balance_args(struct btrfs_fs_info *fs_info,
 			       struct btrfs_ioctl_balance_args *bargs);
 bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 			enum btrfs_exclusive_operation type);
+bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
+				 enum btrfs_exclusive_operation type);
+void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info);
 void btrfs_exclop_finish(struct btrfs_fs_info *fs_info);
 
 /* file.c */
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index c4e710ea08ba..cacd6ee17d8e 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -371,6 +371,32 @@ bool btrfs_exclop_start(struct btrfs_fs_info *fs_info,
 	return ret;
 }
 
+/*
+ * Conditionally allow to enter the exclusive operation in case it's compatible
+ * with the running one. This must be paired with btrfs_exclop_start_unlock and
+ * btrfs_exclop_finish.
+ *
+ * Compatibility:
+ * - the same type is already running
+ * - not BTRFS_EXCLOP_NONE - this is intentionally incompatible and the caller
+ *   must check the condition first that would allow none -> @type
+ */
+bool btrfs_exclop_start_try_lock(struct btrfs_fs_info *fs_info,
+				 enum btrfs_exclusive_operation type)
+{
+	spin_lock(&fs_info->super_lock);
+	if (fs_info->exclusive_operation == type)
+		return true;
+
+	spin_unlock(&fs_info->super_lock);
+	return false;
+}
+
+void btrfs_exclop_start_unlock(struct btrfs_fs_info *fs_info)
+{
+	spin_unlock(&fs_info->super_lock);
+}
+
 void btrfs_exclop_finish(struct btrfs_fs_info *fs_info)
 {
 	spin_lock(&fs_info->super_lock);
Add try-lock for exclusive operation start to allow callers to do more
checks. The same operation must already be running. The try-lock and
unlock must pair and are a substitute for btrfs_exclop_start, thus it
must also pair with btrfs_exclop_finish to release the exclop context.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.h |  3 +++
 fs/btrfs/ioctl.c | 26 ++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)