Message ID | 20220901173516.702122-5-surenb@google.com (mailing list archive) |
---|---|
State | New |
Series | per-VMA locks proposal |
On Thu, Sep 01, 2022 at 10:34:52AM -0700, Suren Baghdasaryan wrote:
> Move mmap_lock assert function definitions up so that they can be used
> by other mmap_lock routines.
>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  include/linux/mmap_lock.h | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 96e113e23d04..e49ba91bb1f0 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
> 
>  #endif /* CONFIG_TRACING */
> 
> +static inline void mmap_assert_locked(struct mm_struct *mm)
> +{
> +	lockdep_assert_held(&mm->mmap_lock);
> +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);

These look redundant to me - maybe there's a reason the VM developers want
both, but I would drop the VM_BUG_ON() and just keep the
lockdep_assert_held(), since that's the standard way to write that
assertion.
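For concreteness, Kent's suggested simplification would look roughly like the
sketch below (illustrative only, not part of the patch under review):

#include <linux/lockdep.h>
#include <linux/mm_types.h>

/*
 * Hypothetical simplified assert relying solely on lockdep, which is the
 * conventional way to express "the caller must hold this lock".
 */
static inline void mmap_assert_locked(struct mm_struct *mm)
{
	lockdep_assert_held(&mm->mmap_lock);
}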
* Kent Overstreet <kent.overstreet@linux.dev> [220901 16:24]:
> On Thu, Sep 01, 2022 at 10:34:52AM -0700, Suren Baghdasaryan wrote:
> > Move mmap_lock assert function definitions up so that they can be used
> > by other mmap_lock routines.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  include/linux/mmap_lock.h | 24 ++++++++++++------------
> >  1 file changed, 12 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> > index 96e113e23d04..e49ba91bb1f0 100644
> > --- a/include/linux/mmap_lock.h
> > +++ b/include/linux/mmap_lock.h
> > @@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
> > 
> >  #endif /* CONFIG_TRACING */
> > 
> > +static inline void mmap_assert_locked(struct mm_struct *mm)
> > +{
> > +	lockdep_assert_held(&mm->mmap_lock);
> > +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
>
> These look redundant to me - maybe there's a reason the VM developers want
> both, but I would drop the VM_BUG_ON() and just keep the
> lockdep_assert_held(), since that's the standard way to write that
> assertion.

I think this is because VM_BUG_ON_MM() will give you a lot more
information and is a BUG_ON(), whereas lockdep_assert_held() does not
return a value and is a WARN_ON().

So they are partially redundant.
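To make the difference Liam describes concrete, here is the same check
annotated with the two failure modes (a paraphrase for illustration; the
function name is invented and this is not the literal macro expansion):

#include <linux/lockdep.h>
#include <linux/mm_types.h>
#include <linux/mmdebug.h>
#include <linux/rwsem.h>

static inline void mmap_assert_locked_annotated(struct mm_struct *mm)
{
	/* WARN_ON-style: prints a warning and a backtrace, then continues. */
	lockdep_assert_held(&mm->mmap_lock);

	/*
	 * BUG_ON-style: with CONFIG_DEBUG_VM this dumps the whole mm_struct
	 * (via dump_mm()) before killing the task, which is the extra
	 * information referred to above.
	 */
	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
}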
On Thu, Sep 1, 2022 at 1:51 PM Liam Howlett <liam.howlett@oracle.com> wrote:
>
> * Kent Overstreet <kent.overstreet@linux.dev> [220901 16:24]:
> > On Thu, Sep 01, 2022 at 10:34:52AM -0700, Suren Baghdasaryan wrote:
> > > Move mmap_lock assert function definitions up so that they can be used
> > > by other mmap_lock routines.
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > ---
> > >  include/linux/mmap_lock.h | 24 ++++++++++++------------
> > >  1 file changed, 12 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> > > index 96e113e23d04..e49ba91bb1f0 100644
> > > --- a/include/linux/mmap_lock.h
> > > +++ b/include/linux/mmap_lock.h
> > > @@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
> > > 
> > >  #endif /* CONFIG_TRACING */
> > > 
> > > +static inline void mmap_assert_locked(struct mm_struct *mm)
> > > +{
> > > +	lockdep_assert_held(&mm->mmap_lock);
> > > +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
> >
> > These look redundant to me - maybe there's a reason the VM developers want
> > both, but I would drop the VM_BUG_ON() and just keep the
> > lockdep_assert_held(), since that's the standard way to write that
> > assertion.
>
> I think this is because VM_BUG_ON_MM() will give you a lot more
> information and is a BUG_ON(), whereas lockdep_assert_held() does not
> return a value and is a WARN_ON().
>
> So they are partially redundant.

Yeah, and I do not intend to change the existing functionality in this
patchset. If needed we can post a separate patch removing the redundancy,
but from my experience debugging this code, VM_BUG_ON_MM() reports were
very useful.
On 2022-09-01 16:24:09 [-0400], Kent Overstreet wrote:
> > --- a/include/linux/mmap_lock.h
> > +++ b/include/linux/mmap_lock.h
> > @@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
> > 
> >  #endif /* CONFIG_TRACING */
> > 
> > +static inline void mmap_assert_locked(struct mm_struct *mm)
> > +{
> > +	lockdep_assert_held(&mm->mmap_lock);
> > +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
>
> These look redundant to me - maybe there's a reason the VM developers want
> both, but I would drop the VM_BUG_ON() and just keep the
> lockdep_assert_held(), since that's the standard way to write that
> assertion.

Exactly. rwsem_is_locked() returns true whenever the lock is "locked",
not necessarily by the caller. lockdep_assert_held() checks that the lock
is held by the caller - this is the important part.

Sebastian
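A contrived example of the distinction Sebastian is making (the helper name
is made up for illustration and does not exist in the kernel):

#include <linux/lockdep.h>
#include <linux/mm_types.h>
#include <linux/mmdebug.h>
#include <linux/rwsem.h>

/*
 * Hypothetical buggy helper whose caller forgot to take mmap_lock.
 * If some *other* task happens to hold mm->mmap_lock at this moment,
 * rwsem_is_locked() still returns true, so the VM_BUG_ON_MM() check can
 * silently pass. Lockdep tracks held locks per task, so
 * lockdep_assert_held() will still warn about the current task.
 */
static void broken_mm_helper(struct mm_struct *mm)
{
	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);	/* may not fire */
	lockdep_assert_held(&mm->mmap_lock);			/* warns */
}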
On Thu, Sep 1, 2022 at 11:23 PM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2022-09-01 16:24:09 [-0400], Kent Overstreet wrote:
> > > --- a/include/linux/mmap_lock.h
> > > +++ b/include/linux/mmap_lock.h
> > > @@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
> > > 
> > >  #endif /* CONFIG_TRACING */
> > > 
> > > +static inline void mmap_assert_locked(struct mm_struct *mm)
> > > +{
> > > +	lockdep_assert_held(&mm->mmap_lock);
> > > +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
> >
> > These look redundant to me - maybe there's a reason the VM developers want
> > both, but I would drop the VM_BUG_ON() and just keep the
> > lockdep_assert_held(), since that's the standard way to write that
> > assertion.
>
> Exactly. rwsem_is_locked() returns true whenever the lock is "locked",
> not necessarily by the caller. lockdep_assert_held() checks that the lock
> is held by the caller - this is the important part.

Ok, if at the end of the day there is a consensus that this redundancy
should be removed then I'll do that in a patch separate from this series.
Please note that in this patch I'm not changing these functions in any
way, just moving them.

> Sebastian
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 96e113e23d04..e49ba91bb1f0 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -60,6 +60,18 @@ static inline void __mmap_lock_trace_released(struct mm_struct *mm, bool write)
 
 #endif /* CONFIG_TRACING */
 
+static inline void mmap_assert_locked(struct mm_struct *mm)
+{
+	lockdep_assert_held(&mm->mmap_lock);
+	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
+}
+
+static inline void mmap_assert_write_locked(struct mm_struct *mm)
+{
+	lockdep_assert_held_write(&mm->mmap_lock);
+	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
+}
+
 static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
@@ -150,18 +162,6 @@ static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
 	up_read_non_owner(&mm->mmap_lock);
 }
 
-static inline void mmap_assert_locked(struct mm_struct *mm)
-{
-	lockdep_assert_held(&mm->mmap_lock);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
-}
-
-static inline void mmap_assert_write_locked(struct mm_struct *mm)
-{
-	lockdep_assert_held_write(&mm->mmap_lock);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
-}
-
 static inline int mmap_lock_is_contended(struct mm_struct *mm)
 {
 	return rwsem_is_contended(&mm->mmap_lock);
Move mmap_lock assert function definitions up so that they can be used
by other mmap_lock routines.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mmap_lock.h | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)
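The stated motivation - making the asserts usable by other mmap_lock
routines - means helpers defined later in mmap_lock.h (including the ones
this series adds) can now call them directly. A hypothetical example of such
a caller (the name and body are invented for illustration, not taken from
the series):

#include <linux/mm_types.h>
#include <linux/mmap_lock.h>
#include <linux/rwsem.h>

static inline void mmap_write_downgrade_checked(struct mm_struct *mm)
{
	/* Only legal if the caller really holds mmap_lock for write. */
	mmap_assert_write_locked(mm);
	downgrade_write(&mm->mmap_lock);
}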