
[1/8] mmap locking API: initial implementation as rwsem wrappers

Message ID: 20200326070236.235835-2-walken@google.com
State: New, archived
Series: Add a new mmap locking API wrapping mmap_sem calls

Commit Message

Michel Lespinasse March 26, 2020, 7:02 a.m. UTC
This change wraps the existing mmap_sem-related rwsem calls in a new
mmap locking API. There are two justifications for the new API:

- First, it provides an easy hooking point to instrument mmap_sem
  locking latencies independently of any other rwsems.

- Second, it may in the future serve as a starting point for replacing the rwsem
  implementation with a different one, such as range locks.
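
As an illustration (not part of this patch), a typical call site
conversion under the new API looks like this sketch:

        /* Before: callers name the rwsem directly. */
        down_read(&mm->mmap_sem);
        vma = find_vma(mm, addr);
        up_read(&mm->mmap_sem);

        /* After: callers go through the wrapper API. */
        mmap_read_lock(mm);
        vma = find_vma(mm, addr);
        mmap_read_unlock(mm);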

Signed-off-by: Michel Lespinasse <walken@google.com>
---
 include/linux/mm.h        |  1 +
 include/linux/mmap_lock.h | 59 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)
 create mode 100644 include/linux/mmap_lock.h

Comments

Jason Gunthorpe March 26, 2020, 5:56 p.m. UTC | #1
On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:

> +static inline bool mmap_is_locked(struct mm_struct *mm)
> +{
> +	return rwsem_is_locked(&mm->mmap_sem) != 0;
> +}

I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
better as lockdep expressions? Certainly when lockdep is enabled it
should be preferred, IMHO.

So, I think if inlines are to be introduced this should be something
similar to netdev's ASSERT_RTNL which seems to have worked well.

Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
lockdep_is_held as appropriate?
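
A rough sketch of what those could look like (the names follow the
suggestion above and are hypothetical, built on the lockdep_assert_held*()
helpers as they exist today):

        #ifdef CONFIG_LOCKDEP
        #define ASSERT_MMAP_SEM_HELD(mm)  lockdep_assert_held(&(mm)->mmap_sem)
        #define ASSERT_MMAP_SEM_READ(mm)  lockdep_assert_held_read(&(mm)->mmap_sem)
        #define ASSERT_MMAP_SEM_WRITE(mm) lockdep_assert_held_exclusive(&(mm)->mmap_sem)
        #else
        /* rwsem_is_locked() cannot tell read holders from write holders. */
        #define ASSERT_MMAP_SEM_HELD(mm)  VM_BUG_ON_MM(!rwsem_is_locked(&(mm)->mmap_sem), mm)
        #define ASSERT_MMAP_SEM_READ(mm)  ASSERT_MMAP_SEM_HELD(mm)
        #define ASSERT_MMAP_SEM_WRITE(mm) ASSERT_MMAP_SEM_HELD(mm)
        #endif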

Jason
Matthew Wilcox March 26, 2020, 6:06 p.m. UTC | #2
On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
> 
> > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > +{
> > +	return rwsem_is_locked(&mm->mmap_sem) != 0;
> > +}
> 
> I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> better as lockdep expressions? Certainly when lockdep is enabled it
> should be preferred, IMHO.
> 
> So, I think if inlines are to be introduced this should be something
> similar to netdev's ASSERT_RTNL which seems to have worked well.
> 
> Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> lockdep_is_held as appropriate?

I'd rather see lockdep_assert_held() used directly rather than have
a wrapper.  This API includes options to check for it being explicitly
held for read/write/any, which should be useful.
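
For example, directly at a call site (all of these compile away
unless CONFIG_LOCKDEP is enabled):

        lockdep_assert_held(&mm->mmap_sem);           /* held for read or write */
        lockdep_assert_held_read(&mm->mmap_sem);      /* held for read */
        lockdep_assert_held_exclusive(&mm->mmap_sem); /* held for write */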
Matthew Wilcox March 26, 2020, 6:09 p.m. UTC | #3
On Thu, Mar 26, 2020 at 11:06:21AM -0700, Matthew Wilcox wrote:
> On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> > On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
> > 
> > > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > > +{
> > > +	return rwsem_is_locked(&mm->mmap_sem) != 0;
> > > +}
> > 
> > I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> > better as lockdep expressions? Certainly when lockdep is enabled it
> > should be preferred, IMHO.
> > 
> > So, I think if inlines are to be introduced this should be something
> > similar to netdev's ASSERT_RTNL which seems to have worked well.
> > 
> > Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> > lockdep_is_held as appropriate?
> 
> I'd rather see lockdep_assert_held() used directly rather than have
> a wrapper.  This API includes options to check for it being explicitly
> held for read/write/any, which should be useful.

... oh, but that requires naming the lock, which we're trying to get
away from.

I guess we need a wrapper, but yes, by all means, let's level it up
and put the VM_BUG_ON inside the wrapper, as that means we get the
mm dumped everywhere, rather than just in the few places that use
VM_BUG_ON_MM instead of plain VM_BUG_ON.
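
For illustration, such a wrapper could look like the sketch below
(the mmap_assert_locked() name is hypothetical, not part of this
patch):

        static inline void mmap_assert_locked(struct mm_struct *mm)
        {
                lockdep_assert_held(&mm->mmap_sem);
                /* VM_BUG_ON_MM() dumps the whole mm_struct on failure. */
                VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
        }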
Michel Lespinasse March 26, 2020, 10:09 p.m. UTC | #4
I don't think we strongly need an API for such assertions at this
point. There are actually a number of them (mostly the lockdep form)
being handled in the last patch renaming the mmap_sem field.

If a new form of lock is introduced in the future, it is doable to
implement it in such a way that lockdep assertions will work on it (I
have that working in my range locking patchset). For a range lock you
would probably want to add a new API anyway so that the assert can
verify that the specific range is locked, but IMO there is no strong
justification for new assertion APIs until we get there.

If there is no opposition to replacing rwsem_is_locked with
lockdep_assert_held, then I think that is workable. mmap_is_locked()
only has 5 call sites, so that's not a very large change.
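
For illustration, the conversion at one of those call sites would be
a sketch like:

        /* Before: fires whenever the rwsem is not held at all. */
        VM_BUG_ON(!mmap_is_locked(mm));

        /* After: stronger check, but only with CONFIG_LOCKDEP enabled. */
        lockdep_assert_held(&mm->mmap_sem);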

On Thu, Mar 26, 2020 at 11:09 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Mar 26, 2020 at 11:06:21AM -0700, Matthew Wilcox wrote:
> > On Thu, Mar 26, 2020 at 02:56:44PM -0300, Jason Gunthorpe wrote:
> > > On Thu, Mar 26, 2020 at 12:02:29AM -0700, Michel Lespinasse wrote:
> > >
> > > > +static inline bool mmap_is_locked(struct mm_struct *mm)
> > > > +{
> > > > +	return rwsem_is_locked(&mm->mmap_sem) != 0;
> > > > +}
> > >
> > > I've been wondering if the various VM_BUG(rwsem_is_locked()) would be
> > > better as lockdep expressions? Certainly when lockdep is enabled it
> > > should be preferred, IMHO.
> > >
> > > So, I think if inlines are to be introduced this should be something
> > > similar to netdev's ASSERT_RTNL which seems to have worked well.
> > >
> > > Maybe ASSERT_MMAP_SEM_READ/WRITE/HELD() and do the VM_BUG or
> > > lockdep_is_held as appropriate?
> >
> > I'd rather see lockdep_assert_held() used directly rather than have
> > a wrapper.  This API includes options to check for it being explicitly
> > held for read/write/any, which should be useful.
>
> ... oh, but that requires naming the lock, which we're trying to get
> away from.
>
> I guess we need a wrapper, but yes, by all means, let's level it up
> and put the VM_BUG_ON inside the wrapper, as that means we get the
> mm dumped everywhere, rather than just in the few places that use
> VM_BUG_ON_MM instead of plain VM_BUG_ON.

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c54fb96cb1e6..2f13c9198999 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -15,6 +15,7 @@ 
 #include <linux/atomic.h>
 #include <linux/debug_locks.h>
 #include <linux/mm_types.h>
+#include <linux/mmap_lock.h>
 #include <linux/range.h>
 #include <linux/pfn.h>
 #include <linux/percpu-refcount.h>
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
new file mode 100644
index 000000000000..cffd25afe92b
--- /dev/null
+++ b/include/linux/mmap_lock.h
@@ -0,0 +1,59 @@ 
+#ifndef _LINUX_MMAP_LOCK_H
+#define _LINUX_MMAP_LOCK_H
+
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+	init_rwsem(&mm->mmap_sem);
+}
+
+static inline void mmap_write_lock(struct mm_struct *mm)
+{
+	down_write(&mm->mmap_sem);
+}
+
+static inline int mmap_write_lock_killable(struct mm_struct *mm)
+{
+	return down_write_killable(&mm->mmap_sem);
+}
+
+static inline bool mmap_write_trylock(struct mm_struct *mm)
+{
+	return down_write_trylock(&mm->mmap_sem) != 0;
+}
+
+static inline void mmap_write_unlock(struct mm_struct *mm)
+{
+	up_write(&mm->mmap_sem);
+}
+
+static inline void mmap_downgrade_write_lock(struct mm_struct *mm)
+{
+	downgrade_write(&mm->mmap_sem);
+}
+
+static inline void mmap_read_lock(struct mm_struct *mm)
+{
+	down_read(&mm->mmap_sem);
+}
+
+static inline int mmap_read_lock_killable(struct mm_struct *mm)
+{
+	return down_read_killable(&mm->mmap_sem);
+}
+
+static inline bool mmap_read_trylock(struct mm_struct *mm)
+{
+	return down_read_trylock(&mm->mmap_sem) != 0;
+}
+
+static inline void mmap_read_unlock(struct mm_struct *mm)
+{
+	up_read(&mm->mmap_sem);
+}
+
+static inline bool mmap_is_locked(struct mm_struct *mm)
+{
+	return rwsem_is_locked(&mm->mmap_sem) != 0;
+}
+
+#endif /* _LINUX_MMAP_LOCK_H */
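
For context (not part of the patch), a typical caller under the new
API might look like the following sketch, modeled on the
write-then-downgrade pattern used by the brk() syscall:

        if (mmap_write_lock_killable(mm))
                return -EINTR;
        /* ... modify the address space under the write lock ... */
        mmap_downgrade_write_lock(mm);
        /* ... fault in pages under the read lock ... */
        mmap_read_unlock(mm);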