[3/3] mm, notifier: Add a lockdep map for invalidate_range_start

Message ID 20181122165106.18238-4-daniel.vetter@ffwll.ch
State New
Series
  • RFC: mmu notifier debug checks

Commit Message

Daniel Vetter Nov. 22, 2018, 4:51 p.m. UTC
This is a similar idea to the fs_reclaim fake lockdep lock. It's
fairly easy to provoke a specific notifier to be run on a specific
range: just prep it, and then munmap() it.

A bit harder, but still doable, is to provoke the mmu notifiers for
all the various callchains that might lead to them. But both at the
same time is really hard to hit reliably, especially when you want to
exercise paths like direct reclaim or compaction, where it's not
easy to control what exactly will be unmapped.

By introducing a lockdep map to tie them all together we allow lockdep
to see a lot more dependencies, without having to actually hit them
in a single callchain while testing.

Aside: Since I typed this to test i915 mmu notifiers I've only rolled
this out for the invalidate_range_start callback. If there's
interest, we should probably roll this out to all of them. But my
understanding of core mm is seriously lacking, and I'm not clear on
whether we need a lockdep map for each callback, or whether some can
be shared.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: linux-mm@kvack.org
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 include/linux/mmu_notifier.h | 7 +++++++
 mm/mmu_notifier.c            | 7 +++++++
 2 files changed, 14 insertions(+)

Comments

Daniel Vetter Nov. 27, 2018, 7:49 a.m. UTC | #1
On Thu, Nov 22, 2018 at 05:51:06PM +0100, Daniel Vetter wrote:
> [commit message snipped, see above]

Any comments on this one here? This is really the main ingredient for
catching deadlocks in mmu notifier callbacks. The other two patches are
more the icing on the cake.

Thanks, Daniel

> [patch snipped, see above]
Chris Wilson Nov. 27, 2018, 4:49 p.m. UTC | #2
Quoting Daniel Vetter (2018-11-27 07:49:18)
> On Thu, Nov 22, 2018 at 05:51:06PM +0100, Daniel Vetter wrote:
> > [snip]
> > @@ -267,8 +271,11 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> >  static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> >                                 unsigned long start, unsigned long end)
> >  {
> > +     mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
> > +                   _RET_IP_);

Would not lock_acquire_shared() be more appropriate, i.e. treat this as
a rwsem_acquire_read()?
-Chris
Daniel Vetter Nov. 27, 2018, 5:28 p.m. UTC | #3
On Tue, Nov 27, 2018 at 5:50 PM Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> Quoting Daniel Vetter (2018-11-27 07:49:18)
> > On Thu, Nov 22, 2018 at 05:51:06PM +0100, Daniel Vetter wrote:
> > > [snip]
> > >  static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > >                                 unsigned long start, unsigned long end)
> > >  {
> > > +     mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
> > > +                   _RET_IP_);
>
> Would not lock_acquire_shared() be more appropriate, i.e. treat this as
> a rwsem_acquire_read()?

read lock critical sections can't create any dependencies against any
other read lock critical section of the same lock. Switching this to a
read lock would just render the annotation pointless (if you don't
include at least some write lock critical section somewhere, but I
have no idea where you'd do that). A read lock that you only ever take
for reading essentially doesn't do anything at all.

So not clear on why you're suggesting this?

It's the exact same idea as fs_reclaim: inserting a fake lock to tie
all possible callchains leading to a given function together with all
possible callchains from that function. Of course this is only valid
if all NxM combinations could happen in theory. For fs_reclaim that's
true because direct reclaim can pick anything it wants to
shrink/evict. For mmu notifiers that's true as long as we assume any
mmu notifier can be in use by any process, which only depends upon
sufficiently contrived/evil userspace.

I guess I could use lock_map_acquire/release() wrappers for this like
fs_reclaim, would be a bit more clear.
-Daniel
Chris Wilson Nov. 27, 2018, 5:33 p.m. UTC | #4
Quoting Daniel Vetter (2018-11-27 17:28:43)
> On Tue, Nov 27, 2018 at 5:50 PM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> >
> > Quoting Daniel Vetter (2018-11-27 07:49:18)
> > > > [snip]
> > > > @@ -267,8 +271,11 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > > >  static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > > >                                 unsigned long start, unsigned long end)
> > > >  {
> > > > +     mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
> > > > +                   _RET_IP_);
> >
> > Would not lock_acquire_shared() be more appropriate, i.e. treat this as
> > a rwsem_acquire_read()?
> 
> read lock critical sections can't create any dependencies against any
> other read lock critical section of the same lock. Switching this to a
> read lock would just render the annotation pointless (if you don't
> include at least some write lock critical section somewhere, but I
> have no idea where you'd do that). A read lock that you only ever take
> for reading essentially doesn't do anything at all.
> 
> So not clear on why you're suggesting this?

Just that it's not acting as a mutex, so emulating one looks wrong.
-Chris
Daniel Vetter Nov. 27, 2018, 5:39 p.m. UTC | #5
On Tue, Nov 27, 2018 at 05:33:58PM +0000, Chris Wilson wrote:
> Quoting Daniel Vetter (2018-11-27 17:28:43)
> > On Tue, Nov 27, 2018 at 5:50 PM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > >
> > > Quoting Daniel Vetter (2018-11-27 07:49:18)
> > > > On Thu, Nov 22, 2018 at 05:51:06PM +0100, Daniel Vetter wrote:
> > > > > This is a similar idea to the fs_reclaim fake lockdep lock. It's
> > > > > fairly easy to provoke a specific notifier to be run on a specific
> > > > > range: Just prep it, and then munmap() it.
> > > > >
> > > > > A bit harder, but still doable, is to provoke the mmu notifiers for
> > > > > all the various callchains that might lead to them. But both at the
> > > > > same time is really hard to reliable hit, especially when you want to
> > > > > exercise paths like direct reclaim or compaction, where it's not
> > > > > easy to control what exactly will be unmapped.
> > > > >
> > > > > By introducing a lockdep map to tie them all together we allow lockdep
> > > > > to see a lot more dependencies, without having to actually hit them
> > > > > in a single challchain while testing.
> > > > >
> > > > > Aside: Since I typed this to test i915 mmu notifiers I've only rolled
> > > > > this out for the invaliate_range_start callback. If there's
> > > > > interest, we should probably roll this out to all of them. But my
> > > > > undestanding of core mm is seriously lacking, and I'm not clear on
> > > > > whether we need a lockdep map for each callback, or whether some can
> > > > > be shared.
> > > > >
> > > > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > > > Cc: David Rientjes <rientjes@google.com>
> > > > > Cc: "Jérôme Glisse" <jglisse@redhat.com>
> > > > > Cc: Michal Hocko <mhocko@suse.com>
> > > > > Cc: "Christian König" <christian.koenig@amd.com>
> > > > > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > > > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > > > Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
> > > > > Cc: linux-mm@kvack.org
> > > > > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > >
> > > > Any comments on this one here? This is really the main ingredient for
> > > > catching deadlocks in mmu notifier callbacks. The other two patches are
> > > > more the icing on the cake.
> > > >
> > > > Thanks, Daniel
> > > >
> > > > > ---
> > > > >  include/linux/mmu_notifier.h | 7 +++++++
> > > > >  mm/mmu_notifier.c            | 7 +++++++
> > > > >  2 files changed, 14 insertions(+)
> > > > >
> > > > > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> > > > > index 9893a6432adf..a39ba218dbbe 100644
> > > > > --- a/include/linux/mmu_notifier.h
> > > > > +++ b/include/linux/mmu_notifier.h
> > > > > @@ -12,6 +12,10 @@ struct mmu_notifier_ops;
> > > > >
> > > > >  #ifdef CONFIG_MMU_NOTIFIER
> > > > >
> > > > > +#ifdef CONFIG_LOCKDEP
> > > > > +extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
> > > > > +#endif
> > > > > +
> > > > >  /*
> > > > >   * The mmu notifier_mm structure is allocated and installed in
> > > > >   * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
> > > > > @@ -267,8 +271,11 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > > > >  static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > > > >                                 unsigned long start, unsigned long end)
> > > > >  {
> > > > > +     mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
> > > > > +                   _RET_IP_);
> > >
> > > Would not lock_acquire_shared() be more appropriate, i.e. treat this as
> > > a rwsem_acquire_read()?
> > 
> > read lock critical sections can't create any dependencies against any
> > other read lock critical section of the same lock. Switching this to a
> > read lock would just render the annotation pointless (if you don't
> > include at least some write lock critical section somewhere, but I
> > have no idea where you'd do that). A read lock that you only ever take
> > for reading essentially doesn't do anything at all.
> > 
> > So not clear on why you're suggesting this?
> 
> Just that it's not acting as a mutex, so emulating one looks wrong.

Ok, I think switching to lock_map_acquire/release should address that.
-Daniel
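
[Editor's note: a sketch of what that follow-up might look like on top of
this patch. Untested; it assumes the fs_reclaim-style lock_map_acquire()/
lock_map_release() helpers from <linux/lockdep.h>, which take exclusive
ownership of the map just like the mutex_acquire()/mutex_release() pair
they replace, but read as an annotation rather than an emulated mutex.]

```diff
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 				  unsigned long start, unsigned long end)
 {
-	mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
-		      _RET_IP_);
+	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(mm))
 		__mmu_notifier_invalidate_range_start(mm, start, end, true);
-	mutex_release(&__mmu_notifier_invalidate_range_start_map, 1, _RET_IP_);
+	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
```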

Patch

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 9893a6432adf..a39ba218dbbe 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -12,6 +12,10 @@  struct mmu_notifier_ops;
 
 #ifdef CONFIG_MMU_NOTIFIER
 
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
 /*
  * The mmu notifier_mm structure is allocated and installed in
  * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
@@ -267,8 +271,11 @@  static inline void mmu_notifier_change_pte(struct mm_struct *mm,
 static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 				  unsigned long start, unsigned long end)
 {
+	mutex_acquire(&__mmu_notifier_invalidate_range_start_map, 0, 0,
+		      _RET_IP_);
 	if (mm_has_notifiers(mm))
 		__mmu_notifier_invalidate_range_start(mm, start, end, true);
+	mutex_release(&__mmu_notifier_invalidate_range_start_map, 1, _RET_IP_);
 }
 
 static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm,
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 4d282cfb296e..c6e797927376 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -23,6 +23,13 @@ 
 /* global SRCU for all MMs */
 DEFINE_STATIC_SRCU(srcu);
 
+#ifdef CONFIG_LOCKDEP
+struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
+	.name = "mmu_notifier_invalidate_range_start"
+};
+EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start_map);
+#endif
+
 /*
  * This function allows mmu_notifier::release callback to delay a call to
  * a function that will free appropriate resources. The function must be