mm/thp: fix call to mmu_notifier in set_pmd_migration_entry()

Message ID 20181012160953.5841-1-jglisse@redhat.com (mailing list archive)
State New, archived
Series: mm/thp: fix call to mmu_notifier in set_pmd_migration_entry()

Commit Message

Jerome Glisse Oct. 12, 2018, 4:09 p.m. UTC
From: Jérôme Glisse <jglisse@redhat.com>

Inside set_pmd_migration_entry() we are holding page table locks and
thus cannot sleep, so we cannot call invalidate_range_start/end().

So remove the calls to mmu_notifier_invalidate_range_start/end() and add
a call to mmu_notifier_invalidate_range(). Note that we are already
calling mmu_notifier_invalidate_range_start/end() inside the function
that calls set_pmd_migration_entry() (see try_to_unmap_one()).

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reported-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Nellans <dnellans@nvidia.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
---
 mm/huge_memory.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)
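
A sketch of the calling pattern the commit message describes (paraphrased,
not the literal mm/rmap.c code; the walk setup and the rest of the unmap
logic are elided):

	#include <linux/mm.h>
	#include <linux/mmu_notifier.h>
	#include <linux/rmap.h>
	#include <linux/swapops.h>

	/* Paraphrased shape of try_to_unmap_one(): the sleepable notifier
	 * pair brackets the whole walk, so code running under the page
	 * table lock may only use the non-sleeping notifier call. */
	static bool unmap_one_sketch(struct page *page,
				     struct vm_area_struct *vma,
				     unsigned long address)
	{
		struct page_vma_mapped_walk pvmw = {
			.page = page, .vma = vma, .address = address,
		};

		/* May sleep: no page table lock is held yet. */
		mmu_notifier_invalidate_range_start(vma->vm_mm, address,
						    address + HPAGE_PMD_SIZE);

		while (page_vma_mapped_walk(&pvmw)) {
			/* The walk holds the PMD/PTE spinlock here, so
			 * set_pmd_migration_entry() must not sleep... */
			set_pmd_migration_entry(&pvmw, page);
			/* ...and may only call the non-sleeping
			 * mmu_notifier_invalidate_range(). */
		}

		/* May sleep again: the walk has dropped its locks. */
		mmu_notifier_invalidate_range_end(vma->vm_mm, address,
						  address + HPAGE_PMD_SIZE);
		return true;
	}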

Comments

Zi Yan Oct. 12, 2018, 4:20 p.m. UTC | #1
On 12 Oct 2018, at 12:09, jglisse@redhat.com wrote:

> From: Jérôme Glisse <jglisse@redhat.com>
>
> Inside set_pmd_migration_entry() we are holding page table locks and
> thus we can not sleep so we can not call invalidate_range_start/end()
>
> So remove call to mmu_notifier_invalidate_range_start/end() and add
> call to mmu_notifier_invalidate_range(). Note that we are already
> calling mmu_notifier_invalidate_range_start/end() inside the function
> calling set_pmd_migration_entry() (see try_to_unmap_one()).
>
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Reported-by: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: David Nellans <dnellans@nvidia.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> ---
>  mm/huge_memory.c | 7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 533f9b00147d..93cb80fe12cb 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2885,9 +2885,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  	if (!(pvmw->pmd && !pvmw->pte))
>  		return;
>
> -	mmu_notifier_invalidate_range_start(mm, address,
> -			address + HPAGE_PMD_SIZE);
> -
>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>  	pmdval = *pvmw->pmd;
>  	pmdp_invalidate(vma, address, pvmw->pmd);
> @@ -2898,11 +2895,9 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  	if (pmd_soft_dirty(pmdval))
>  		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
>  	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
> +	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);
>  	page_remove_rmap(page, true);
>  	put_page(page);
> -
> -	mmu_notifier_invalidate_range_end(mm, address,
> -			address + HPAGE_PMD_SIZE);
>  }
>
>  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> -- 
> 2.17.2

Yes, these are the redundant calls to mmu_notifier_invalidate_range_start/end()
in set_pmd_migration_entry(). Thanks for the patch.

Fixes: 616b8371539a6 ("mm: thp: enable thp migration in generic path")

Reviewed-by: Zi Yan <zi.yan@cs.rutgers.edu>



--
Best Regards,
Yan Zi
Michal Hocko Oct. 12, 2018, 4:55 p.m. UTC | #2
On Fri 12-10-18 12:09:53, jglisse@redhat.com wrote:
> From: Jérôme Glisse <jglisse@redhat.com>
> 
> Inside set_pmd_migration_entry() we are holding page table locks and
> thus we can not sleep so we can not call invalidate_range_start/end()
> 
> So remove call to mmu_notifier_invalidate_range_start/end() and add
> call to mmu_notifier_invalidate_range(). Note that we are already
> calling mmu_notifier_invalidate_range_start/end() inside the function
> calling set_pmd_migration_entry() (see try_to_unmap_one()).
> 
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Reported-by: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: David Nellans <dnellans@nvidia.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>

Is this worth backporting to stable trees?

The patch looks good to me.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/huge_memory.c | 7 +------
>  1 file changed, 1 insertion(+), 6 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 533f9b00147d..93cb80fe12cb 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2885,9 +2885,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  	if (!(pvmw->pmd && !pvmw->pte))
>  		return;
>  
> -	mmu_notifier_invalidate_range_start(mm, address,
> -			address + HPAGE_PMD_SIZE);
> -
>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>  	pmdval = *pvmw->pmd;
>  	pmdp_invalidate(vma, address, pvmw->pmd);
> @@ -2898,11 +2895,9 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  	if (pmd_soft_dirty(pmdval))
>  		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
>  	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
> +	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);
>  	page_remove_rmap(page, true);
>  	put_page(page);
> -
> -	mmu_notifier_invalidate_range_end(mm, address,
> -			address + HPAGE_PMD_SIZE);
>  }
>  
>  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> -- 
> 2.17.2
Jerome Glisse Oct. 12, 2018, 5:05 p.m. UTC | #3
On Fri, Oct 12, 2018 at 06:55:48PM +0200, Michal Hocko wrote:
> On Fri 12-10-18 12:09:53, jglisse@redhat.com wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>
> > 
> > Inside set_pmd_migration_entry() we are holding page table locks and
> > thus we can not sleep so we can not call invalidate_range_start/end()
> > 
> > So remove call to mmu_notifier_invalidate_range_start/end() and add
> > call to mmu_notifier_invalidate_range(). Note that we are already
> > calling mmu_notifier_invalidate_range_start/end() inside the function
> > calling set_pmd_migration_entry() (see try_to_unmap_one()).
> > 
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Reported-by: Andrea Arcangeli <aarcange@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> > Cc: Dave Hansen <dave.hansen@intel.com>
> > Cc: David Nellans <dnellans@nvidia.com>
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: Mel Gorman <mgorman@techsingularity.net>
> > Cc: Minchan Kim <minchan@kernel.org>
> > Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> 
> Is this worth backporting to stable trees?

Yes it is, I forgot to cc stable :(
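
For reference, the standard way to request the backport would have been
to tag the commit message itself, roughly (the Fixes line is the one
Zi Yan supplied above):

	Fixes: 616b8371539a6 ("mm: thp: enable thp migration in generic path")
	Cc: stable@vger.kernel.org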


> 
> The patch looks good to me
> Acked-by: Michal Hocko <mhocko@suse.com>
> 
> > ---
> >  mm/huge_memory.c | 7 +------
> >  1 file changed, 1 insertion(+), 6 deletions(-)
> > 
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 533f9b00147d..93cb80fe12cb 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2885,9 +2885,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> >  	if (!(pvmw->pmd && !pvmw->pte))
> >  		return;
> >  
> > -	mmu_notifier_invalidate_range_start(mm, address,
> > -			address + HPAGE_PMD_SIZE);
> > -
> >  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> >  	pmdval = *pvmw->pmd;
> >  	pmdp_invalidate(vma, address, pvmw->pmd);
> > @@ -2898,11 +2895,9 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> >  	if (pmd_soft_dirty(pmdval))
> >  		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
> >  	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
> > +	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);
> >  	page_remove_rmap(page, true);
> >  	put_page(page);
> > -
> > -	mmu_notifier_invalidate_range_end(mm, address,
> > -			address + HPAGE_PMD_SIZE);
> >  }
> >  
> >  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> > -- 
> > 2.17.2
> 
> -- 
> Michal Hocko
> SUSE Labs
Andrea Arcangeli Oct. 12, 2018, 5:24 p.m. UTC | #4
Hello,

On Fri, Oct 12, 2018 at 12:20:54PM -0400, Zi Yan wrote:
> On 12 Oct 2018, at 12:09, jglisse@redhat.com wrote:
> 
> > From: Jérôme Glisse <jglisse@redhat.com>
> >
> > Inside set_pmd_migration_entry() we are holding page table locks and
> > thus we can not sleep so we can not call invalidate_range_start/end()
> >
> > So remove call to mmu_notifier_invalidate_range_start/end() and add
> > call to mmu_notifier_invalidate_range(). Note that we are already

Why the call to mmu_notifier_invalidate_range() if we're under
range_start and followed by range_end? (It's not _range_only_end; if
it were _range_only_end, the above would be needed.)
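
In other words, sketching the two bracketing patterns (assuming the
4.19-era API, where _range_end() itself fires the ->invalidate_range
callback while _range_only_end() skips it):

	/* What try_to_unmap_one() does today: _range_end() already
	 * performs the ->invalidate_range flush, so an extra explicit
	 * call inside the pair is superfluous. */
	mmu_notifier_invalidate_range_start(mm, addr, addr + HPAGE_PMD_SIZE);
	/* ... replace the PMD under the page table lock ... */
	mmu_notifier_invalidate_range_end(mm, addr, addr + HPAGE_PMD_SIZE);

	/* Only with _range_only_end() would the explicit call be needed,
	 * because _range_only_end() skips ->invalidate_range: */
	mmu_notifier_invalidate_range_start(mm, addr, addr + HPAGE_PMD_SIZE);
	/* ... replace the PMD ... */
	mmu_notifier_invalidate_range(mm, addr, addr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_only_end(mm, addr, addr + HPAGE_PMD_SIZE);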

> > calling mmu_notifier_invalidate_range_start/end() inside the function
> > calling set_pmd_migration_entry() (see try_to_unmap_one()).
> >
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Reported-by: Andrea Arcangeli <aarcange@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> > Cc: Dave Hansen <dave.hansen@intel.com>
> > Cc: David Nellans <dnellans@nvidia.com>
> > Cc: Ingo Molnar <mingo@elte.hu>
> > Cc: Mel Gorman <mgorman@techsingularity.net>
> > Cc: Minchan Kim <minchan@kernel.org>
> > Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > ---
> >  mm/huge_memory.c | 7 +------
> >  1 file changed, 1 insertion(+), 6 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 533f9b00147d..93cb80fe12cb 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2885,9 +2885,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> >  	if (!(pvmw->pmd && !pvmw->pte))
> >  		return;
> >
> > -	mmu_notifier_invalidate_range_start(mm, address,
> > -			address + HPAGE_PMD_SIZE);
> > -
> >  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> >  	pmdval = *pvmw->pmd;
> >  	pmdp_invalidate(vma, address, pvmw->pmd);
> > @@ -2898,11 +2895,9 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> >  	if (pmd_soft_dirty(pmdval))
> >  		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
> >  	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
> > +	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);

It's not obvious why it's needed; if it is needed, maybe a comment can
be added.

> >  	page_remove_rmap(page, true);
> >  	put_page(page);
> > -
> > -	mmu_notifier_invalidate_range_end(mm, address,
> > -			address + HPAGE_PMD_SIZE);
> >  }
> >
> >  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> > -- 
> > 2.17.2
> 
> Yes, these are the redundant calls to mmu_notifier_invalidate_range_start/end()
> in set_pmd_migration_entry(). Thanks for the patch.

They're not just redundant: they're called on a non-blockable path, via
__mmu_notifier_invalidate_range_start(blockable=true).

Furthermore, the mmu notifier API doesn't support nesting.

KVM is actually robust against the nesting:

	kvm->mmu_notifier_count++;

	kvm->mmu_notifier_count--;

and KVM is always fine with non-blockable calls, but that's not
universally true for all mmu notifier users.
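
A simplified sketch of what makes KVM tolerate the nesting (paraphrasing
the virt/kvm/kvm_main.c hooks; sequence-number and range bookkeeping are
elided):

	/* Roughly kvm_mmu_notifier_invalidate_range_start()/_end(): a
	 * plain counter stays balanced even when pairs nest, and guest
	 * page faults simply retry while the counter is non-zero. */
	static void kvm_range_start_sketch(struct kvm *kvm,
					   unsigned long start,
					   unsigned long end)
	{
		spin_lock(&kvm->mmu_lock);
		kvm->mmu_notifier_count++;	/* faults retry from here */
		/* ... zap the affected range from the secondary page
		 * tables ... */
		spin_unlock(&kvm->mmu_lock);
	}

	static void kvm_range_end_sketch(struct kvm *kvm)
	{
		spin_lock(&kvm->mmu_lock);
		kvm->mmu_notifier_count--;	/* nested pairs cancel out */
		spin_unlock(&kvm->mmu_lock);
	}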

Thanks,
Andrea
Jerome Glisse Oct. 12, 2018, 5:35 p.m. UTC | #5
On Fri, Oct 12, 2018 at 01:24:22PM -0400, Andrea Arcangeli wrote:
> Hello,
> 
> On Fri, Oct 12, 2018 at 12:20:54PM -0400, Zi Yan wrote:
> > On 12 Oct 2018, at 12:09, jglisse@redhat.com wrote:
> > 
> > > From: Jérôme Glisse <jglisse@redhat.com>
> > >
> > > Inside set_pmd_migration_entry() we are holding page table locks and
> > > thus we can not sleep so we can not call invalidate_range_start/end()
> > >
> > > So remove call to mmu_notifier_invalidate_range_start/end() and add
> > > call to mmu_notifier_invalidate_range(). Note that we are already
> 
> Why the call to mmu_notifier_invalidate_range if we're under
> range_start and followed by range_end? (it's not _range_only_end, if
> it was _range_only_end the above would be needed)

I wanted to be extra safe and was willing to over-invalidate. You are
right that it is not strictly necessary. I am fine with removing it.

> 
> > > calling mmu_notifier_invalidate_range_start/end() inside the function
> > > calling set_pmd_migration_entry() (see try_to_unmap_one()).
> > >
> > > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > > Reported-by: Andrea Arcangeli <aarcange@redhat.com>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > > Cc: Zi Yan <zi.yan@cs.rutgers.edu>
> > > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > > Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> > > Cc: Dave Hansen <dave.hansen@intel.com>
> > > Cc: David Nellans <dnellans@nvidia.com>
> > > Cc: Ingo Molnar <mingo@elte.hu>
> > > Cc: Mel Gorman <mgorman@techsingularity.net>
> > > Cc: Minchan Kim <minchan@kernel.org>
> > > Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > Cc: Vlastimil Babka <vbabka@suse.cz>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Andrea Arcangeli <aarcange@redhat.com>
> > > ---
> > >  mm/huge_memory.c | 7 +------
> > >  1 file changed, 1 insertion(+), 6 deletions(-)
> > >
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 533f9b00147d..93cb80fe12cb 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2885,9 +2885,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> > >  	if (!(pvmw->pmd && !pvmw->pte))
> > >  		return;
> > >
> > > -	mmu_notifier_invalidate_range_start(mm, address,
> > > -			address + HPAGE_PMD_SIZE);
> > > -
> > >  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
> > >  	pmdval = *pvmw->pmd;
> > >  	pmdp_invalidate(vma, address, pvmw->pmd);
> > > @@ -2898,11 +2895,9 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> > >  	if (pmd_soft_dirty(pmdval))
> > >  		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
> > >  	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
> > > +	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);
> 
> It's not obvious why it's needed, if it's needed maybe a comment can
> be added.

We can remove it. Should I post a v2 without it?

Cheers,
Jérôme
Andrea Arcangeli Oct. 12, 2018, 5:58 p.m. UTC | #6
On Fri, Oct 12, 2018 at 01:35:19PM -0400, Jerome Glisse wrote:
> On Fri, Oct 12, 2018 at 01:24:22PM -0400, Andrea Arcangeli wrote:
> > Hello,
> > 
> > On Fri, Oct 12, 2018 at 12:20:54PM -0400, Zi Yan wrote:
> > > On 12 Oct 2018, at 12:09, jglisse@redhat.com wrote:
> > > 
> > > > From: Jérôme Glisse <jglisse@redhat.com>
> > > >
> > > > Inside set_pmd_migration_entry() we are holding page table locks and
> > > > thus we can not sleep so we can not call invalidate_range_start/end()
> > > >
> > > > So remove call to mmu_notifier_invalidate_range_start/end() and add
> > > > call to mmu_notifier_invalidate_range(). Note that we are already
> > 
> > Why the call to mmu_notifier_invalidate_range if we're under
> > range_start and followed by range_end? (it's not _range_only_end, if
> > it was _range_only_end the above would be needed)
> 
> I wanted to be extra safe and accept to over invalidate. You are right
> that it is not strictly necessary. I am fine with removing it.

If it's superfluous, I'd generally prefer strict code unless there's a
very explicit comment that says it's actually superfluous. Otherwise,
after a while, we won't know why it was added there.
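
Concretely, if the call were kept, the kind of comment being asked for
might read:

	/*
	 * Not strictly needed: the caller (try_to_unmap_one()) brackets
	 * us with mmu_notifier_invalidate_range_start/end(), and the
	 * _end() variant already fires ->invalidate_range. Deliberate
	 * over-invalidation, kept only for extra safety.
	 */
	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);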

> We can remove it. Should i post a v2 without it ?

That's fine with me yes.

Thanks,
Andrea

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 533f9b00147d..93cb80fe12cb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2885,9 +2885,6 @@  void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 	if (!(pvmw->pmd && !pvmw->pte))
 		return;
 
-	mmu_notifier_invalidate_range_start(mm, address,
-			address + HPAGE_PMD_SIZE);
-
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
 	pmdval = *pvmw->pmd;
 	pmdp_invalidate(vma, address, pvmw->pmd);
@@ -2898,11 +2895,9 @@  void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
 	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
+	mmu_notifier_invalidate_range(mm, address, address + HPAGE_PMD_SIZE);
 	page_remove_rmap(page, true);
 	put_page(page);
-
-	mmu_notifier_invalidate_range_end(mm, address,
-			address + HPAGE_PMD_SIZE);
 }
 
 void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)