
mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code

Message ID: 20210720065529.716031-1-ying.huang@intel.com
State: New
Series: mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code

Commit Message

Huang, Ying July 20, 2021, 6:55 a.m. UTC
Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
handling"), TLB flushing was done in do_huge_pmd_numa_page() itself
via flush_tlb_range().

But after commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
handling"), the TLB flushing is done in migrate_pages() via the
following code path anyway:

do_huge_pmd_numa_page
  migrate_misplaced_page
    migrate_pages

So the TLB flushing code in do_huge_pmd_numa_page() has become
unnecessary, and this patch deletes it to simplify the code.  This is
only a code cleanup; there's no visible performance difference.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 26 --------------------------
 1 file changed, 26 deletions(-)

Comments

Zi Yan July 20, 2021, 1:36 p.m. UTC | #1
On 20 Jul 2021, at 2:55, Huang Ying wrote:

> Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), TLB flushing was done in do_huge_pmd_numa_page() itself
> via flush_tlb_range().
>
> But after commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), the TLB flushing is done in migrate_pages() via the
> following code path anyway:
>
> do_huge_pmd_numa_page
>   migrate_misplaced_page
>     migrate_pages
>
> So the TLB flushing code in do_huge_pmd_numa_page() has become
> unnecessary, and this patch deletes it to simplify the code.  This is
> only a code cleanup; there's no visible performance difference.

But we save a potential TLB flush here, right? Maybe the if statement
is not a common case.
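
(For context, the check in question pairs with the kernel's
pending-TLB-flush protocol, roughly as follows. This is a simplified
sketch, not verbatim kernel code.)

	/* Updater side, e.g. change_protection(): */
	inc_tlb_flush_pending(mm);	/* published before PTEs are changed */
	/* ... modify PTEs under the page table lock ... */
	flush_tlb_range(vma, start, end);
	dec_tlb_flush_pending(mm);	/* the flush has been done */

	/* Fault side, what do_huge_pmd_numa_page() used to do under the PTL: */
	if (mm_tlb_flush_pending(vma->vm_mm))
		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);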

Anyway, LGTM, thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>

> [snip]


—
Best Regards,
Yan, Zi
Christian Borntraeger July 20, 2021, 2:25 p.m. UTC | #2
On 20.07.21 08:55, Huang Ying wrote:
> Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), TLB flushing was done in do_huge_pmd_numa_page() itself
> via flush_tlb_range().
>
> But after commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), the TLB flushing is done in migrate_pages() via the
> following code path anyway:
>
> do_huge_pmd_numa_page
>   migrate_misplaced_page
>     migrate_pages
>
> So the TLB flushing code in do_huge_pmd_numa_page() has become
> unnecessary, and this patch deletes it to simplify the code.  This is
> only a code cleanup; there's no visible performance difference.
> [snip]
> @@ -1440,32 +1440,6 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>   		goto out;
>   	}
>   
> -	/*
> -	 * Since we took the NUMA fault, we must have observed the !accessible
> -	 * bit. Make sure all other CPUs agree with that, to avoid them
> -	 * modifying the page we're about to migrate.
> -	 *
> -	 * Must be done under PTL such that we'll observe the relevant
> -	 * inc_tlb_flush_pending().
> -	 *
> -	 * We are not sure a pending tlb flush here is for a huge page
> -	 * mapping or not. Hence use the tlb range variant
> -	 */
> -	if (mm_tlb_flush_pending(vma->vm_mm)) {
> -		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
> -		/*
> -		 * change_huge_pmd() released the pmd lock before
> -		 * invalidating the secondary MMUs sharing the primary
> -		 * MMU pagetables (with ->invalidate_range()). The
> -		 * mmu_notifier_invalidate_range_end() (which
> -		 * internally calls ->invalidate_range()) in
> -		 * change_pmd_range() will run after us, so we can't
> -		 * rely on it here and we need an explicit invalidate.
> -		 */
> -		mmu_notifier_invalidate_range(vma->vm_mm, haddr,
> -					      haddr + HPAGE_PMD_SIZE);
> -	}
CC Paolo/KVM list, since we also remove the mmu notifier here. Do we
need those now in migrate_pages? I am not an expert in that code, but I
can't find an equivalent mmu_notifier in migrate_misplaced_page().
I might be totally wrong, just something that I noticed.

>   	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
>   	page = vm_normal_page_pmd(vma, haddr, pmd);
>   	if (!page)
>
Yang Shi July 20, 2021, 8:48 p.m. UTC | #3
On Mon, Jul 19, 2021 at 11:56 PM Huang Ying <ying.huang@intel.com> wrote:
>
> Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), TLB flushing was done in do_huge_pmd_numa_page() itself
> via flush_tlb_range().
>
> But after commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
> handling"), the TLB flushing is done in migrate_pages() via the
> following code path anyway:
>
> do_huge_pmd_numa_page
>   migrate_misplaced_page
>     migrate_pages
>
> So the TLB flushing code in do_huge_pmd_numa_page() has become
> unnecessary, and this patch deletes it to simplify the code.  This is
> only a code cleanup; there's no visible performance difference.

Yes, there is a TLB flush in try_to_migrate(), but the mmu notifier
invalidate seems to be missing for the THP migration case. I'm not
quite sure whether it is actually not needed or just missed.

So, you may need the below change too:

diff --git a/mm/rmap.c b/mm/rmap.c
index 2d29a57d29e8..e1c8b654563d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1749,6 +1749,8 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
                                       !PageTransCompound(page), page);

                        set_pmd_migration_entry(&pvmw, page);
+                       mmu_notifier_invalidate_range(mm, range.start,
+                                                     range.end);
                        continue;
                }
 #endif

> [snip]
Yang Shi July 20, 2021, 8:53 p.m. UTC | #4
On Tue, Jul 20, 2021 at 7:25 AM Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
>
>
> On 20.07.21 08:55, Huang Ying wrote:
> > [snip]
> > -     if (mm_tlb_flush_pending(vma->vm_mm)) {
> > -             flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
> > -             /*
> > -              * change_huge_pmd() released the pmd lock before
> > -              * invalidating the secondary MMUs sharing the primary
> > -              * MMU pagetables (with ->invalidate_range()). The
> > -              * mmu_notifier_invalidate_range_end() (which
> > -              * internally calls ->invalidate_range()) in
> > -              * change_pmd_range() will run after us, so we can't
> > -              * rely on it here and we need an explicit invalidate.
> > -              */
> > -             mmu_notifier_invalidate_range(vma->vm_mm, haddr,
> > -                                           haddr + HPAGE_PMD_SIZE);
> > -     }
> CC Paolo/KVM list, since we also remove the mmu notifier here. Do we
> need those now in migrate_pages? I am not an expert in that code, but I
> can't find an equivalent mmu_notifier in migrate_misplaced_page().
> I might be totally wrong, just something that I noticed.

Do you mean the missed mmu notifier invalidate for the THP migration
case? Yes, I noticed that too. But I'm not sure whether it is intended
or just missed.

Zi Yan is the author of the THP migration code; he may have some clue.

>
> >       pmd = pmd_modify(oldpmd, vma->vm_page_prot);
> >       page = vm_normal_page_pmd(vma, haddr, pmd);
> >       if (!page)
> >
Zi Yan July 20, 2021, 9:04 p.m. UTC | #5
On 20 Jul 2021, at 16:53, Yang Shi wrote:

> On Tue, Jul 20, 2021 at 7:25 AM Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
>>
>>
>>
>> On 20.07.21 08:55, Huang Ying wrote:
>>> [snip]
>>> -     if (mm_tlb_flush_pending(vma->vm_mm)) {
>>> -             flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
>>> -             /*
>>> -              * change_huge_pmd() released the pmd lock before
>>> -              * invalidating the secondary MMUs sharing the primary
>>> -              * MMU pagetables (with ->invalidate_range()). The
>>> -              * mmu_notifier_invalidate_range_end() (which
>>> -              * internally calls ->invalidate_range()) in
>>> -              * change_pmd_range() will run after us, so we can't
>>> -              * rely on it here and we need an explicit invalidate.
>>> -              */
>>> -             mmu_notifier_invalidate_range(vma->vm_mm, haddr,
>>> -                                           haddr + HPAGE_PMD_SIZE);
>>> -     }
>> CC Paolo/KVM list, since we also remove the mmu notifier here. Do we
>> need those now in migrate_pages? I am not an expert in that code, but I
>> can't find an equivalent mmu_notifier in migrate_misplaced_page().
>> I might be totally wrong, just something that I noticed.
>
> Do you mean the missed mmu notifier invalidate for the THP migration
> case? Yes, I noticed that too. But I'm not sure whether it is intended
> or just missed.

From my understanding of the mmu_notifier documentation,
mmu_notifier_invalidate_range() is needed only if the PTE is updated to
point to a new page or the page pointed to by the PTE is freed.  Page
migration does not fall into either case.
In addition, in migrate_pages(), more specifically try_to_migrate_one(),
there is a pair of mmu_notifier_invalidate_range_start() and
mmu_notifier_invalidate_range_end() around the PTE manipulation code, which should
be sufficient to notify secondary TLBs (including KVM) about the PTE change
for page migration. Correct me if I am wrong.
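
(For reference, the start/end pattern described above looks roughly like
the following. This is a simplified sketch of the mmu_notifier API usage,
not verbatim kernel code.)

	struct mmu_notifier_range range;

	/* Tell secondary MMUs that PTEs in this range are about to change. */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
				vma->vm_mm, start, end);
	mmu_notifier_invalidate_range_start(&range);

	/* ... clear PTEs and install migration entries under the PTL ... */

	/*
	 * Calls ->invalidate_range() for anything not invalidated yet, so
	 * secondary TLBs (including KVM) have been notified by the time
	 * this returns.
	 */
	mmu_notifier_invalidate_range_end(&range);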

—
Best Regards,
Yan, Zi
Yang Shi July 20, 2021, 10:19 p.m. UTC | #6
On Tue, Jul 20, 2021 at 2:04 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 20 Jul 2021, at 16:53, Yang Shi wrote:
>
> > On Tue, Jul 20, 2021 at 7:25 AM Christian Borntraeger
> > <borntraeger@de.ibm.com> wrote:
> >>
> >>
> >>
> >> On 20.07.21 08:55, Huang Ying wrote:
> >>> [snip]
> >>> -     if (mm_tlb_flush_pending(vma->vm_mm)) {
> >>> -             flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
> >>> -             /*
> >>> -              * change_huge_pmd() released the pmd lock before
> >>> -              * invalidating the secondary MMUs sharing the primary
> >>> -              * MMU pagetables (with ->invalidate_range()). The
> >>> -              * mmu_notifier_invalidate_range_end() (which
> >>> -              * internally calls ->invalidate_range()) in
> >>> -              * change_pmd_range() will run after us, so we can't
> >>> -              * rely on it here and we need an explicit invalidate.
> >>> -              */
> >>> -             mmu_notifier_invalidate_range(vma->vm_mm, haddr,
> >>> -                                           haddr + HPAGE_PMD_SIZE);
> >>> -     }
> >> CC Paolo/KVM list, since we also remove the mmu notifier here. Do we
> >> need those now in migrate_pages? I am not an expert in that code, but I
> >> can't find an equivalent mmu_notifier in migrate_misplaced_page().
> >> I might be totally wrong, just something that I noticed.
> >
> > Do you mean the missed mmu notifier invalidate for the THP migration
> > case? Yes, I noticed that too. But I'm not sure whether it is intended
> > or just missed.
>
> From my understanding of the mmu_notifier documentation,
> mmu_notifier_invalidate_range() is needed only if the PTE is updated to
> point to a new page or the page pointed to by the PTE is freed.  Page
> migration does not fall into either case.
> In addition, in migrate_pages(), more specifically try_to_migrate_one(),
> there is a pair of mmu_notifier_invalidate_range_start() and
> mmu_notifier_invalidate_range_end() around the PTE manipulation code, which should
> be sufficient to notify secondary TLBs (including KVM) about the PTE change
> for page migration. Correct me if I am wrong.

Thanks, I think you are correct. By looking into commit 7066f0f933a1
("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
the TLB flush and mmu notifier invalidate were needed because the old
NUMA fault implementation didn't change the PTE to a migration entry,
so writes through a GPU's secondary MMU could cause data corruption.

The refactor does use the generic migration code, which converts the
PTE to a migration entry before copying data to the new page.
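
(Roughly, that conversion looks like the following. This is a simplified
sketch, not verbatim kernel code; the migration-entry helper names
changed around v5.14.)

	/* Clearing the PTE also flushes the CPU TLB for this address. */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);

	/*
	 * Install a migration entry: a fault on this address now waits in
	 * migration_entry_wait() until the copy to the new page is done.
	 */
	entry = make_migration_entry(subpage, pte_write(pteval));
	set_pte_at(mm, address, pvmw.pte, swp_entry_to_pte(entry));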

>
> —
> Best Regards,
> Yan, Zi
Yang Shi July 20, 2021, 10:21 p.m. UTC | #7
On Tue, Jul 20, 2021 at 1:48 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Mon, Jul 19, 2021 at 11:56 PM Huang Ying <ying.huang@intel.com> wrote:
> >
> > [snip]
>
> Yes, there is a TLB flush in try_to_migrate(), but the mmu notifier
> invalidate seems to be missing for the THP migration case. I'm not
> quite sure whether it is actually not needed or just missed.
>
> So, you may need the below change too:
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2d29a57d29e8..e1c8b654563d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1749,6 +1749,8 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>                                        !PageTransCompound(page), page);
>
>                         set_pmd_migration_entry(&pvmw, page);
> +                       mmu_notifier_invalidate_range(mm, range.start,
> +                                                     range.end);
>                         continue;
>                 }
>  #endif

Per the discussion with Zi Yan, the mmu notifier invalidate should not
be needed. The patch looks good to me.

Reviewed-by: Yang Shi <shy828301@gmail.com>

> [snip]
Sean Christopherson July 21, 2021, 3:41 p.m. UTC | #8
On Tue, Jul 20, 2021, Yang Shi wrote:
> On Tue, Jul 20, 2021 at 2:04 PM Zi Yan <ziy@nvidia.com> wrote:
> >
> > On 20 Jul 2021, at 16:53, Yang Shi wrote:
> >
> > > On Tue, Jul 20, 2021 at 7:25 AM Christian Borntraeger
> > > <borntraeger@de.ibm.com> wrote:
> > >>> -     if (mm_tlb_flush_pending(vma->vm_mm)) {
> > >>> -             flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
> > >>> -             /*
> > >>> -              * change_huge_pmd() released the pmd lock before
> > >>> -              * invalidating the secondary MMUs sharing the primary
> > >>> -              * MMU pagetables (with ->invalidate_range()). The
> > >>> -              * mmu_notifier_invalidate_range_end() (which
> > >>> -              * internally calls ->invalidate_range()) in
> > >>> -              * change_pmd_range() will run after us, so we can't
> > >>> -              * rely on it here and we need an explicit invalidate.
> > >>> -              */
> > >>> -             mmu_notifier_invalidate_range(vma->vm_mm, haddr,
> > >>> -                                           haddr + HPAGE_PMD_SIZE);
> > >>> -     }
> > >> CC Paolo/KVM list, since we also remove the mmu notifier here. Do we
> > >> need those now in migrate_pages? I am not an expert in that code, but I
> > >> can't find an equivalent mmu_notifier in migrate_misplaced_page().
> > >> I might be totally wrong, just something that I noticed.
> > >
> > > Do you mean the missed mmu notifier invalidate for the THP migration
> > > case? Yes, I noticed that too. But I'm not sure whether it is intended
> > > or just missed.
> >
> > From my understanding of the mmu_notifier documentation,
> > mmu_notifier_invalidate_range() is needed only if the PTE is updated to
> > point to a new page or the page pointed to by the PTE is freed.  Page
> > migration does not fall into either case.

The "new page" part of

  a page table entry is updated to point to a new page

is referring to a different physical page, i.e. a different pfn, not a different
struct page.  do_huge_pmd_numa_page() is moving a THP between nodes, thus it's
changing the backing pfn and needs to invalidate secondary MMUs at some point.

> > In addition, in migrate_pages(), more specifically try_to_migrate_one(),
> > there is a pair of mmu_notifier_invalidate_range_start() and
> > mmu_notifier_invalidate_range_end() around the PTE manipulation code, which should
> > be sufficient to notify secondary TLBs (including KVM) about the PTE change
> > for page migration. Correct me if I am wrong.
> 
> Thanks, I think you are correct. By looking into commit 7066f0f933a1
> ("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
> the TLB flush and mmu notifier invalidate were needed because the old
> NUMA fault implementation didn't change the PTE to a migration entry,
> so writes through a GPU's secondary MMU could cause data corruption.
>
> The refactor does use the generic migration code, which converts the
> PTE to a migration entry before copying data to the new page.

That's my understanding as well, based on this blurb from commit 7066f0f933a1.

    The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
    uses the generic migrate_pages which transitions the pte from
    numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
    and all mmu notifiers there before copying the page.

That analysis/justification for removing the invalidate_range() call should be
captured in the changelog.  Confirmation from Andrea would be a nice bonus.
Huang, Ying July 22, 2021, 12:26 a.m. UTC | #9
Sean Christopherson <seanjc@google.com> writes:
>> 
>> Thanks, I think you are correct. By looking into commit 7066f0f933a1
>> ("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
>> the TLB flush and mmu notifier invalidate were needed because the old
>> NUMA fault implementation didn't change the PTE to a migration entry,
>> so writes through a GPU's secondary MMU could cause data corruption.
>>
>> The refactor does use the generic migration code, which converts the
>> PTE to a migration entry before copying data to the new page.
>
> That's my understanding as well, based on this blurb from commit 7066f0f933a1.
>
>     The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
>     uses the generic migrate_pages which transitions the pte from
>     numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
>     and all mmu notifiers there before copying the page.
>
> That analysis/justification for removing the invalidate_range() call should be
> captured in the changelog.  Confirmation from Andrea would be a nice bonus.

When we flush the CPU TLB for a page that may be shared with a
device/VM TLB, we call the MMU notifiers for the page to flush the
device/VM TLB too, right?  So when we replaced the CPU TLB flushing in
do_huge_pmd_numa_page() with that in try_to_migrate_one(), we replaced
the MMU notifier calls as well.  Do you agree?
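
(The coupling described above is what helpers like
ptep_clear_flush_notify() implement; roughly, simplified from
include/linux/mmu_notifier.h:)

	#define ptep_clear_flush_notify(__vma, __address, __ptep)	\
	({								\
		unsigned long ___addr = (__address) & PAGE_MASK;	\
		struct mm_struct *___mm = (__vma)->vm_mm;		\
		pte_t ___pte;						\
									\
		/* Flush the CPU TLB for this PTE ... */		\
		___pte = ptep_clear_flush(__vma, __address, __ptep);	\
		/* ... then notify secondary MMUs of the change. */	\
		mmu_notifier_invalidate_range(___mm, ___addr,		\
					      ___addr + PAGE_SIZE);	\
									\
		___pte;							\
	})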

Best Regards,
Huang, Ying
Christian Borntraeger July 22, 2021, 7:36 a.m. UTC | #10
On 22.07.21 02:26, Huang, Ying wrote:
> Sean Christopherson <seanjc@google.com> writes:
>>>
>>> Thanks, I think you are correct. By looking into commit 7066f0f933a1
>>> ("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
>>> the TLB flush and mmu notifier invalidate were needed because the old
>>> NUMA fault implementation didn't change the PTE to a migration entry,
>>> so writes through a GPU's secondary MMU could cause data corruption.
>>>
>>> The refactor does use the generic migration code, which converts the
>>> PTE to a migration entry before copying data to the new page.
>>
>> That's my understanding as well, based on this blurb from commit 7066f0f933a1.
>>
>>      The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
>>      uses the generic migrate_pages which transitions the pte from
>>      numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
>>      and all mmu notifiers there before copying the page.
>>
>> That analysis/justification for removing the invalidate_range() call should be
>> captured in the changelog.  Confirmation from Andrea would be a nice bonus.
> 
> When we flush the CPU TLB for a page that may be shared with a
> device/VM TLB, we call the MMU notifiers for the page to flush the
> device/VM TLB too, right?  So when we replaced the CPU TLB flushing in
> do_huge_pmd_numa_page() with that in try_to_migrate_one(), we replaced
> the MMU notifier calls as well.  Do you agree?

Can someone write an updated commit message that contains this information?
Huang, Ying July 22, 2021, 11:10 p.m. UTC | #11
Christian Borntraeger <borntraeger@de.ibm.com> writes:

> On 22.07.21 02:26, Huang, Ying wrote:
>> Sean Christopherson <seanjc@google.com> writes:
>>>>
>>>> Thanks, I think you are correct. By looking into commit 7066f0f933a1
>>>> ("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
>>>> the TLB flush and mmu notifier invalidate were needed because the old
>>>> NUMA fault implementation didn't change the PTE to a migration entry,
>>>> so writes through a GPU's secondary MMU could cause data corruption.
>>>>
>>>> The refactor does use the generic migration code, which converts the
>>>> PTE to a migration entry before copying data to the new page.
>>>
>>> That's my understanding as well, based on this blurb from commit 7066f0f933a1.
>>>
>>>      The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
>>>      uses the generic migrate_pages which transitions the pte from
>>>      numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
>>>      and all mmu notifiers there before copying the page.
>>>
>>> That analysis/justification for removing the invalidate_range() call should be
>>> captured in the changelog.  Confirmation from Andrea would be a nice bonus.
>> When we flush the CPU TLB for a page that may be shared with a
>> device/VM TLB, we call the MMU notifiers for the page to flush the
>> device/VM TLB too, right?  So when we replaced the CPU TLB flushing in
>> do_huge_pmd_numa_page() with that in try_to_migrate_one(), we replaced
>> the MMU notifier calls as well.  Do you agree?
>
> Can someone write an updated commit message that contains this information?

OK.  I will update the patch description to add the MMU notifier
information.

Best Regards,
Huang, Ying
Huang, Ying July 23, 2021, 12:03 a.m. UTC | #12
Christian Borntraeger <borntraeger@de.ibm.com> writes:

> On 22.07.21 02:26, Huang, Ying wrote:
>> Sean Christopherson <seanjc@google.com> writes:
>>>>
>>>> Thanks, I think you are correct. By looking into commit 7066f0f933a1
>>>> ("mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()"),
>>>> the TLB flush and mmu notifier invalidate were needed because the old
>>>> NUMA fault implementation didn't change the PTE to a migration entry,
>>>> so writes through a GPU's secondary MMU could cause data corruption.
>>>>
>>>> The refactor does use the generic migration code, which converts the
>>>> PTE to a migration entry before copying data to the new page.
>>>
>>> That's my understanding as well, based on this blurb from commit 7066f0f933a1.
>>>
>>>      The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
>>>      uses the generic migrate_pages which transitions the pte from
>>>      numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
>>>      and all mmu notifiers there before copying the page.
>>>
>>> That analysis/justification for removing the invalidate_range() call should be
>>> captured in the changelog.  Confirmation from Andrea would be a nice bonus.
>> When we flush the CPU TLB for a page that may be shared with a
>> device/VM TLB, we call the MMU notifiers for the page to flush the
>> device/VM TLB too, right?  So when we replaced the CPU TLB flushing in
>> do_huge_pmd_numa_page() with that in try_to_migrate_one(), we replaced
>> the MMU notifier calls as well.  Do you agree?
>
> Can someone write an updated commit message that contains this information?

Hi, Andrew,

Can you help to add the following text to the end of the original patch
description?

"
The mmu_notifier_invalidate_range() in do_huge_pmd_numa_page() is
deleted too, because migrate_pages() takes care of that as well when
the CPU TLB is flushed.
"

Or, if you prefer the complete patch, it's as below.

Best Regards,
Huang, Ying

------------------------------------8<---------------------------------------------
From a7ce0c58dcc0d2f0d87b43b4e93a6623d78c9c25 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@intel.com>
Date: Tue, 13 Jul 2021 13:41:37 +0800
Subject: [PATCH -V2] mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing
 code

Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
handling"), TLB flushing was done in do_huge_pmd_numa_page() itself
via flush_tlb_range().

But after commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault
handling"), the TLB flushing is done in migrate_pages() via the
following code path anyway:

do_huge_pmd_numa_page
  migrate_misplaced_page
    migrate_pages

So the TLB flushing code in do_huge_pmd_numa_page() has become
unnecessary, and this patch deletes it to simplify the code.  This is
only a code cleanup; there's no visible performance difference.

The mmu_notifier_invalidate_range() in do_huge_pmd_numa_page() is
deleted too, because migrate_pages() takes care of that as well when
the CPU TLB is flushed.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
---
 mm/huge_memory.c | 26 --------------------------
 1 file changed, 26 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..9f21e44c9030 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1440,32 +1440,6 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		goto out;
 	}
 
-	/*
-	 * Since we took the NUMA fault, we must have observed the !accessible
-	 * bit. Make sure all other CPUs agree with that, to avoid them
-	 * modifying the page we're about to migrate.
-	 *
-	 * Must be done under PTL such that we'll observe the relevant
-	 * inc_tlb_flush_pending().
-	 *
-	 * We are not sure a pending tlb flush here is for a huge page
-	 * mapping or not. Hence use the tlb range variant
-	 */
-	if (mm_tlb_flush_pending(vma->vm_mm)) {
-		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-		/*
-		 * change_huge_pmd() released the pmd lock before
-		 * invalidating the secondary MMUs sharing the primary
-		 * MMU pagetables (with ->invalidate_range()). The
-		 * mmu_notifier_invalidate_range_end() (which
-		 * internally calls ->invalidate_range()) in
-		 * change_pmd_range() will run after us, so we can't
-		 * rely on it here and we need an explicit invalidate.
-		 */
-		mmu_notifier_invalidate_range(vma->vm_mm, haddr,
-					      haddr + HPAGE_PMD_SIZE);
-	}
-
 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
 	page = vm_normal_page_pmd(vma, haddr, pmd);
 	if (!page)
Andrew Morton July 23, 2021, 8:19 p.m. UTC | #13
On Fri, 23 Jul 2021 08:03:42 +0800 "Huang, Ying" <ying.huang@intel.com> wrote:

> Can you help to add the following text to the end of the original patch
> description?
> 
> "
> The mmu_notifier_invalidate_range() in do_huge_pmd_numa_page() is
> deleted too, because migrate_pages() takes care of that as well when
> the CPU TLB is flushed.
> "

Done.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..9f21e44c9030 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1440,32 +1440,6 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		goto out;
 	}
 
-	/*
-	 * Since we took the NUMA fault, we must have observed the !accessible
-	 * bit. Make sure all other CPUs agree with that, to avoid them
-	 * modifying the page we're about to migrate.
-	 *
-	 * Must be done under PTL such that we'll observe the relevant
-	 * inc_tlb_flush_pending().
-	 *
-	 * We are not sure a pending tlb flush here is for a huge page
-	 * mapping or not. Hence use the tlb range variant
-	 */
-	if (mm_tlb_flush_pending(vma->vm_mm)) {
-		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-		/*
-		 * change_huge_pmd() released the pmd lock before
-		 * invalidating the secondary MMUs sharing the primary
-		 * MMU pagetables (with ->invalidate_range()). The
-		 * mmu_notifier_invalidate_range_end() (which
-		 * internally calls ->invalidate_range()) in
-		 * change_pmd_range() will run after us, so we can't
-		 * rely on it here and we need an explicit invalidate.
-		 */
-		mmu_notifier_invalidate_range(vma->vm_mm, haddr,
-					      haddr + HPAGE_PMD_SIZE);
-	}
-
 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
 	page = vm_normal_page_pmd(vma, haddr, pmd);
 	if (!page)