diff mbox series

[v2,3/9] mm/hugetlb: make pud_huge() and huge_pud() aware of non-present pud entry

Message ID 20220623235153.2623702-4-naoya.horiguchi@linux.dev (mailing list archive)
State New
Series mm, hwpoison: enable 1GB hugepage support (v2)

Commit Message

Naoya Horiguchi June 23, 2022, 11:51 p.m. UTC
From: Naoya Horiguchi <naoya.horiguchi@nec.com>

follow_pud_mask() does not support non-present pud entries now.  As far as
I have tested on an x86_64 server, follow_pud_mask() still simply returns
no_page_table() for a non-present pud entry due to pud_bad(), so no severe
user-visible effect should happen.  But generally we should call
follow_huge_pud() for a non-present pud entry of a 1GB hugetlb page.

Update pud_huge() and follow_huge_pud() to handle non-present pud entries.  The
changes are similar to previous work on pmd entries: commit e66f17ff7177
("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").

Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
---
 arch/x86/mm/hugetlbpage.c |  3 ++-
 mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
 2 files changed, 27 insertions(+), 2 deletions(-)

Comments

HORIGUCHI NAOYA(堀口 直也) June 24, 2022, 8:40 a.m. UTC | #1
Sorry, I found that $SUBJECT mentions the wrong function name (I meant
follow_huge_pud(), not huge_pud()); this will be fixed in a later version.

On Fri, Jun 24, 2022 at 08:51:47AM +0900, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> 
> follow_pud_mask() does not support non-present pud entry now.  As long as
> I tested on x86_64 server, follow_pud_mask() still simply returns
> no_page_table() for non-present_pud_entry() due to pud_bad(), so no severe
> user-visible effect should happen.  But generally we should call
> follow_huge_pud() for non-present pud entry for 1GB hugetlb page.
> 
> Update pud_huge() and huge_pud() to handle non-present pud entries.  The

here the same typo, too.

- Naoya Horiguchi

> changes are similar to previous works for pud entries commit e66f17ff7177
> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> 
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> ---
Mike Kravetz June 25, 2022, 12:02 a.m. UTC | #2
On 06/24/22 08:51, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> 
> follow_pud_mask() does not support non-present pud entry now.  As long as
> I tested on x86_64 server, follow_pud_mask() still simply returns
> no_page_table() for non-present_pud_entry() due to pud_bad(), so no severe
> user-visible effect should happen.  But generally we should call
> follow_huge_pud() for non-present pud entry for 1GB hugetlb page.
> 
> Update pud_huge() and huge_pud() to handle non-present pud entries.  The
> changes are similar to previous works for pud entries commit e66f17ff7177
> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> 
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> ---
>  arch/x86/mm/hugetlbpage.c |  3 ++-
>  mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
>  2 files changed, 27 insertions(+), 2 deletions(-)

Thanks.  Overall this looks good, with the typos you already noticed corrected.
One question below.
> 
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index a0d023cb4292..5fb86fb49ba8 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -70,7 +70,8 @@ int pmd_huge(pmd_t pmd)
>  
>  int pud_huge(pud_t pud)
>  {
> -	return !!(pud_val(pud) & _PAGE_PSE);
> +	return !pud_none(pud) &&
> +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>  }
>  #endif
>  
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f59f43c06601..b7ae5f73f3b2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6946,10 +6946,34 @@ struct page * __weak
>  follow_huge_pud(struct mm_struct *mm, unsigned long address,
>  		pud_t *pud, int flags)
>  {
> +	struct page *page = NULL;
> +	spinlock_t *ptl;
> +	pte_t pte;
> +
>  	if (flags & (FOLL_GET | FOLL_PIN))
>  		return NULL;
>  
> -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +retry:
> +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> +	if (!pud_huge(*pud))
> +		goto out;
> +	pte = huge_ptep_get((pte_t *)pud);
> +	if (pte_present(pte)) {
> +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {

The call to try_grab_page() seems a bit strange since flags will not include
FOLL_GET | FOLL_PIN, as tested above.  Is try_grab_page() always going to be a
no-op here?
Miaohe Lin June 25, 2022, 9:42 a.m. UTC | #3
On 2022/6/24 7:51, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> 
> follow_pud_mask() does not support non-present pud entry now.  As long as
> I tested on x86_64 server, follow_pud_mask() still simply returns
> no_page_table() for non-present_pud_entry() due to pud_bad(), so no severe
> user-visible effect should happen.  But generally we should call
> follow_huge_pud() for non-present pud entry for 1GB hugetlb page.
> 
> Update pud_huge() and huge_pud() to handle non-present pud entries.  The
> changes are similar to previous works for pud entries commit e66f17ff7177
> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> 
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> ---
>  arch/x86/mm/hugetlbpage.c |  3 ++-
>  mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
>  2 files changed, 27 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index a0d023cb4292..5fb86fb49ba8 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -70,7 +70,8 @@ int pmd_huge(pmd_t pmd)
>  

No strong opinion, but a comment similar to pmd_huge()'s might be better?

/*
 * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
 * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
 * Otherwise, returns 0.
 */

>  int pud_huge(pud_t pud)
>  {
> -	return !!(pud_val(pud) & _PAGE_PSE);
> +	return !pud_none(pud) &&
> +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>  }
>  #endif
>  
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f59f43c06601..b7ae5f73f3b2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6946,10 +6946,34 @@ struct page * __weak
>  follow_huge_pud(struct mm_struct *mm, unsigned long address,
>  		pud_t *pud, int flags)
>  {
> +	struct page *page = NULL;
> +	spinlock_t *ptl;
> +	pte_t pte;
> +
>  	if (flags & (FOLL_GET | FOLL_PIN))
>  		return NULL;

Should the above check be modified?  It seems the try_grab_page() below might
not grab the page as expected (as Mike pointed out).  Or is the extra page
refcount unneeded?

>  
> -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +retry:
> +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> +	if (!pud_huge(*pud))
> +		goto out;
> +	pte = huge_ptep_get((pte_t *)pud);
> +	if (pte_present(pte)) {
> +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
> +			page = NULL;
> +			goto out;
> +		}
> +	} else {
> +		if (is_hugetlb_entry_migration(pte)) {
> +			spin_unlock(ptl);
> +			__migration_entry_wait(mm, (pte_t *)pud, ptl);
> +			goto retry;
> +		}

Again, no strong opinion, but a comment similar to follow_huge_pmd()'s might be better?

/*
 * hwpoisoned entry is treated as no_page_table in
 * follow_page_mask().
 */

Thanks!

> +	}
> +out:
> +	spin_unlock(ptl);
> +	return page;
>  }
>  
>  struct page * __weak
>
HORIGUCHI NAOYA(堀口 直也) June 27, 2022, 7:07 a.m. UTC | #4
On Fri, Jun 24, 2022 at 05:02:01PM -0700, Mike Kravetz wrote:
> On 06/24/22 08:51, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > 
> > follow_pud_mask() does not support non-present pud entry now.  As long as
> > I tested on x86_64 server, follow_pud_mask() still simply returns
> > no_page_table() for non-present_pud_entry() due to pud_bad(), so no severe
> > user-visible effect should happen.  But generally we should call
> > follow_huge_pud() for non-present pud entry for 1GB hugetlb page.
> > 
> > Update pud_huge() and huge_pud() to handle non-present pud entries.  The
> > changes are similar to previous works for pud entries commit e66f17ff7177
> > ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> > cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> > 
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > ---
> >  arch/x86/mm/hugetlbpage.c |  3 ++-
> >  mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
> >  2 files changed, 27 insertions(+), 2 deletions(-)
> 
> Thanks.  Overall looks good with typos corrected that you already noticed.
> One question below.
> > 
> > diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> > index a0d023cb4292..5fb86fb49ba8 100644
> > --- a/arch/x86/mm/hugetlbpage.c
> > +++ b/arch/x86/mm/hugetlbpage.c
> > @@ -70,7 +70,8 @@ int pmd_huge(pmd_t pmd)
> >  
> >  int pud_huge(pud_t pud)
> >  {
> > -	return !!(pud_val(pud) & _PAGE_PSE);
> > +	return !pud_none(pud) &&
> > +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> >  }
> >  #endif
> >  
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f59f43c06601..b7ae5f73f3b2 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6946,10 +6946,34 @@ struct page * __weak
> >  follow_huge_pud(struct mm_struct *mm, unsigned long address,
> >  		pud_t *pud, int flags)
> >  {
> > +	struct page *page = NULL;
> > +	spinlock_t *ptl;
> > +	pte_t pte;
> > +
> >  	if (flags & (FOLL_GET | FOLL_PIN))
> >  		return NULL;
> >  
> > -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +retry:
> > +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> > +	if (!pud_huge(*pud))
> > +		goto out;
> > +	pte = huge_ptep_get((pte_t *)pud);
> > +	if (pte_present(pte)) {
> > +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
> 
> The call to try_grab_page() seems a bit strange since flags will not include
> FOLL_GET | FOLL_PIN as tested above.  Is try_grab_page() always going be a
> noop here?

Thanks, you're right.  The "flags & (FOLL_GET | FOLL_PIN)" check should be
updated, probably to align with what follow_huge_pmd() checks first.

Thanks,
Naoya Horiguchi

> 
> -- 
> Mike Kravetz
> 
> > +			page = NULL;
> > +			goto out;
> > +		}
> > +	} else {
> > +		if (is_hugetlb_entry_migration(pte)) {
> > +			spin_unlock(ptl);
> > +			__migration_entry_wait(mm, (pte_t *)pud, ptl);
> > +			goto retry;
> > +		}
> > +	}
> > +out:
> > +	spin_unlock(ptl);
> > +	return page;
> >  }
> >  
> >  struct page * __weak
> > -- 
> > 2.25.1
> >
HORIGUCHI NAOYA(堀口 直也) June 27, 2022, 7:24 a.m. UTC | #5
On Sat, Jun 25, 2022 at 05:42:17PM +0800, Miaohe Lin wrote:
> On 2022/6/24 7:51, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > 
> > follow_pud_mask() does not support non-present pud entry now.  As long as
> > I tested on x86_64 server, follow_pud_mask() still simply returns
> > no_page_table() for non-present_pud_entry() due to pud_bad(), so no severe
> > user-visible effect should happen.  But generally we should call
> > follow_huge_pud() for non-present pud entry for 1GB hugetlb page.
> > 
> > Update pud_huge() and huge_pud() to handle non-present pud entries.  The
> > changes are similar to previous works for pud entries commit e66f17ff7177
> > ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> > cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> > 
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > ---
> >  arch/x86/mm/hugetlbpage.c |  3 ++-
> >  mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
> >  2 files changed, 27 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> > index a0d023cb4292..5fb86fb49ba8 100644
> > --- a/arch/x86/mm/hugetlbpage.c
> > +++ b/arch/x86/mm/hugetlbpage.c
> > @@ -70,7 +70,8 @@ int pmd_huge(pmd_t pmd)
> >  
> 
> No strong opinion but a comment similar to pmd_huge might be better?
> 
> /*
>  * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
>  * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
>  * Otherwise, returns 0.
>  */

OK, I'll add some.

> 
> >  int pud_huge(pud_t pud)
> >  {
> > -	return !!(pud_val(pud) & _PAGE_PSE);
> > +	return !pud_none(pud) &&
> > +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> >  }
> >  #endif
> >  
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f59f43c06601..b7ae5f73f3b2 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6946,10 +6946,34 @@ struct page * __weak
> >  follow_huge_pud(struct mm_struct *mm, unsigned long address,
> >  		pud_t *pud, int flags)
> >  {
> > +	struct page *page = NULL;
> > +	spinlock_t *ptl;
> > +	pte_t pte;
> > +
> >  	if (flags & (FOLL_GET | FOLL_PIN))
> >  		return NULL;
> 
> Should the above check be modified? It seems the below try_grab_page might not grab the page as
> expected (as Mike pointed out). Or the extra page refcnt is unneeded?

Yes, this check should be updated.

> 
> >  
> > -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +retry:
> > +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> > +	if (!pud_huge(*pud))
> > +		goto out;
> > +	pte = huge_ptep_get((pte_t *)pud);
> > +	if (pte_present(pte)) {
> > +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
> > +			page = NULL;
> > +			goto out;
> > +		}
> > +	} else {
> > +		if (is_hugetlb_entry_migration(pte)) {
> > +			spin_unlock(ptl);
> > +			__migration_entry_wait(mm, (pte_t *)pud, ptl);
> > +			goto retry;
> > +		}
> 
> Again. No strong opinion but a comment similar to follow_huge_pmd might be better?
> 
> /*
>  * hwpoisoned entry is treated as no_page_table in
>  * follow_page_mask().
>  */

Will add a comment on this too. Thank you.

- Naoya Horiguchi

Patch

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index a0d023cb4292..5fb86fb49ba8 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -70,7 +70,8 @@  int pmd_huge(pmd_t pmd)
 
 int pud_huge(pud_t pud)
 {
-	return !!(pud_val(pud) & _PAGE_PSE);
+	return !pud_none(pud) &&
+		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 #endif
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f59f43c06601..b7ae5f73f3b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6946,10 +6946,34 @@  struct page * __weak
 follow_huge_pud(struct mm_struct *mm, unsigned long address,
 		pud_t *pud, int flags)
 {
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t pte;
+
 	if (flags & (FOLL_GET | FOLL_PIN))
 		return NULL;
 
-	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+retry:
+	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
+	if (!pud_huge(*pud))
+		goto out;
+	pte = huge_ptep_get((pte_t *)pud);
+	if (pte_present(pte)) {
+		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
+			page = NULL;
+			goto out;
+		}
+	} else {
+		if (is_hugetlb_entry_migration(pte)) {
+			spin_unlock(ptl);
+			__migration_entry_wait(mm, (pte_t *)pud, ptl);
+			goto retry;
+		}
+	}
+out:
+	spin_unlock(ptl);
+	return page;
 }
 
 struct page * __weak