
[v2,06/19] mm/pagewalk: Check pfnmap for folio_walk_start()

Message ID 20240826204353.2228736-7-peterx@redhat.com (mailing list archive)
State New
Series mm: Support huge pfnmaps

Commit Message

Peter Xu Aug. 26, 2024, 8:43 p.m. UTC
Teach folio_walk_start() to recognize special pmd/pud mappings and fail
them properly, since a special mapping means there is no folio backing it.

Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/pagewalk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

David Hildenbrand Aug. 28, 2024, 7:44 a.m. UTC | #1
On 26.08.24 22:43, Peter Xu wrote:
> Teach folio_walk_start() to recognize special pmd/pud mappings, and fail
> them properly as it means there's no folio backing them.
> 
> Cc: David Hildenbrand <david@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>   mm/pagewalk.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index cd79fb3b89e5..12be5222d70e 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -753,7 +753,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
>   		fw->pudp = pudp;
>   		fw->pud = pud;
>   
> -		if (!pud_present(pud) || pud_devmap(pud)) {
> +		if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
>   			spin_unlock(ptl);
>   			goto not_found;
>   		} else if (!pud_leaf(pud)) {
> @@ -783,7 +783,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
>   		fw->pmdp = pmdp;
>   		fw->pmd = pmd;
>   
> -		if (pmd_none(pmd)) {
> +		if (pmd_none(pmd) || pmd_special(pmd)) {
>   			spin_unlock(ptl);
>   			goto not_found;
>   		} else if (!pmd_leaf(pmd)) {

As raised, this is not the right way to do it. You should follow what
CONFIG_ARCH_HAS_PTE_SPECIAL and vm_normal_page() do.

It's even spelled out in vm_normal_page_pmd() that at the time it was
introduced there was no pmd_special(), so there was no way to handle that.



diff --git a/mm/memory.c b/mm/memory.c
index f0cf5d02b4740..272445e9db147 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -672,15 +672,29 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
  {
         unsigned long pfn = pmd_pfn(pmd);
  
-       /*
-        * There is no pmd_special() but there may be special pmds, e.g.
-        * in a direct-access (dax) mapping, so let's just replicate the
-        * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
-        */
+       if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
+               if (likely(!pmd_special(pmd)))
+                       goto check_pfn;
+               if (vma->vm_ops && vma->vm_ops->find_special_page)
+                       return vma->vm_ops->find_special_page(vma, addr);
+               if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+                       return NULL;
+               if (is_huge_zero_pmd(pmd))
+                       return NULL;
+               if (pmd_devmap(pmd))
+                       /* See vm_normal_page() */
+                       return NULL;
+               return NULL;
+       }
+
+       /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
+
         if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                 if (vma->vm_flags & VM_MIXEDMAP) {
                         if (!pfn_valid(pfn))
                                 return NULL;
+                       if (is_huge_zero_pmd(pmd))
+                               return NULL;
                         goto out;
                 } else {
                         unsigned long off;
@@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
                 }
         }
  
+       /*
+        * For historical reasons, these might not have pmd_special() set,
+        * so we'll check them manually, in contrast to vm_normal_page().
+        */
+check_pfn:
         if (pmd_devmap(pmd))
                 return NULL;
         if (is_huge_zero_pmd(pmd))



We should then look into mapping huge zeropages also with pmd_special.
pmd_devmap we'll leave alone until removed. But that's independent of your series.

I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient and we don't need additional
CONFIG_ARCH_HAS_PMD_SPECIAL.

As I said, if you need someone to add vm_normal_page_pud(), I can handle that.
Peter Xu Aug. 28, 2024, 2:24 p.m. UTC | #2
On Wed, Aug 28, 2024 at 09:44:04AM +0200, David Hildenbrand wrote:
> On 26.08.24 22:43, Peter Xu wrote:
> > Teach folio_walk_start() to recognize special pmd/pud mappings, and fail
> > them properly as it means there's no folio backing them.
> > 
> > Cc: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >   mm/pagewalk.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> > index cd79fb3b89e5..12be5222d70e 100644
> > --- a/mm/pagewalk.c
> > +++ b/mm/pagewalk.c
> > @@ -753,7 +753,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
> >   		fw->pudp = pudp;
> >   		fw->pud = pud;
> > -		if (!pud_present(pud) || pud_devmap(pud)) {
> > +		if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
> >   			spin_unlock(ptl);
> >   			goto not_found;
> >   		} else if (!pud_leaf(pud)) {
> > @@ -783,7 +783,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
> >   		fw->pmdp = pmdp;
> >   		fw->pmd = pmd;
> > -		if (pmd_none(pmd)) {
> > +		if (pmd_none(pmd) || pmd_special(pmd)) {
> >   			spin_unlock(ptl);
> >   			goto not_found;
> >   		} else if (!pmd_leaf(pmd)) {
> 
> As raised, this is not the right way to do it. You should follow what
> CONFIG_ARCH_HAS_PTE_SPECIAL and vm_normal_page() do.
> 
> It's even spelled out in vm_normal_page_pmd() that at the time it was
> introduced there was no pmd_special(), so there was no way to handle that.

I can try to do something like that, but even so it'll be mostly cosmetic
changes, and AFAICT there's no real functional difference.

Meanwhile, see below comment.

> 
> 
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index f0cf5d02b4740..272445e9db147 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -672,15 +672,29 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>  {
>         unsigned long pfn = pmd_pfn(pmd);
> -       /*
> -        * There is no pmd_special() but there may be special pmds, e.g.
> -        * in a direct-access (dax) mapping, so let's just replicate the
> -        * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
> -        */

This one is correct; I overlooked that this comment may be obsolete.  I
can either refine this patch or add one on top to at least update the
comment.

> +       if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {

We don't yet have CONFIG_ARCH_HAS_PMD_SPECIAL, but I get your point.

> +               if (likely(!pmd_special(pmd)))
> +                       goto check_pfn;
> +               if (vma->vm_ops && vma->vm_ops->find_special_page)
> +                       return vma->vm_ops->find_special_page(vma, addr);

Why do we ever need this?  So far it's just a waste of cycles.  I think
it's better we leave that out until either xen/gntdev.c or any new driver
starts to use it, rather than keeping dead code around.

> +               if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> +                       return NULL;
> +               if (is_huge_zero_pmd(pmd))
> +                       return NULL;

This is meaningless too until we first make the huge zero pmd use the
special bit, which does sound like it's outside the scope of this series.

> +               if (pmd_devmap(pmd))
> +                       /* See vm_normal_page() */
> +                       return NULL;

When will it be pmd_devmap() if it's already pmd_special()?

> +               return NULL;

And see this one.. it's after:

  if (xxx)
      return NULL;
  if (yyy)
      return NULL;
  if (zzz)
      return NULL;
  return NULL;

Hmm??  If so, what's the difference if we simply check pmd_special and
return NULL..

> +       }
> +
> +       /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
> +
>         if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>                 if (vma->vm_flags & VM_MIXEDMAP) {
>                         if (!pfn_valid(pfn))
>                                 return NULL;
> +                       if (is_huge_zero_pmd(pmd))
> +                               return NULL;

I'd rather not touch here as this series doesn't change anything for
MIXEDMAP yet..

>                         goto out;
>                 } else {
>                         unsigned long off;
> @@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>                 }
>         }
> +       /*
> +        * For historical reasons, these might not have pmd_special() set,
> +        * so we'll check them manually, in contrast to vm_normal_page().
> +        */
> +check_pfn:
>         if (pmd_devmap(pmd))
>                 return NULL;
>         if (is_huge_zero_pmd(pmd))
> 
> 
> 
> We should then look into mapping huge zeropages also with pmd_special.
> pmd_devmap we'll leave alone until removed. But that's independent of your series.

This does look reasonable to match what we do with pte zeropage.  Could you
remind me what might be the benefit when we switch to using special bit for
pmd zero pages?

> 
> I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient and we don't need additional
> CONFIG_ARCH_HAS_PMD_SPECIAL.

The hope is we can always reuse the bit in the pte to work the same for
pmd/pud.

Now we require the arch to select ARCH_SUPPORTS_HUGE_PFNMAP to say "pmd/pud
have the same special bit defined".

> 
> As I said, if you need someone to add vm_normal_page_pud(), I can handle that.

I'm pretty confused why we need that for this series alone.

If you prefer vm_normal_page_pud() to be defined and check pud_special()
there, I can do that.  But again, I don't yet see how that can make a
functional difference considering the so far very limited usage of the
special bit, and wonder whether we can do that on top when it became
necessary (and when we start to have functional requirement of such).

Thanks,
David Hildenbrand Aug. 28, 2024, 3:30 p.m. UTC | #3
> This one is correct; I overlooked this comment which can be obsolete.  I
> can either refine this patch or add one patch on top to refine the comment
> at least.

Probably best if you use what you consider reasonable in your patch.

> 
>> +       if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
> 
> We don't yet have CONFIG_ARCH_HAS_PMD_SPECIAL, but I get your point.
> 
>> +               if (likely(!pmd_special(pmd)))
>> +                       goto check_pfn;
>> +               if (vma->vm_ops && vma->vm_ops->find_special_page)
>> +                       return vma->vm_ops->find_special_page(vma, addr);
> 
> Why do we ever need this?  This is so far destined to be totally a waste of
> cycles.  I think it's better we leave that until either xen/gntdev.c or any
> new driver start to use it, rather than keeping dead code around.

I just copy-pasted what we had in vm_normal_page() to showcase. If not 
required, good, we can add a comment why this is not required.

> 
>> +               if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
>> +                       return NULL;
>> +               if (is_huge_zero_pmd(pmd))
>> +                       return NULL;
> 
> This is meaningless too until we make huge zero pmd apply special bit
> first, which does sound like to be outside the scope of this series.

Again, copy-paste, but ...

> 
>> +               if (pmd_devmap(pmd))
>> +                       /* See vm_normal_page() */
>> +                       return NULL;
> 
> When will it be pmd_devmap() if it's already pmd_special()?
> 
>> +               return NULL;
> 
> And see this one.. it's after:
> 
>    if (xxx)
>        return NULL;
>    if (yyy)
>        return NULL;
>    if (zzz)
>        return NULL;
>    return NULL;
> 
> Hmm??  If so, what's the difference if we simply check pmd_special and
> return NULL..

Yes, they all return NULL. The compiler likely optimizes it all out. 
Maybe we have it like that for pure documentation purposes. But yeah, we 
should simply return NULL and think about cleaning up vm_normal_page() 
as well, it does look strange.

> 
>> +       }
>> +
>> +       /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
>> +
>>          if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>                  if (vma->vm_flags & VM_MIXEDMAP) {
>>                          if (!pfn_valid(pfn))
>>                                  return NULL;
>> +                       if (is_huge_zero_pmd(pmd))
>> +                               return NULL;
> 
> I'd rather not touch here as this series doesn't change anything for
> MIXEDMAP yet..

Yes, that can be a separate change.

> 
>>                          goto out;
>>                  } else {
>>                          unsigned long off;
>> @@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>                  }
>>          }
>> +       /*
>> +        * For historical reasons, these might not have pmd_special() set,
>> +        * so we'll check them manually, in contrast to vm_normal_page().
>> +        */
>> +check_pfn:
>>          if (pmd_devmap(pmd))
>>                  return NULL;
>>          if (is_huge_zero_pmd(pmd))
>>
>>
>>
>> We should then look into mapping huge zeropages also with pmd_special.
>> pmd_devmap we'll leave alone until removed. But that's independent of your series.
> 
> This does look reasonable to match what we do with pte zeropage.  Could you
> remind me what might be the benefit when we switch to using special bit for
> pmd zero pages?

See below. It's the way to tell the VM that a page is special, so you 
can avoid a separate check at relevant places, like GUP-fast or in 
vm_normal_*.

> 
>>
>> I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient and we don't need additional
>> CONFIG_ARCH_HAS_PMD_SPECIAL.
> 
> The hope is we can always reuse the bit in the pte to work the same for
> pmd/pud.
> 
> Now we require arch to select ARCH_SUPPORTS_HUGE_PFNMAP to say "pmd/pud has
> the same special bit defined".

Note that pte_special() is the way to signal to the VM that a PTE 
does not reference a refcounted page, or is similarly special and shall 
mostly be ignored. It doesn't imply that it is a PFNMAP pte, not at all.

The shared zeropage is usually not refcounted (except during GUP 
FOLL_GET ... but not FOLL_PIN) and the huge zeropage is usually also not 
refcounted (but FOLL_PIN still does it). Both are special.


If you take a look at the history pte_special(), it was introduced for 
VM_MIXEDMAP handling on s390x, because pfn_valid() to identify "special" 
pages did not work:

commit 7e675137a8e1a4d45822746456dd389b65745bf6
Author: Nicholas Piggin <npiggin@gmail.com>
Date:   Mon Apr 28 02:13:00 2008 -0700

     mm: introduce pte_special pte bit


In the meantime, it's required for architectures that want to support 
GUP-fast, I think, to make GUP-fast bail out and fall back to the slow 
path where we do a vm_normal_page() -- or fail right at the VMA check 
for now (VM_PFNMAP).

An architecture that doesn't implement pte_special() can support pfnmaps 
but not GUP-fast. Similarly, an architecture that doesn't implement 
pmd_special() can support huge pfnmaps, but not GUP-fast.

If you take a closer look, really the only two code paths that look at 
pte_special() are GUP-fast and vm_normal_page().

If we use pmd_special/pud_special in code other than that, we are 
diverging from the pte_special() model, and are likely doing something 
wrong.

I see how you arrived at the current approach, focusing exclusively on 
x86. But I think this just adds inconsistency.

So my point is that we use the same model, where we limit

* pmd_special() to GUP-fast and vm_normal_page_pmd()
* pud_special() to GUP-fast and vm_normal_page_pud()

And simply do the exact same thing as we do for pte_special().

If an arch supports pmd_special() and pud_special() we can support both 
types of hugepfn mappings. If not, an architecture *might* support it, 
depending on support for GUP-fast and maybe depending on MIXEDMAP 
support (again, just like pte_special()). Not your task to worry about, 
you will only "unlock" x86.

So maybe we do want CONFIG_ARCH_HAS_PMD_SPECIAL as well, maybe it can be 
glued to CONFIG_ARCH_HAS_PTE_SPECIAL (but I'm afraid it can't unless all 
archs support both). I'll leave that up to you.

> 
>>
>> As I said, if you need someone to add vm_normal_page_pud(), I can handle that.
> 
> I'm pretty confused why we need that for this series alone.

See above.

> 
> If you prefer vm_normal_page_pud() to be defined and check pud_special()
> there, I can do that.  But again, I don't yet see how that can make a
> functional difference considering the so far very limited usage of the
> special bit, and wonder whether we can do that on top when it became
> necessary (and when we start to have functional requirement of such).

I hope my explanation why pte_special() even exists and how it is used 
makes it clearer.

It's not that much code to handle it like pte_special(), really. I don't 
expect you to teach GUP-slow about vm_normal_page() etc.

If you want me to just takeover some stuff, let me know.
Peter Xu Aug. 28, 2024, 7:45 p.m. UTC | #4
On Wed, Aug 28, 2024 at 05:30:43PM +0200, David Hildenbrand wrote:
> > This one is correct; I overlooked this comment which can be obsolete.  I
> > can either refine this patch or add one patch on top to refine the comment
> > at least.
> 
> Probably best if you use what you consider reasonable in your patch.
> 
> > 
> > > +       if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
> > 
> > We don't yet have CONFIG_ARCH_HAS_PMD_SPECIAL, but I get your point.
> > 
> > > +               if (likely(!pmd_special(pmd)))
> > > +                       goto check_pfn;
> > > +               if (vma->vm_ops && vma->vm_ops->find_special_page)
> > > +                       return vma->vm_ops->find_special_page(vma, addr);
> > 
> > Why do we ever need this?  This is so far destined to be totally a waste of
> > cycles.  I think it's better we leave that until either xen/gntdev.c or any
> > new driver start to use it, rather than keeping dead code around.
> 
> I just copy-pasted what we had in vm_normal_page() to showcase. If not
> required, good, we can add a comment why this is not required.
> 
> > 
> > > +               if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> > > +                       return NULL;
> > > +               if (is_huge_zero_pmd(pmd))
> > > +                       return NULL;
> > 
> > This is meaningless too until we make huge zero pmd apply special bit
> > first, which does sound like to be outside the scope of this series.
> 
> Again, copy-paste, but ...
> 
> > 
> > > +               if (pmd_devmap(pmd))
> > > +                       /* See vm_normal_page() */
> > > +                       return NULL;
> > 
> > When will it be pmd_devmap() if it's already pmd_special()?
> > 
> > > +               return NULL;
> > 
> > And see this one.. it's after:
> > 
> >    if (xxx)
> >        return NULL;
> >    if (yyy)
> >        return NULL;
> >    if (zzz)
> >        return NULL;
> >    return NULL;
> > 
> > Hmm??  If so, what's the difference if we simply check pmd_special and
> > return NULL..
> 
> Yes, they all return NULL. The compiler likely optimizes it all out. Maybe
> we have it like that for pure documentation purposes. But yeah, we should
> simply return NULL and think about cleaning up vm_normal_page() as well, it
> does look strange.
> 
> > 
> > > +       }
> > > +
> > > +       /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
> > > +
> > >          if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
> > >                  if (vma->vm_flags & VM_MIXEDMAP) {
> > >                          if (!pfn_valid(pfn))
> > >                                  return NULL;
> > > +                       if (is_huge_zero_pmd(pmd))
> > > +                               return NULL;
> > 
> > I'd rather not touch here as this series doesn't change anything for
> > MIXEDMAP yet..
> 
> Yes, that can be a separate change.
> 
> > 
> > >                          goto out;
> > >                  } else {
> > >                          unsigned long off;
> > > @@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> > >                  }
> > >          }
> > > +       /*
> > > +        * For historical reasons, these might not have pmd_special() set,
> > > +        * so we'll check them manually, in contrast to vm_normal_page().
> > > +        */
> > > +check_pfn:
> > >          if (pmd_devmap(pmd))
> > >                  return NULL;
> > >          if (is_huge_zero_pmd(pmd))
> > > 
> > > 
> > > 
> > > We should then look into mapping huge zeropages also with pmd_special.
> > > pmd_devmap we'll leave alone until removed. But that's independent of your series.
> > 
> > This does look reasonable to match what we do with pte zeropage.  Could you
> > remind me what might be the benefit when we switch to using special bit for
> > pmd zero pages?
> 
> See below. It's the way to tell the VM that a page is special, so you can
> avoid a separate check at relevant places, like GUP-fast or in vm_normal_*.
> 
> > 
> > > 
> > > I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient and we don't need additional
> > > CONFIG_ARCH_HAS_PMD_SPECIAL.
> > 
> > The hope is we can always reuse the bit in the pte to work the same for
> > pmd/pud.
> > 
> > Now we require arch to select ARCH_SUPPORTS_HUGE_PFNMAP to say "pmd/pud has
> > the same special bit defined".
> 
> Note that pte_special() is the way to signal to the VM that a PTE does
> not reference a refcounted page, or is similarly special and shall mostly be
> ignored. It doesn't imply that it is a PFNMAP pte, not at all.

Right, it's just that this series started by using the pmd/pud special bit
solely for pfnmaps so far.  I'd agree, again, that it makes sense to keep
it consistent with ptes in the longer run, but that'll need to be done step
by step, and tested properly for each of the goals (e.g. when extending
that to the zeropage pmd).

> 
> The shared zeropage is usually not refcounted (except during GUP FOLL_GET
> ... but not FOLL_PIN) and the huge zeropage is usually also not refcounted
> (but FOLL_PIN still does it). Both are special.
> 
> 
> If you take a look at the history pte_special(), it was introduced for
> VM_MIXEDMAP handling on s390x, because pfn_valid() to identify "special"
> pages did not work:
> 
> commit 7e675137a8e1a4d45822746456dd389b65745bf6
> Author: Nicholas Piggin <npiggin@gmail.com>
> Date:   Mon Apr 28 02:13:00 2008 -0700
> 
>     mm: introduce pte_special pte bit
> 
> 
> In the meantime, it's required for architectures that wants to support
> GUP-fast I think, to make GUP-fast bail out and fallback to the slow path
> where we do a vm_normal_page() -- or fail right at the VMA check for now
> (VM_PFNMAP).

I wonder whether pfn_valid() would work for the archs that do not support
pte_special but want to enable gup-fast.

Meanwhile, I'm actually not 100% sure pte_special is only needed in
gup-fast.  See vm_normal_page() for VM_PFNMAP when the pte_special bit is
not defined:

		} else {
			unsigned long off;
			off = (addr - vma->vm_start) >> PAGE_SHIFT;
			if (pfn == vma->vm_pgoff + off) <------------------ [1]
				return NULL;
			if (!is_cow_mapping(vma->vm_flags))
				return NULL;
		}

I suspect things can go wrong when there's an assumption on vm_pgoff [1].
At least vfio-pci isn't storing the base PFN in vm_pgoff, so this check
will go wrong on any arch where pte_special is not supported but vfio-pci
is present.  I suspect more drivers can break it.

So I wonder if it's really the case in real life that only gup-fast would
need the special bit.  It could be that we thought of it like that, but
nobody has seriously tried running it without the special bit yet to see
what breaks.

This series so far limits huge pfnmaps to mappings with special bits; that
makes me feel safer as a starting point.

> 
> An architecture that doesn't implement pte_special() can support pfnmaps but
> not GUP-fast. Similarly, an architecture that doesn't implement
> pmd_special() can support huge pfnmaps, but not GUP-fast.
> 
> If you take a closer look, really the only two code paths that look at
> pte_special() are GUP-fast and vm_normal_page().
> 
> If we use pmd_special/pud_special in other code than that, we are diverging
> from the pte_special() model, and are likely doing something wrong.
> 
> I see how you arrived at the current approach, focusing exclusively on x86.
> But I think this just adds inconsistency.

Hmm, that's definitely not what I wanted to express..

IMHO it's that our current code base has very limited use of larger
mappings, especially pud, so even if I try to create the so-called
vm_normal_page_pud() to match the pte case, it'll mostly only contain the
pud special bit test.

We could add some pfn_valid() checks (even though I know of no arch where
I could support !special but rely on pfn_valid... there's nowhere I can
test that at all), process vm_ops->find_special_page() even though I know
nobody is using it, and so on (obviously the pud zeropage is missing, so
nothing to copy over there)... just trying to match vm_normal_page().

But so far they're all redundant, and I prefer not to add redundant or
dead code; as simple as that.  It makes more sense to me to stick with
what we know will work and go from there; then we can add things later,
justifying them properly step by step.

We indeed already have vm_normal_page_pmd(), please see below.

> 
> So my point is that we use the same model, where we limit
> 
> * pmd_special() to GUP-fast and vm_normal_page_pmd()
> * pud_special() to GUP-fast and vm_normal_page_pud()
> 
> And simply do the exact same thing as we do for pte_special().
> 
> If an arch supports pmd_special() and pud_special() we can support both
> types of hugepfn mappings. If not, an architecture *might* support it,
> depending on support for GUP-fast and maybe depending on MIXEDMAP support
> (again, just like pte_special()). Not your task to worry about, you will
> only "unlock" x86.

And arm64 2M.  Yes, I think I'd better leave the rest to others since I
have no idea how to even test them.  Even for the current arm64 bits Alex
was helping with the testing, as I don't really have the hardware on hand.

> 
> So maybe we do want CONFIG_ARCH_HAS_PMD_SPECIAL as well, maybe it can be
> glued to CONFIG_ARCH_HAS_PTE_SPECIAL (but I'm afraid it can't unless all
> archs support both). I'll leave that up to you.
> 
> > 
> > > 
> > > As I said, if you need someone to add vm_normal_page_pud(), I can handle that.
> > 
> > I'm pretty confused why we need that for this series alone.
> 
> See above.
> 
> > 
> > If you prefer vm_normal_page_pud() to be defined and check pud_special()
> > there, I can do that.  But again, I don't yet see how that can make a
> > functional difference considering the so far very limited usage of the
> > special bit, and wonder whether we can do that on top when it became
> > necessary (and when we start to have functional requirement of such).
> 
> I hope my explanation why pte_special() even exists and how it is used makes
> it clearer.
> 
> It's not that much code to handle it like pte_special(), really. I don't
> expect you to teach GUP-slow about vm_normal_page() etc.

One thing I can do here is move the pmd_special() check into the existing
vm_normal_page_pmd(); then it'll be a fixup on top of this patch:

===8<===
diff --git a/mm/memory.c b/mm/memory.c
index 288f81a8698e..42674c0748cb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -672,11 +672,10 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 {
 	unsigned long pfn = pmd_pfn(pmd);
 
-	/*
-	 * There is no pmd_special() but there may be special pmds, e.g.
-	 * in a direct-access (dax) mapping, so let's just replicate the
-	 * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
-	 */
+	/* Currently it's only used for huge pfnmaps */
+	if (unlikely(pmd_special(pmd)))
+		return NULL;
+
 	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
 		if (vma->vm_flags & VM_MIXEDMAP) {
 			if (!pfn_valid(pfn))
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 12be5222d70e..461ea3bbd8d9 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -783,7 +783,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
 		fw->pmdp = pmdp;
 		fw->pmd = pmd;
 
-		if (pmd_none(pmd) || pmd_special(pmd)) {
+		if (pmd_none(pmd)) {
 			spin_unlock(ptl);
 			goto not_found;
 		} else if (!pmd_leaf(pmd)) {
Jason Gunthorpe Aug. 28, 2024, 11:46 p.m. UTC | #5
On Wed, Aug 28, 2024 at 03:45:49PM -0400, Peter Xu wrote:

> Meanwhile I'm actually not 100% sure pte_special is only needed in
> gup-fast.  See vm_normal_page() and for VM_PFNMAP when pte_special bit is
> not defined:
> 
> 		} else {
> 			unsigned long off;
> 			off = (addr - vma->vm_start) >> PAGE_SHIFT;
> 			if (pfn == vma->vm_pgoff + off) <------------------ [1]
> 				return NULL;
> 			if (!is_cow_mapping(vma->vm_flags))
> 				return NULL;
> 		}
> 
> I suspect things can go wrong when there's assumption on vm_pgoff [1].  At
> least vfio-pci isn't storing vm_pgoff for the base PFN, so this check will
> go wrong when pte_special is not supported on any arch but when vfio-pci is
> present.  I suspect more drivers can break it.

I think that is a very important point.

IIRC this was done magically in one of the ioremap pfns type calls,
and if VFIO is using fault instead it won't do it.

This probably needs more hand holding for the driver somehow..

> So I wonder if it's really the case in real life that only gup-fast would
> need the special bit.  It could be that we thought it like that, but nobody
> really seriously tried run it without special bit yet to see things broke.

Indeed.

What arches even use the whole 'special but not special' system?

Can we start banning some of this stuff on non-special arches?

Jason
David Hildenbrand Aug. 29, 2024, 6:35 a.m. UTC | #6
On 29.08.24 01:46, Jason Gunthorpe wrote:
> On Wed, Aug 28, 2024 at 03:45:49PM -0400, Peter Xu wrote:
> 
>> Meanwhile I'm actually not 100% sure pte_special is only needed in
>> gup-fast.  See vm_normal_page() and for VM_PFNMAP when pte_special bit is
>> not defined:
>>
>> 		} else {
>> 			unsigned long off;
>> 			off = (addr - vma->vm_start) >> PAGE_SHIFT;
>> 			if (pfn == vma->vm_pgoff + off) <------------------ [1]
>> 				return NULL;
>> 			if (!is_cow_mapping(vma->vm_flags))
>> 				return NULL;
>> 		}
>>
>> I suspect things can go wrong when there's assumption on vm_pgoff [1].  At
>> least vfio-pci isn't storing vm_pgoff for the base PFN, so this check will
>> go wrong when pte_special is not supported on any arch but when vfio-pci is
>> present.  I suspect more drivers can break it.

Fortunately, we did an excellent job at documenting vm_normal_page():

  * There are 2 broad cases. Firstly, an architecture may define a pte_special()
  * pte bit, in which case this function is trivial. Secondly, an architecture
  * may not have a spare pte bit, which requires a more complicated scheme,
  * described below.
  *
  * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
  * special mapping (even if there are underlying and valid "struct pages").
  * COWed pages of a VM_PFNMAP are always normal.
  *
  * The way we recognize COWed pages within VM_PFNMAP mappings is through the
  * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
  * set, and the vm_pgoff will point to the first PFN mapped: thus every special
  * mapping will always honor the rule
  *
  *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
  *
  * And for normal mappings this is false.
  *

remap_pfn_range_notrack() will currently handle that for us:

if (is_cow_mapping(vma->vm_flags)) {
	if (addr != vma->vm_start || end != vma->vm_end)
		return -EINVAL;
}

Even if [1] succeeded, the is_cow_mapping() check would make vm_normal_page()
return NULL, and it would all work as expected, even without pte_special().

Because VM_PFNMAP is easy: in a !COW mapping, everything is special.

> 
> I think that is a very important point.
> 
> IIRC this was done magically in one of the ioremap pfns type calls,
> and if VFIO is using fault instead it won't do it.
> 
> This probably needs more hand holding for the driver somehow..

As long as these drivers don't support COW mappings, it's all good.

And IIUC, we cannot support COW mappings if we don't use remap_pfn_range().

For this reason, remap_pfn_range() also bails out in a COW mapping when the
mapping does not cover the whole VMA.

It would be great if we could detect and fail that. Likely, when manually
inserting PFNs (*not* using remap_pfn_range()), we would have to WARN if we
stumble over a COW mapping.

In the meantime, we should really avoid any new VM_PFNMAP COW users ...

> 
>> So I wonder if it's really the case in real life that only gup-fast would
>> need the special bit.  It could be that we thought of it like that, but nobody
>> has really seriously tried to run it without the special bit yet to see what breaks.
> 
> Indeed.

VM_PFNMAP for sure works.

VM_MIXEDMAP, I am not so sure. The s390x commit introducing pte_special() [again,
I posted the commit] spelled out why they need it: because pfn_valid() could have
returned non-refcounted pages. One would have to dig into whether that is still
the case as of today, and whether other architectures have similar constraints.


> 
> What arches even use the whole 'special but not special' system?
> 
> Can we start banning some of this stuff on non-special arches?

Again, VM_PFNMAP is not a problem. Only VM_MIXEDMAP, and I would love to
see that go. There are some, but not that many users ... but I'm afraid it's
not that easy :)
David Hildenbrand Aug. 29, 2024, 3:10 p.m. UTC | #7
>>>
>>> If you prefer vm_normal_page_pud() to be defined and check pud_special()
>>> there, I can do that.  But again, I don't yet see how that can make a
>>> functional difference considering the so far very limited usage of the
>>> special bit, and wonder whether we can do that on top when it becomes
>>> necessary (and when we start to have a functional requirement for it).
>>
>> I hope my explanation why pte_special() even exists and how it is used makes
>> it clearer.
>>
>> It's not that much code to handle it like pte_special(), really. I don't
>> expect you to teach GUP-slow about vm_normal_page() etc.
> 
> One thing I can do here is move the pmd_special() check into the existing
> vm_normal_page_pmd(); then it'll be a fixup on top of this patch:
> 
> ===8<===
> diff --git a/mm/memory.c b/mm/memory.c
> index 288f81a8698e..42674c0748cb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -672,11 +672,10 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>   {
>   	unsigned long pfn = pmd_pfn(pmd);
>   
> -	/*
> -	 * There is no pmd_special() but there may be special pmds, e.g.
> -	 * in a direct-access (dax) mapping, so let's just replicate the
> -	 * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
> -	 */
> +	/* Currently it's only used for huge pfnmaps */
> +	if (unlikely(pmd_special(pmd)))
> +		return NULL;


Better.

I'd appreciate a vm_normal_page_pud(), but I guess I have to be the one 
cleaning up the mess after you.
Peter Xu Aug. 29, 2024, 6:45 p.m. UTC | #8
On Thu, Aug 29, 2024 at 08:35:49AM +0200, David Hildenbrand wrote:
> Fortunately, we did an excellent job at documenting vm_normal_page():
> 
>  * There are 2 broad cases. Firstly, an architecture may define a pte_special()
>  * pte bit, in which case this function is trivial. Secondly, an architecture
>  * may not have a spare pte bit, which requires a more complicated scheme,
>  * described below.
>  *
>  * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
>  * special mapping (even if there are underlying and valid "struct pages").
>  * COWed pages of a VM_PFNMAP are always normal.
>  *
>  * The way we recognize COWed pages within VM_PFNMAP mappings is through the
>  * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
>  * set, and the vm_pgoff will point to the first PFN mapped: thus every special
>  * mapping will always honor the rule
>  *
>  *	pfn_of_page == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)
>  *
>  * And for normal mappings this is false.
>  *
> 
> remap_pfn_range_notrack() will currently handle that for us:
> 
> if (is_cow_mapping(vma->vm_flags)) {
> 	if (addr != vma->vm_start || end != vma->vm_end)
> 		return -EINVAL;
> }
> 
> Even if [1] would succeed, the is_cow_mapping() check will return NULL and it will
> all work as expected, even without pte_special().

IMHO referencing vm_pgoff is ambiguous, and could be wrong, without a
clear contract.

For example, consider a driver that sets up a MAP_PRIVATE + VM_PFNMAP vma
with vm_pgoff set not to the "base PFN" but to some random value.  Then, for
a COWed page, it's possible the calculation accidentally satisfies "pfn ==
vma->vm_pgoff + off", and it could wrongly return NULL rather than the
COWed anonymous page here.  This is extremely unlikely, but it shows
why it's wrong to reference vm_pgoff at all.

> 
> Because VM_PFNMAP is easy: in a !COW mapping, everything is special.

Yes, it's safe for vfio-pci, as vfio-pci doesn't have private mappings.  But
still, I don't think it's clear enough how VM_PFNMAP should be mapped.
Peter Xu Aug. 29, 2024, 6:49 p.m. UTC | #9
On Thu, Aug 29, 2024 at 05:10:15PM +0200, David Hildenbrand wrote:
> > > > 
> > > > If you prefer vm_normal_page_pud() to be defined and check pud_special()
> > > > there, I can do that.  But again, I don't yet see how that can make a
> > > > functional difference considering the so far very limited usage of the
> > > > special bit, and wonder whether we can do that on top when it becomes
> > > > necessary (and when we start to have a functional requirement for it).
> > > 
> > > I hope my explanation why pte_special() even exists and how it is used makes
> > > it clearer.
> > > 
> > > It's not that much code to handle it like pte_special(), really. I don't
> > > expect you to teach GUP-slow about vm_normal_page() etc.
> > 
> > One thing I can do here is move the pmd_special() check into the existing
> > vm_normal_page_pmd(); then it'll be a fixup on top of this patch:
> > 
> > ===8<===
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 288f81a8698e..42674c0748cb 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -672,11 +672,10 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> >   {
> >   	unsigned long pfn = pmd_pfn(pmd);
> > -	/*
> > -	 * There is no pmd_special() but there may be special pmds, e.g.
> > -	 * in a direct-access (dax) mapping, so let's just replicate the
> > -	 * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
> > -	 */
> > +	/* Currently it's only used for huge pfnmaps */
> > +	if (unlikely(pmd_special(pmd)))
> > +		return NULL;
> 
> 
> Better.
> 
> I'd appreciate a vm_normal_page_pud(), but I guess I have to be the one
> cleaning up the mess after you.

I'll either prepare a fixup patch for the above, or repost if there are more
changes.

Again, please leave an explicit comment.

As I mentioned, to me vm_normal_page_pud() should currently only contain the
pud_special() check, as most of what the pmd/pte variants do doesn't seem to apply.

I don't feel strongly about adding that wrapper yet in this case, but if you can
elaborate on what you're suggesting otherwise, it may help me understand
what you're looking for, and then I can try to address it.

Or if you prefer some cleanup patch by yourself, please go ahead.

Patch

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index cd79fb3b89e5..12be5222d70e 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -753,7 +753,7 @@  struct folio *folio_walk_start(struct folio_walk *fw,
 		fw->pudp = pudp;
 		fw->pud = pud;
 
-		if (!pud_present(pud) || pud_devmap(pud)) {
+		if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
 			spin_unlock(ptl);
 			goto not_found;
 		} else if (!pud_leaf(pud)) {
@@ -783,7 +783,7 @@  struct folio *folio_walk_start(struct folio_walk *fw,
 		fw->pmdp = pmdp;
 		fw->pmd = pmd;
 
-		if (pmd_none(pmd)) {
+		if (pmd_none(pmd) || pmd_special(pmd)) {
 			spin_unlock(ptl);
 			goto not_found;
 		} else if (!pmd_leaf(pmd)) {