
[v1,01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64

Message ID 20240122194200.381241-2-david@redhat.com (mailing list archive)
State New
Series mm/memory: optimize fork() with PTE-mapped THP

Commit Message

David Hildenbrand Jan. 22, 2024, 7:41 p.m. UTC
We want to make use of pte_next_pfn() outside of set_ptes(). Let's
simply define PFN_PTE_SHIFT, required by pte_next_pfn().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/arm/include/asm/pgtable.h   | 2 ++
 arch/arm64/include/asm/pgtable.h | 2 ++
 2 files changed, 4 insertions(+)

Comments

Ryan Roberts Jan. 23, 2024, 10:34 a.m. UTC | #1
On 22/01/2024 19:41, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/arm/include/asm/pgtable.h   | 2 ++
>  arch/arm64/include/asm/pgtable.h | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
> index d657b84b6bf70..be91e376df79e 100644
> --- a/arch/arm/include/asm/pgtable.h
> +++ b/arch/arm/include/asm/pgtable.h
> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>  extern void __sync_icache_dcache(pte_t pteval);
>  #endif
>  
> +#define PFN_PTE_SHIFT		PAGE_SHIFT
> +
>  void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		      pte_t *ptep, pte_t pteval, unsigned int nr);
>  #define set_ptes set_ptes
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 79ce70fbb751c..d4b3bd96e3304 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>  		mte_sync_tags(pte, nr_pages);
>  }
>  
> +#define PFN_PTE_SHIFT		PAGE_SHIFT

I think this is buggy, and so is the arm64 implementation of set_ptes(). It
works fine for 48-bit output addresses, but for 52-bit OAs the high bits are
not kept contiguously, so if you happen to be setting a mapping for which the
physical memory block straddles bit 48, this won't work.
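
For context, the generic pte_next_pfn() fallback (quoting from memory, so
treat this as a sketch of what I believe include/linux/pgtable.h does) just
adds 1 << PFN_PTE_SHIFT to the raw pte value, so it can only be correct when
the PFN bits are contiguous in the PTE:

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	/* assumes the whole PFN lives contiguously at bit PFN_PTE_SHIFT */
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif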

Today, only the 64K base page config can support 52 bits, and for this,
OA[51:48] are stored in PTE[15:12]. But 52-bit support for 4K and 16K base
pages is coming (hopefully v6.9), and in that case OA[51:50] are stored in
PTE[9:8]. Fortunately we already have helpers in arm64 to abstract this.

So I think arm64 will want to define its own pte_next_pfn():

#define pte_next_pfn pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}

I'll do a separate patch to fix the already broken arm64 set_ptes() implementation.
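
Roughly, the loop in set_ptes() would then become something like the below;
just a sketch on top of the current arm64 implementation (which, as far as I
can see, open-codes the increment on the raw pte value), not the final patch:

	for (;;) {
		__check_safe_pte_update(mm, ptep, pte);
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		/* handles the non-contiguous high OA bits via pte_pfn()/pfn_pte() */
		pte = pte_next_pfn(pte);
	}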

I'm not sure if this type of problem might also apply to other arches?


> +
>  static inline void set_ptes(struct mm_struct *mm,
>  			    unsigned long __always_unused addr,
>  			    pte_t *ptep, pte_t pte, unsigned int nr)
David Hildenbrand Jan. 23, 2024, 10:48 a.m. UTC | #2
On 23.01.24 11:34, Ryan Roberts wrote:
> On 22/01/2024 19:41, David Hildenbrand wrote:
>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   arch/arm/include/asm/pgtable.h   | 2 ++
>>   arch/arm64/include/asm/pgtable.h | 2 ++
>>   2 files changed, 4 insertions(+)
>>
>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>> index d657b84b6bf70..be91e376df79e 100644
>> --- a/arch/arm/include/asm/pgtable.h
>> +++ b/arch/arm/include/asm/pgtable.h
>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>   extern void __sync_icache_dcache(pte_t pteval);
>>   #endif
>>   
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>> +
>>   void set_ptes(struct mm_struct *mm, unsigned long addr,
>>   		      pte_t *ptep, pte_t pteval, unsigned int nr);
>>   #define set_ptes set_ptes
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 79ce70fbb751c..d4b3bd96e3304 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>   		mte_sync_tags(pte, nr_pages);
>>   }
>>   
>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
> 
> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
> kept contigously, so if you happen to be setting a mapping for which the
> physical memory block straddles bit 48, this won't work.

Right, as soon as the PTE bits are not contiguous, this stops working,
just like set_ptes() would, which I used as a reference.

> 
> Today, only the 64K base page config can support 52 bits, and for this,
> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
> Fortunately we already have helpers in arm64 to abstract this.
> 
> So I think arm64 will want to define its own pte_next_pfn():
> 
> #define pte_next_pfn pte_next_pfn
> static inline pte_t pte_next_pfn(pte_t pte)
> {
> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
> }
> 
> I'll do a separate patch to fix the already broken arm64 set_ptes() implementation.

Makes sense.

> 
> I'm not sure if this type of problem might also apply to other arches?

I saw similar handling in the PPC implementation of set_ptes(), but was
not able to convince myself that it is actually required there.

pte_pfn on ppc does:

static inline unsigned long pte_pfn(pte_t pte)
{
	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
}

But that means that the PFNs *are* contiguous. If high bits are used for
something else, then we might produce a garbage PTE on overflow, but I
concluded that shouldn't really matter for folio_pte_batch() purposes:
we'd not detect "belongs to this folio batch" either way.

Maybe it's cleaner to also have a custom pte_next_pfn() on ppc; I just
hope that we don't lose any other arbitrary PTE bits by doing the
pte_pgprot().


I guess pte_pfn() implementations should tell us if anything special 
needs to happen.
David Hildenbrand Jan. 23, 2024, 11:02 a.m. UTC | #3
On 23.01.24 11:48, David Hildenbrand wrote:
> On 23.01.24 11:34, Ryan Roberts wrote:
>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>    arch/arm/include/asm/pgtable.h   | 2 ++
>>>    arch/arm64/include/asm/pgtable.h | 2 ++
>>>    2 files changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>> index d657b84b6bf70..be91e376df79e 100644
>>> --- a/arch/arm/include/asm/pgtable.h
>>> +++ b/arch/arm/include/asm/pgtable.h
>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>    extern void __sync_icache_dcache(pte_t pteval);
>>>    #endif
>>>    
>>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>>> +
>>>    void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>    		      pte_t *ptep, pte_t pteval, unsigned int nr);
>>>    #define set_ptes set_ptes
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>>    		mte_sync_tags(pte, nr_pages);
>>>    }
>>>    
>>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>>
>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>> kept contigously, so if you happen to be setting a mapping for which the
>> physical memory block straddles bit 48, this won't work.
> 
> Right, as soon as the PTE bits are not contiguous, this stops working,
> just like set_ptes() would, which I used as orientation.
> 
>>
>> Today, only the 64K base page config can support 52 bits, and for this,
>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>> Fortunately we already have helpers in arm64 to abstract this.
>>
>> So I think arm64 will want to define its own pte_next_pfn():
>>
>> #define pte_next_pfn pte_next_pfn
>> static inline pte_t pte_next_pfn(pte_t pte)
>> {
>> 	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>> }
>>

Digging into the details, on arm64 we have:

#define pte_pfn(pte)           (__pte_to_phys(pte) >> PAGE_SHIFT)

and

#define __pte_to_phys(pte)     (pte_val(pte) & PTE_ADDR_MASK)

But that implies that upstream the PFN is always contiguous, no?
Ryan Roberts Jan. 23, 2024, 11:08 a.m. UTC | #4
On 23/01/2024 10:48, David Hildenbrand wrote:
> On 23.01.24 11:34, Ryan Roberts wrote:
>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>   arch/arm/include/asm/pgtable.h   | 2 ++
>>>   arch/arm64/include/asm/pgtable.h | 2 ++
>>>   2 files changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>> index d657b84b6bf70..be91e376df79e 100644
>>> --- a/arch/arm/include/asm/pgtable.h
>>> +++ b/arch/arm/include/asm/pgtable.h
>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>   extern void __sync_icache_dcache(pte_t pteval);
>>>   #endif
>>>   +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>> +
>>>   void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>                 pte_t *ptep, pte_t pteval, unsigned int nr);
>>>   #define set_ptes set_ptes
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte,
>>> unsigned int nr_pages)
>>>           mte_sync_tags(pte, nr_pages);
>>>   }
>>>   +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>
>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>> kept contigously, so if you happen to be setting a mapping for which the
>> physical memory block straddles bit 48, this won't work.
> 
> Right, as soon as the PTE bits are not contiguous, this stops working, just like
> set_ptes() would, which I used as orientation.
> 
>>
>> Today, only the 64K base page config can support 52 bits, and for this,
>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>> Fortunately we already have helpers in arm64 to abstract this.
>>
>> So I think arm64 will want to define its own pte_next_pfn():
>>
>> #define pte_next_pfn pte_next_pfn
>> static inline pte_t pte_next_pfn(pte_t pte)
>> {
>>     return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>> }
>>
>> I'll do a separate patch to fix the already broken arm64 set_ptes()
>> implementation.
> 
> Make sense.
> 
>>
>> I'm not sure if this type of problem might also apply to other arches?
> 
> I saw similar handling in the PPC implementation of set_ptes, but was not able
> to convince me that it is actually required there.
> 
> pte_pfn on ppc does:
> 
> static inline unsigned long pte_pfn(pte_t pte)
> {
>     return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
> }
> 
> But that means that the PFNs *are* contiguous.

All the ppc pfn_pte() implementations also only shift the pfn, so I think ppc is
safe to just define PFN_PTE_SHIFT. Although 2 of the 3 implementations shift by
PTE_RPN_SHIFT and the other shifts by PAGE_SHIFT, so you might want to define
PFN_PTE_SHIFT separately for all 3 configs?
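
Hypothetically, something along these lines in each of those configs'
headers (just a sketch; PTE_RPN_SHIFT is whatever the config already
defines):

#define PFN_PTE_SHIFT	PTE_RPN_SHIFT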

> If high bits are used for
> something else, then we might produce a garbage PTE on overflow, but that
> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
> detect "belongs to this folio batch" either way.

Exactly.

> 
> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().

I don't see the need for ppc to implement pte_next_pfn().

pte_pgprot() is not a "proper" arch interface (it's only required by the core-mm
if the arch implements a certain Kconfig IIRC). For arm64, all bits that are not
pfn are pgprot, so there are no bits lost.
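
For reference, arm64's pte_pgprot() (from memory, so double-check) derives the
pgprot by xor-ing out exactly the pfn bits, which is why nothing else can get
lost:

static inline pgprot_t pte_pgprot(pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);

	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
}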

> 
> 
> I guess pte_pfn() implementations should tell us if anything special needs to
> happen.
>
Christophe Leroy Jan. 23, 2024, 11:10 a.m. UTC | #5
On 23/01/2024 at 11:48, David Hildenbrand wrote:
> On 23.01.24 11:34, Ryan Roberts wrote:
>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>   arch/arm/include/asm/pgtable.h   | 2 ++
>>>   arch/arm64/include/asm/pgtable.h | 2 ++
>>>   2 files changed, 4 insertions(+)
>>>
>>> diff --git a/arch/arm/include/asm/pgtable.h 
>>> b/arch/arm/include/asm/pgtable.h
>>> index d657b84b6bf70..be91e376df79e 100644
>>> --- a/arch/arm/include/asm/pgtable.h
>>> +++ b/arch/arm/include/asm/pgtable.h
>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t 
>>> pteval)
>>>   extern void __sync_icache_dcache(pte_t pteval);
>>>   #endif
>>> +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>> +
>>>   void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>                 pte_t *ptep, pte_t pteval, unsigned int nr);
>>>   #define set_ptes set_ptes
>>> diff --git a/arch/arm64/include/asm/pgtable.h 
>>> b/arch/arm64/include/asm/pgtable.h
>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t 
>>> pte, unsigned int nr_pages)
>>>           mte_sync_tags(pte, nr_pages);
>>>   }
>>> +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>
>> I think this is buggy. And so is the arm64 implementation of 
>> set_ptes(). It
>> works fine for 48-bit output address, but for 52-bit OAs, the high 
>> bits are not
>> kept contigously, so if you happen to be setting a mapping for which the
>> physical memory block straddles bit 48, this won't work.
> 
> Right, as soon as the PTE bits are not contiguous, this stops working, 
> just like set_ptes() would, which I used as orientation.
> 
>>
>> Today, only the 64K base page config can support 52 bits, and for this,
>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base 
>> pages is
>> coming (hopefully v6.9) and in this case OA[51:50] are stored in 
>> PTE[9:8].
>> Fortunately we already have helpers in arm64 to abstract this.
>>
>> So I think arm64 will want to define its own pte_next_pfn():
>>
>> #define pte_next_pfn pte_next_pfn
>> static inline pte_t pte_next_pfn(pte_t pte)
>> {
>>     return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>> }
>>
>> I'll do a separate patch to fix the already broken arm64 set_ptes() 
>> implementation.
> 
> Make sense.
> 
>>
>> I'm not sure if this type of problem might also apply to other arches?
> 
> I saw similar handling in the PPC implementation of set_ptes, but was 
> not able to convince me that it is actually required there.
> 
> pte_pfn on ppc does:
> 
> static inline unsigned long pte_pfn(pte_t pte)
> {
>      return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
> }
> 
> But that means that the PFNs *are* contiguous. If high bits are used for 
> something else, then we might produce a garbage PTE on overflow, but 
> that shouldn't really matter I concluded for folio_pte_batch() purposes, 
> we'd not detect "belongs to this folio batch" either way.

Yes, PFNs are contiguous. The only thing is that the PFN is not located
at PAGE_SHIFT; see
https://elixir.bootlin.com/linux/v6.3-rc2/source/arch/powerpc/include/asm/nohash/pte-e500.h#L63

On powerpc e500 we have 24 PTE flags and the RPN starts above that.

The mask is then standard:

#define PTE_RPN_MASK	(~((1ULL << PTE_RPN_SHIFT) - 1))

Christophe

> 
> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I 
> just hope that we don't lose any other arbitrary PTE bits by doing the 
> pte_pgprot().
> 
> 
> I guess pte_pfn() implementations should tell us if anything special 
> needs to happen.
>
Christophe Leroy Jan. 23, 2024, 11:16 a.m. UTC | #6
On 23/01/2024 at 12:08, Ryan Roberts wrote:
> On 23/01/2024 10:48, David Hildenbrand wrote:
>> On 23.01.24 11:34, Ryan Roberts wrote:
>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>>    arch/arm/include/asm/pgtable.h   | 2 ++
>>>>    arch/arm64/include/asm/pgtable.h | 2 ++
>>>>    2 files changed, 4 insertions(+)
>>>>
>>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>>> index d657b84b6bf70..be91e376df79e 100644
>>>> --- a/arch/arm/include/asm/pgtable.h
>>>> +++ b/arch/arm/include/asm/pgtable.h
>>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>>    extern void __sync_icache_dcache(pte_t pteval);
>>>>    #endif
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>> +
>>>>    void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>>                  pte_t *ptep, pte_t pteval, unsigned int nr);
>>>>    #define set_ptes set_ptes
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte,
>>>> unsigned int nr_pages)
>>>>            mte_sync_tags(pte, nr_pages);
>>>>    }
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>
>>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>>> kept contigously, so if you happen to be setting a mapping for which the
>>> physical memory block straddles bit 48, this won't work.
>>
>> Right, as soon as the PTE bits are not contiguous, this stops working, just like
>> set_ptes() would, which I used as orientation.
>>
>>>
>>> Today, only the 64K base page config can support 52 bits, and for this,
>>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>>> Fortunately we already have helpers in arm64 to abstract this.
>>>
>>> So I think arm64 will want to define its own pte_next_pfn():
>>>
>>> #define pte_next_pfn pte_next_pfn
>>> static inline pte_t pte_next_pfn(pte_t pte)
>>> {
>>>      return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>> }
>>>
>>> I'll do a separate patch to fix the already broken arm64 set_ptes()
>>> implementation.
>>
>> Make sense.
>>
>>>
>>> I'm not sure if this type of problem might also apply to other arches?
>>
>> I saw similar handling in the PPC implementation of set_ptes, but was not able
>> to convince me that it is actually required there.
>>
>> pte_pfn on ppc does:
>>
>> static inline unsigned long pte_pfn(pte_t pte)
>> {
>>      return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
>> }
>>
>> But that means that the PFNs *are* contiguous.
> 
> all the ppc pfn_pte() implementations also only shift the pfn, so I think ppc is
> safe to just define PFN_PTE_SHIFT. Although 2 of the 3 implementations shift by
> PTE_RPN_SHIFT and the other shifts by PAGE_SIZE, so you might want to define
> PFN_PTE_SHIFT separately for all 3 configs?

We have PTE_RPN_SHIFT defined for all 4 implementations; for some of
them, you are right, it is defined as PAGE_SHIFT, but I see no reason to
define PFN_PTE_SHIFT separately.

> 
>> If high bits are used for
>> something else, then we might produce a garbage PTE on overflow, but that
>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>> detect "belongs to this folio batch" either way.
> 
> Exactly.
> 
>>
>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
> 
> I don't see the need for ppc to implement pte_next_pfn().

Agreed.

> 
> pte_pgprot() is not a "proper" arch interface (its only required by the core-mm
> if the arch implements a certain Kconfig IIRC). For arm64, all bits that are not
> pfn are pgprot, so there are no bits lost.
> 
>>
>>
>> I guess pte_pfn() implementations should tell us if anything special needs to
>> happen.
>>
>
Ryan Roberts Jan. 23, 2024, 11:17 a.m. UTC | #7
On 23/01/2024 11:02, David Hildenbrand wrote:
> On 23.01.24 11:48, David Hildenbrand wrote:
>> On 23.01.24 11:34, Ryan Roberts wrote:
>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>>    arch/arm/include/asm/pgtable.h   | 2 ++
>>>>    arch/arm64/include/asm/pgtable.h | 2 ++
>>>>    2 files changed, 4 insertions(+)
>>>>
>>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>>> index d657b84b6bf70..be91e376df79e 100644
>>>> --- a/arch/arm/include/asm/pgtable.h
>>>> +++ b/arch/arm/include/asm/pgtable.h
>>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>>    extern void __sync_icache_dcache(pte_t pteval);
>>>>    #endif
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>> +
>>>>    void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>>                  pte_t *ptep, pte_t pteval, unsigned int nr);
>>>>    #define set_ptes set_ptes
>>>> diff --git a/arch/arm64/include/asm/pgtable.h
>>>> b/arch/arm64/include/asm/pgtable.h
>>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte,
>>>> unsigned int nr_pages)
>>>>            mte_sync_tags(pte, nr_pages);
>>>>    }
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>
>>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>>> kept contigously, so if you happen to be setting a mapping for which the
>>> physical memory block straddles bit 48, this won't work.
>>
>> Right, as soon as the PTE bits are not contiguous, this stops working,
>> just like set_ptes() would, which I used as orientation.
>>
>>>
>>> Today, only the 64K base page config can support 52 bits, and for this,
>>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>>> Fortunately we already have helpers in arm64 to abstract this.
>>>
>>> So I think arm64 will want to define its own pte_next_pfn():
>>>
>>> #define pte_next_pfn pte_next_pfn
>>> static inline pte_t pte_next_pfn(pte_t pte)
>>> {
>>>     return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>> }
>>>
> 
> Digging into the details, on arm64 we have:
> 
> #define pte_pfn(pte)           (__pte_to_phys(pte) >> PAGE_SHIFT)
> 
> and
> 
> #define __pte_to_phys(pte)     (pte_val(pte) & PTE_ADDR_MASK)
> 
> But that implies, that upstream the PFN is always contiguous, no?
> 


But __pte_to_phys() and __phys_to_pte_val() depend on a Kconfig option. If the
PA size is 52 bits, the bits are not all contiguous:

#ifdef CONFIG_ARM64_PA_BITS_52
static inline phys_addr_t __pte_to_phys(pte_t pte)
{
	return (pte_val(pte) & PTE_ADDR_LOW) |
		((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}
static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
{
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
}
#else
#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
#define __phys_to_pte_val(phys)	(phys)
#endif
David Hildenbrand Jan. 23, 2024, 11:31 a.m. UTC | #8
>>
>>> If high bits are used for
>>> something else, then we might produce a garbage PTE on overflow, but that
>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>> detect "belongs to this folio batch" either way.
>>
>> Exactly.
>>
>>>
>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>
>> I don't see the need for ppc to implement pte_next_pfn().
> 
> Agreed.

So likely we should then do on top for powerpc (whitespace damage):

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index a04ae4449a025..549a440ed7f65 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
                         break;
                 ptep++;
                 addr += PAGE_SIZE;
-               /*
-                * increment the pfn.
-                */
-               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
+               pte = pte_next_pfn(pte);
         }
  }
David Hildenbrand Jan. 23, 2024, 11:33 a.m. UTC | #9
On 23.01.24 12:17, Ryan Roberts wrote:
> On 23/01/2024 11:02, David Hildenbrand wrote:
>> On 23.01.24 11:48, David Hildenbrand wrote:
>>> On 23.01.24 11:34, Ryan Roberts wrote:
>>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>>>
>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>> ---
>>>>>     arch/arm/include/asm/pgtable.h   | 2 ++
>>>>>     arch/arm64/include/asm/pgtable.h | 2 ++
>>>>>     2 files changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>>>> index d657b84b6bf70..be91e376df79e 100644
>>>>> --- a/arch/arm/include/asm/pgtable.h
>>>>> +++ b/arch/arm/include/asm/pgtable.h
>>>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>>>     extern void __sync_icache_dcache(pte_t pteval);
>>>>>     #endif
>>>>>     +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>>> +
>>>>>     void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>>>                   pte_t *ptep, pte_t pteval, unsigned int nr);
>>>>>     #define set_ptes set_ptes
>>>>> diff --git a/arch/arm64/include/asm/pgtable.h
>>>>> b/arch/arm64/include/asm/pgtable.h
>>>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte,
>>>>> unsigned int nr_pages)
>>>>>             mte_sync_tags(pte, nr_pages);
>>>>>     }
>>>>>     +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>>
>>>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>>>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>>>> kept contigously, so if you happen to be setting a mapping for which the
>>>> physical memory block straddles bit 48, this won't work.
>>>
>>> Right, as soon as the PTE bits are not contiguous, this stops working,
>>> just like set_ptes() would, which I used as orientation.
>>>
>>>>
>>>> Today, only the 64K base page config can support 52 bits, and for this,
>>>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>>>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>>>> Fortunately we already have helpers in arm64 to abstract this.
>>>>
>>>> So I think arm64 will want to define its own pte_next_pfn():
>>>>
>>>> #define pte_next_pfn pte_next_pfn
>>>> static inline pte_t pte_next_pfn(pte_t pte)
>>>> {
>>>>      return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>>> }
>>>>
>>
>> Digging into the details, on arm64 we have:
>>
>> #define pte_pfn(pte)           (__pte_to_phys(pte) >> PAGE_SHIFT)
>>
>> and
>>
>> #define __pte_to_phys(pte)     (pte_val(pte) & PTE_ADDR_MASK)
>>
>> But that implies, that upstream the PFN is always contiguous, no?
>>
> 
> 
> But __pte_to_phys() and __phys_to_pte_val() depend on a Kconfig. If PA bits is
> 52, the bits are not all contiguous:
> 
> #ifdef CONFIG_ARM64_PA_BITS_52
> static inline phys_addr_t __pte_to_phys(pte_t pte)
> {
> 	return (pte_val(pte) & PTE_ADDR_LOW) |
> 		((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
> }
> static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
> {
> 	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
> }
> #else
> #define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
> #define __phys_to_pte_val(phys)	(phys)
> #endif
> 

Ah, how could I have missed that? Agreed, set_ptes() and this patch are
broken.

Do you want to send a patch to implement pte_next_pfn() on arm64, and 
then use pte_next_pfn() in set_ptes()? Then I can drop this patch here 
completely from this series.
Ryan Roberts Jan. 23, 2024, 11:38 a.m. UTC | #10
On 23/01/2024 11:31, David Hildenbrand wrote:
>>>
>>>> If high bits are used for
>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>> detect "belongs to this folio batch" either way.
>>>
>>> Exactly.
>>>
>>>>
>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>
>>> I don't see the need for ppc to implement pte_next_pfn().
>>
>> Agreed.
> 
> So likely we should then do on top for powerpc (whitespace damage):
> 
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index a04ae4449a025..549a440ed7f65 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep,
>                         break;
>                 ptep++;
>                 addr += PAGE_SIZE;
> -               /*
> -                * increment the pfn.
> -                */
> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
> +               pte = pte_next_pfn(pte);
>         }
>  }

Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
increment to this more complex approach, but the log doesn't say why.

>  
> 
>
David Hildenbrand Jan. 23, 2024, 11:40 a.m. UTC | #11
On 23.01.24 12:38, Ryan Roberts wrote:
> On 23/01/2024 11:31, David Hildenbrand wrote:
>>>>
>>>>> If high bits are used for
>>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>>> detect "belongs to this folio batch" either way.
>>>>
>>>> Exactly.
>>>>
>>>>>
>>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>>
>>>> I don't see the need for ppc to implement pte_next_pfn().
>>>
>>> Agreed.
>>
>> So likely we should then do on top for powerpc (whitespace damage):
>>
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index a04ae4449a025..549a440ed7f65 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
>> pte_t *ptep,
>>                          break;
>>                  ptep++;
>>                  addr += PAGE_SIZE;
>> -               /*
>> -                * increment the pfn.
>> -                */
>> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>> +               pte = pte_next_pfn(pte);
>>          }
>>   }
> 
> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
> increment to this more complex approach, but the log doesn't say why.

@Aneesh, was that change on purpose?
Ryan Roberts Jan. 23, 2024, 11:44 a.m. UTC | #12
On 23/01/2024 11:33, David Hildenbrand wrote:
> On 23.01.24 12:17, Ryan Roberts wrote:
>> On 23/01/2024 11:02, David Hildenbrand wrote:
>>> On 23.01.24 11:48, David Hildenbrand wrote:
>>>> On 23.01.24 11:34, Ryan Roberts wrote:
>>>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>>>>> simpliy define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>>>>
>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>>> ---
>>>>>>     arch/arm/include/asm/pgtable.h   | 2 ++
>>>>>>     arch/arm64/include/asm/pgtable.h | 2 ++
>>>>>>     2 files changed, 4 insertions(+)
>>>>>>
>>>>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>>>>> index d657b84b6bf70..be91e376df79e 100644
>>>>>> --- a/arch/arm/include/asm/pgtable.h
>>>>>> +++ b/arch/arm/include/asm/pgtable.h
>>>>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>>>>     extern void __sync_icache_dcache(pte_t pteval);
>>>>>>     #endif
>>>>>>     +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>>>> +
>>>>>>     void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>>>>                   pte_t *ptep, pte_t pteval, unsigned int nr);
>>>>>>     #define set_ptes set_ptes
>>>>>> diff --git a/arch/arm64/include/asm/pgtable.h
>>>>>> b/arch/arm64/include/asm/pgtable.h
>>>>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte,
>>>>>> unsigned int nr_pages)
>>>>>>             mte_sync_tags(pte, nr_pages);
>>>>>>     }
>>>>>>     +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>>>
>>>>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>>>>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are
>>>>> not
>>>>> kept contigously, so if you happen to be setting a mapping for which the
>>>>> physical memory block straddles bit 48, this won't work.
>>>>
>>>> Right, as soon as the PTE bits are not contiguous, this stops working,
>>>> just like set_ptes() would, which I used as orientation.
>>>>
>>>>>
>>>>> Today, only the 64K base page config can support 52 bits, and for this,
>>>>> OA[51:48] are stored in PTE[15:12]. But 52 bits for 4K and 16K base pages is
>>>>> coming (hopefully v6.9) and in this case OA[51:50] are stored in PTE[9:8].
>>>>> Fortunately we already have helpers in arm64 to abstract this.
>>>>>
>>>>> So I think arm64 will want to define its own pte_next_pfn():
>>>>>
>>>>> #define pte_next_pfn pte_next_pfn
>>>>> static inline pte_t pte_next_pfn(pte_t pte)
>>>>> {
>>>>>      return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>>>> }
>>>>>
>>>
>>> Digging into the details, on arm64 we have:
>>>
>>> #define pte_pfn(pte)           (__pte_to_phys(pte) >> PAGE_SHIFT)
>>>
>>> and
>>>
>>> #define __pte_to_phys(pte)     (pte_val(pte) & PTE_ADDR_MASK)
>>>
>>> But that implies, that upstream the PFN is always contiguous, no?
>>>
>>
>>
>> But __pte_to_phys() and __phys_to_pte_val() depend on a Kconfig. If PA bits is
>> 52, the bits are not all contiguous:
>>
>> #ifdef CONFIG_ARM64_PA_BITS_52
>> static inline phys_addr_t __pte_to_phys(pte_t pte)
>> {
>>     return (pte_val(pte) & PTE_ADDR_LOW) |
>>         ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
>> }
>> static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>> {
>>     return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
>> }
>> #else
>> #define __pte_to_phys(pte)    (pte_val(pte) & PTE_ADDR_MASK)
>> #define __phys_to_pte_val(phys)    (phys)
>> #endif
>>
> 
> Ah, how could I've missed that. Agreed, set_ptes() and this patch are broken.
> 
> Do you want to send a patch to implement pte_next_pfn() on arm64, and then use
> pte_next_pfn() in set_ptes()? Then I can drop this patch here completely from
> this series.

Yes, good idea. I probably won't get around to it until tomorrow.
Christophe Leroy Jan. 23, 2024, 11:48 a.m. UTC | #13
On 23/01/2024 at 12:38, Ryan Roberts wrote:
> On 23/01/2024 11:31, David Hildenbrand wrote:
>>>>
>>>>> If high bits are used for
>>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>>> detect "belongs to this folio batch" either way.
>>>>
>>>> Exactly.
>>>>
>>>>>
>>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>>
>>>> I don't see the need for ppc to implement pte_next_pfn().
>>>
>>> Agreed.
>>
>> So likely we should then do on top for powerpc (whitespace damage):
>>
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index a04ae4449a025..549a440ed7f65 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
>> pte_t *ptep,
>>                          break;
>>                  ptep++;
>>                  addr += PAGE_SIZE;
>> -               /*
>> -                * increment the pfn.
>> -                */
>> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>> +               pte = pte_next_pfn(pte);
>>          }
>>   }
> 
> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
> increment to this more complex approach, but the log doesn't say why.

Right. There was a discussion about it without any conclusion: 
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20231024143604.16749-1-aneesh.kumar@linux.ibm.com/

As far as I understand, the simple increment is better on ppc/32 but worse
on ppc/64.

Christophe
David Hildenbrand Jan. 23, 2024, 11:53 a.m. UTC | #14
On 23.01.24 12:48, Christophe Leroy wrote:
> 
> 
> Le 23/01/2024 à 12:38, Ryan Roberts a écrit :
>> On 23/01/2024 11:31, David Hildenbrand wrote:
>>>>>
>>>>>> If high bits are used for
>>>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>>>> detect "belongs to this folio batch" either way.
>>>>>
>>>>> Exactly.
>>>>>
>>>>>>
>>>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>>>
>>>>> I don't see the need for ppc to implement pte_next_pfn().
>>>>
>>>> Agreed.
>>>
>>> So likely we should then do on top for powerpc (whitespace damage):
>>>
>>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>>> index a04ae4449a025..549a440ed7f65 100644
>>> --- a/arch/powerpc/mm/pgtable.c
>>> +++ b/arch/powerpc/mm/pgtable.c
>>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
>>> pte_t *ptep,
>>>                           break;
>>>                   ptep++;
>>>                   addr += PAGE_SIZE;
>>> -               /*
>>> -                * increment the pfn.
>>> -                */
>>> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>>> +               pte = pte_next_pfn(pte);
>>>           }
>>>    }
>>
>> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
>> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
>> increment to this more complex approach, but the log doesn't say why.
> 
> Right. There was a discussion about it without any conclusion:
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20231024143604.16749-1-aneesh.kumar@linux.ibm.com/
> 
> As far as understand the simple increment is better on ppc/32 but worse
> in ppc/64.

Sounds like we're micro-optimizing for the output of a specific compiler
version. Hurray.
Matthew Wilcox Jan. 23, 2024, 3:01 p.m. UTC | #15
On Tue, Jan 23, 2024 at 10:34:21AM +0000, Ryan Roberts wrote:
> > +#define PFN_PTE_SHIFT		PAGE_SHIFT
> 
> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
> kept contigously, so if you happen to be setting a mapping for which the
> physical memory block straddles bit 48, this won't work.

I'd like to see the folio allocation that can straddle bit 48 ...

Agreed, it's not workable _in general_, but specifically for a memory
allocation from a power-of-two allocator, you'd have to do a 49-bit
allocation (half a petabyte) to care.
Ryan Roberts Jan. 23, 2024, 3:22 p.m. UTC | #16
On 23/01/2024 15:01, Matthew Wilcox wrote:
> On Tue, Jan 23, 2024 at 10:34:21AM +0000, Ryan Roberts wrote:
>>> +#define PFN_PTE_SHIFT		PAGE_SHIFT
>>
>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>> works fine for 48-bit output address, but for 52-bit OAs, the high bits are not
>> kept contigously, so if you happen to be setting a mapping for which the
>> physical memory block straddles bit 48, this won't work.
> 
> I'd like to see the folio allocation that can straddle bit 48 ...
> 
> agreed, it's not workable _in general_, but specifically for a memory
> allocation from a power-of-two allocator, you'd have to do a 49-bit
> allocation (half a petabyte) to care.

Hmm, good point. So it's a hypothetical bug, not an actual bug. Personally, I'm
still inclined to "fix" it, although it's going to cost a few more instructions.
Shout if you disagree.
Aneesh Kumar K.V Jan. 24, 2024, 5:45 a.m. UTC | #17
David Hildenbrand <david@redhat.com> writes:

> On 23.01.24 12:38, Ryan Roberts wrote:
>> On 23/01/2024 11:31, David Hildenbrand wrote:
>>>>>
>>>>>> If high bits are used for
>>>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>>>> detect "belongs to this folio batch" either way.
>>>>>
>>>>> Exactly.
>>>>>
>>>>>>
>>>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>>>
>>>>> I don't see the need for ppc to implement pte_next_pfn().
>>>>
>>>> Agreed.
>>>
>>> So likely we should then do on top for powerpc (whitespace damage):
>>>
>>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>>> index a04ae4449a025..549a440ed7f65 100644
>>> --- a/arch/powerpc/mm/pgtable.c
>>> +++ b/arch/powerpc/mm/pgtable.c
>>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
>>> pte_t *ptep,
>>>                          break;
>>>                  ptep++;
>>>                  addr += PAGE_SIZE;
>>> -               /*
>>> -                * increment the pfn.
>>> -                */
>>> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
>>> +               pte = pte_next_pfn(pte);
>>>          }
>>>   }
>> 
>> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling
>> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple
>> increment to this more complex approach, but the log doesn't say why.
>
> @Aneesh, was that change on purpose?
>

Because we had a bug with the patch that introduced the change, and that
line was confusing. The right thing would have been to add
pte_next_pfn() to make it clear. It was confusing because not all pte
formats have the pfn at the PAGE_SHIFT offset (even though we did use the
correct PTE_RPN_SHIFT in this specific case). To make it simpler I ended
up switching that line to pte_pfn(pte) + 1.

-aneesh
Aneesh Kumar K.V Jan. 24, 2024, 5:46 a.m. UTC | #18
David Hildenbrand <david@redhat.com> writes:

>>>
>>>> If high bits are used for
>>>> something else, then we might produce a garbage PTE on overflow, but that
>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, we'd not
>>>> detect "belongs to this folio batch" either way.
>>>
>>> Exactly.
>>>
>>>>
>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on ppc, I just
>>>> hope that we don't lose any other arbitrary PTE bits by doing the pte_pgprot().
>>>
>>> I don't see the need for ppc to implement pte_next_pfn().
>> 
>> Agreed.
>
> So likely we should then do on top for powerpc (whitespace damage):
>
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index a04ae4449a025..549a440ed7f65 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>                          break;
>                  ptep++;
>                  addr += PAGE_SIZE;
> -               /*
> -                * increment the pfn.
> -                */
> -               pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
> +               pte = pte_next_pfn(pte);
>          }
>   }

Agreed.

-aneesh

Patch

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index d657b84b6bf70..be91e376df79e 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -209,6 +209,8 @@  static inline void __sync_icache_dcache(pte_t pteval)
 extern void __sync_icache_dcache(pte_t pteval);
 #endif
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
+
 void set_ptes(struct mm_struct *mm, unsigned long addr,
 		      pte_t *ptep, pte_t pteval, unsigned int nr);
 #define set_ptes set_ptes
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751c..d4b3bd96e3304 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -341,6 +341,8 @@  static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
 		mte_sync_tags(pte, nr_pages);
 }
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
+
 static inline void set_ptes(struct mm_struct *mm,
 			    unsigned long __always_unused addr,
 			    pte_t *ptep, pte_t pte, unsigned int nr)