
riscv: mm: Implement pmdp_collapse_flush for THP

Message ID 20230125125512.2494577-1-mchitale@ventanamicro.com (mailing list archive)
State Superseded
Delegated to: Palmer Dabbelt
Series riscv: mm: Implement pmdp_collapse_flush for THP

Checks

Context Check Description
conchuod/cover_letter success Single patches do not need cover letters
conchuod/tree_selection success Guessed tree name to be fixes
conchuod/fixes_present success Fixes tag present in non-next series
conchuod/maintainers_pattern success MAINTAINERS pattern errors before the patch: 13 and now 13
conchuod/verify_signedoff success Signed-off-by tag matches author and committer
conchuod/kdoc success Errors and warnings before: 0 this patch: 0
conchuod/module_param success Was 0 now: 0
conchuod/build_rv64_gcc_allmodconfig success Errors and warnings before: 2014 this patch: 2014
conchuod/alphanumeric_selects success Out of order selects before the patch: 57 and now 57
conchuod/build_rv32_defconfig success Build OK
conchuod/dtb_warn_rv64 success Errors and warnings before: 2 this patch: 2
conchuod/header_inline success No static functions without inline keyword in header files
conchuod/checkpatch success total: 0 errors, 0 warnings, 0 checks, 30 lines checked
conchuod/source_inline success Was 0 now: 0
conchuod/build_rv64_nommu_k210_defconfig success Build OK
conchuod/verify_fixes success Fixes tag looks correct
conchuod/build_rv64_nommu_virt_defconfig success Build OK

Commit Message

Mayuresh Chitale Jan. 25, 2023, 12:55 p.m. UTC
When THP is enabled, 4K pages are collapsed into a single huge
page using the generic pmdp_collapse_flush() which will further
use flush_tlb_range() to shoot-down stale TLB entries. Unfortunately,
the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
using address specific SFENCEs which results in repetitive (or
unpredictable) page faults on RISC-V implementations which cache
non-leaf PTEs.

Provide a RISC-V specific pmdp_collapse_flush() which ensures both
cached leaf and non-leaf PTEs are invalidated by using non-address
specific SFENCEs as recommended by the RISC-V privileged specification.

Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
---
 arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
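
Background for the "address specific" vs "non-address specific" SFENCE distinction drawn above: an SFENCE.VMA with a virtual address in rs1 only needs to invalidate cached leaf translations for that address, whereas the rs1=x0 form also covers cached non-leaf PTEs. A minimal sketch of the corresponding local flush helpers, loosely based on arch/riscv/include/asm/tlbflush.h (simplified, not verbatim kernel source):

/* Full local flush: sfence.vma with rs1 = x0, rs2 = x0 */
static inline void local_flush_tlb_all(void)
{
	__asm__ __volatile__ ("sfence.vma" : : : "memory");
}

/* Address-specific local flush: sfence.vma with rs1 = addr */
static inline void local_flush_tlb_page(unsigned long addr)
{
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}

The range-based flush_tlb_range() used by the generic code relies on the address-specific form, which matches the behaviour the commit message describes.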

Comments

Andrew Jones Jan. 25, 2023, 3:35 p.m. UTC | #1
On Wed, Jan 25, 2023 at 06:25:11PM +0530, Mayuresh Chitale wrote:
> When THP is enabled, 4K pages are collapsed into a single huge
> page using the generic pmdp_collapse_flush() which will further
> use flush_tlb_range() to shoot-down stale TLB entries. Unfortunately,
> the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
> using address specific SFENCEs which results in repetitive (or
> unpredictable) page faults on RISC-V implementations which cache
> non-leaf PTEs.
> 
> Provide a RISC-V specific pmdp_collapse_flush() which ensures both
> cached leaf and non-leaf PTEs are invalidated by using non-address
> specific SFENCEs as recommended by the RISC-V privileged specification.
> 
> Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> ---
>  arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 4eba9a98d0e3..6d948dec6020 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>  	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
>  }
> +
> +#define pmdp_collapse_flush pmdp_collapse_flush
> +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +					unsigned long address, pmd_t *pmdp)
> +{
> +	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

The generic version of this function has a couple sanity checks for
kernels compiled with CONFIG_DEBUG_VM. Shouldn't we duplicate those?

Thanks,
drew

> +
> +	/*
> +	 * When leaf PTE enteries (regular pages) are collapsed into a leaf
> +	 * PMD entry (huge page), a valid non-leaf PTE is converted into a
> +	 * valid leaf PTE at the level 1 page table. The RISC-V privileged v1.12
> +	 * specification allows implementations to cache valid non-leaf PTEs,
> +	 * but the section "4.2.1 Supervisor Memory-Management Fence
> +	 * Instruction" recommends the following:
> +	 * "If software modifies a non-leaf PTE, it should execute SFENCE.VMA
> +	 * with rs1=x0. If any PTE along the traversal path had its G bit set,
> +	 * rs2 must be x0; otherwise, rs2 should be set to the ASID for which
> +	 * the translation is being modified."
> +	 * Based on the above recommendation, we should do full flush whenever
> +	 * leaf PTE entries are collapsed into a leaf PMD entry.
> +	 */
> +	flush_tlb_mm(vma->vm_mm);
> +	return pmd;
> +}
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
>  /*
> -- 
> 2.34.1
>
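(For context, the generic fallback under discussion lives in mm/pgtable-generic.c and looks roughly like the sketch below; this is simplified from memory, not verbatim kernel source. The VM_BUG_ON() checks are the CONFIG_DEBUG_VM sanity checks drew refers to, and flush_tlb_range() is the address-specific flush the commit message describes.)

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	pmd_t pmd;

	/* Sanity checks; effectively no-ops unless CONFIG_DEBUG_VM is set */
	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
	VM_BUG_ON(pmd_trans_huge(*pmdp));

	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
	/* Shoots down the stale 4K leaf translations for the collapsed range */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}
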
Alexandre Ghiti Jan. 26, 2023, 3:33 p.m. UTC | #2
Hi Mayuresh,

On 1/25/23 13:55, Mayuresh Chitale wrote:
> When THP is enabled, 4K pages are collapsed into a single huge
> page using the generic pmdp_collapse_flush() which will further
> use flush_tlb_range() to shoot-down stale TLB entries. Unfortunately,
> the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
> using address specific SFENCEs which results in repetitive (or
> unpredictable) page faults on RISC-V implementations which cache
> non-leaf PTEs.


That's interesting! I'm wondering if the same issue will happen if a 
user maps 4K, unmaps it and at the same address maps a 2MB hugepage: I'm 
not sure the mm code would correctly flush the non-leaf PTE when 
unmapping the 4KB page. In that case, your patch only fixes the THP 
use case and maybe we should try to catch this non-leaf -> leaf upgrade 
at some lower level page table functions, what do you think?

Alex


> Provide a RISC-V specific pmdp_collapse_flush() which ensures both
> cached leaf and non-leaf PTEs are invalidated by using non-address
> specific SFENCEs as recommended by the RISC-V privileged specification.
>
> Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> ---
>   arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
>   1 file changed, 24 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 4eba9a98d0e3..6d948dec6020 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>   	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>   	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
>   }
> +
> +#define pmdp_collapse_flush pmdp_collapse_flush
> +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +					unsigned long address, pmd_t *pmdp)
> +{
> +	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
> +
> +	/*
> +	 * When leaf PTE enteries (regular pages) are collapsed into a leaf
> +	 * PMD entry (huge page), a valid non-leaf PTE is converted into a
> +	 * valid leaf PTE at the level 1 page table. The RISC-V privileged v1.12
> +	 * specification allows implementations to cache valid non-leaf PTEs,
> +	 * but the section "4.2.1 Supervisor Memory-Management Fence
> +	 * Instruction" recommends the following:
> +	 * "If software modifies a non-leaf PTE, it should execute SFENCE.VMA
> +	 * with rs1=x0. If any PTE along the traversal path had its G bit set,
> +	 * rs2 must be x0; otherwise, rs2 should be set to the ASID for which
> +	 * the translation is being modified."
> +	 * Based on the above recommendation, we should do full flush whenever
> +	 * leaf PTE entries are collapsed into a leaf PMD entry.
> +	 */
> +	flush_tlb_mm(vma->vm_mm);
> +	return pmd;
> +}
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
>   /*
Anup Patel Jan. 26, 2023, 6:14 p.m. UTC | #3
On Thu, Jan 26, 2023 at 9:03 PM Alexandre Ghiti <alex@ghiti.fr> wrote:
>
> Hi Mayuresh,
>
> On 1/25/23 13:55, Mayuresh Chitale wrote:
> > When THP is enabled, 4K pages are collapsed into a single huge
> > page using the generic pmdp_collapse_flush() which will further
> > use flush_tlb_range() to shoot-down stale TLB entries. Unfortunately,
> > the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
> > using address specific SFENCEs which results in repetitive (or
> > unpredictable) page faults on RISC-V implementations which cache
> > non-leaf PTEs.
>
>
> That's interesting! I'm wondering if the same issue will happen if a
> user maps 4K, unmaps it and at the same address maps a 2MB hugepage: I'm
> not sure the mm code would correctly flush the non-leaf PTE when
> unmapping the 4KB page. In that case, your patch only fixes the THP
> usecase and maybe we should try to catch this non-leaf -> leaf upgrade
> at some lower level page table functions, what do you think?

This issue can happen whenever an existing/valid non-leaf PTE is modified.

We hit this issue in the THP case but we can also hit this issue in other
scenarios where the page table programming pattern is similar.

Regards,
Anup

>
> Alex
>
>
> > Provide a RISC-V specific pmdp_collapse_flush() which ensures both
> > cached leaf and non-leaf PTEs are invalidated by using non-address
> > specific SFENCEs as recommended by the RISC-V privileged specification.
> >
> > Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> > Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> > ---
> >   arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
> >   1 file changed, 24 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > index 4eba9a98d0e3..6d948dec6020 100644
> > --- a/arch/riscv/include/asm/pgtable.h
> > +++ b/arch/riscv/include/asm/pgtable.h
> > @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> >       page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
> >       return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
> >   }
> > +
> > +#define pmdp_collapse_flush pmdp_collapse_flush
> > +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> > +                                     unsigned long address, pmd_t *pmdp)
> > +{
> > +     pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
> > +
> > +     /*
> > +      * When leaf PTE enteries (regular pages) are collapsed into a leaf
> > +      * PMD entry (huge page), a valid non-leaf PTE is converted into a
> > +      * valid leaf PTE at the level 1 page table. The RISC-V privileged v1.12
> > +      * specification allows implementations to cache valid non-leaf PTEs,
> > +      * but the section "4.2.1 Supervisor Memory-Management Fence
> > +      * Instruction" recommends the following:
> > +      * "If software modifies a non-leaf PTE, it should execute SFENCE.VMA
> > +      * with rs1=x0. If any PTE along the traversal path had its G bit set,
> > +      * rs2 must be x0; otherwise, rs2 should be set to the ASID for which
> > +      * the translation is being modified."
> > +      * Based on the above recommendation, we should do full flush whenever
> > +      * leaf PTE entries are collapsed into a leaf PMD entry.
> > +      */
> > +     flush_tlb_mm(vma->vm_mm);
> > +     return pmd;
> > +}
> >   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >
> >   /*
Alexandre Ghiti Jan. 27, 2023, 8:41 a.m. UTC | #4
On 1/26/23 19:14, Anup Patel wrote:
> On Thu, Jan 26, 2023 at 9:03 PM Alexandre Ghiti <alex@ghiti.fr> wrote:
>> Hi Mayuresh,
>>
>> On 1/25/23 13:55, Mayuresh Chitale wrote:
>>> When THP is enabled, 4K pages are collapsed into a single huge
>>> page using the generic pmdp_collapse_flush() which will further
>>> use flush_tlb_range() to shoot-down stale TLB entries. Unfortunately,
>>> the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
>>> using address specific SFENCEs which results in repetitive (or
>>> unpredictable) page faults on RISC-V implementations which cache
>>> non-leaf PTEs.
>>
>> That's interesting! I'm wondering if the same issue will happen if a
>> user maps 4K, unmaps it and at the same address maps a 2MB hugepage: I'm
>> not sure the mm code would correctly flush the non-leaf PTE when
>> unmapping the 4KB page. In that case, your patch only fixes the THP
>> usecase and maybe we should try to catch this non-leaf -> leaf upgrade
>> at some lower level page table functions, what do you think?
> This issue can happen whenever existing/valid non-leaf PTE is modified.
>
> We hit this issue in the THP case but we can also hit this issue in other
> scenarios where the page table programming pattern is similar.


Then what about trying to get all those cases at once? We can easily 
catch those spurious page faults as the pte would be valid: we would 
just have to flush_tlb_mm at the first page fault, if any.


>
> Regards,
> Anup
>
>> Alex
>>
>>
>>> Provide a RISC-V specific pmdp_collapse_flush() which ensures both
>>> cached leaf and non-leaf PTEs are invalidated by using non-address
>>> specific SFENCEs as recommended by the RISC-V privileged specification.
>>>
>>> Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
>>> Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
>>> ---
>>>    arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
>>>    1 file changed, 24 insertions(+)
>>>
>>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>> index 4eba9a98d0e3..6d948dec6020 100644
>>> --- a/arch/riscv/include/asm/pgtable.h
>>> +++ b/arch/riscv/include/asm/pgtable.h
>>> @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>>>        page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>>>        return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
>>>    }
>>> +
>>> +#define pmdp_collapse_flush pmdp_collapse_flush
>>> +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
>>> +                                     unsigned long address, pmd_t *pmdp)
>>> +{
>>> +     pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
>>> +
>>> +     /*
>>> +      * When leaf PTE enteries (regular pages) are collapsed into a leaf
>>> +      * PMD entry (huge page), a valid non-leaf PTE is converted into a
>>> +      * valid leaf PTE at the level 1 page table. The RISC-V privileged v1.12
>>> +      * specification allows implementations to cache valid non-leaf PTEs,
>>> +      * but the section "4.2.1 Supervisor Memory-Management Fence
>>> +      * Instruction" recommends the following:
>>> +      * "If software modifies a non-leaf PTE, it should execute SFENCE.VMA
>>> +      * with rs1=x0. If any PTE along the traversal path had its G bit set,
>>> +      * rs2 must be x0; otherwise, rs2 should be set to the ASID for which
>>> +      * the translation is being modified."
>>> +      * Based on the above recommendation, we should do full flush whenever
>>> +      * leaf PTE entries are collapsed into a leaf PMD entry.
>>> +      */
>>> +     flush_tlb_mm(vma->vm_mm);
>>> +     return pmd;
>>> +}
>>>    #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>>
>>>    /*
Mayuresh Chitale Jan. 30, 2023, 7:26 a.m. UTC | #5
On Wed, 2023-01-25 at 16:35 +0100, Andrew Jones wrote:
> On Wed, Jan 25, 2023 at 06:25:11PM +0530, Mayuresh Chitale wrote:
> > When THP is enabled, 4K pages are collapsed into a single huge
> > page using the generic pmdp_collapse_flush() which will further
> > use flush_tlb_range() to shoot-down stale TLB entries.
> > Unfortunately,
> > the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
> > using address specific SFENCEs which results in repetitive (or
> > unpredictable) page faults on RISC-V implementations which cache
> > non-leaf PTEs.
> > 
> > Provide a RISC-V specific pmdp_collapse_flush() which ensures both
> > cached leaf and non-leaf PTEs are invalidated by using non-address
> > specific SFENCEs as recommended by the RISC-V privileged
> > specification.
> > 
> > Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> > Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/arch/riscv/include/asm/pgtable.h
> > b/arch/riscv/include/asm/pgtable.h
> > index 4eba9a98d0e3..6d948dec6020 100644
> > --- a/arch/riscv/include/asm/pgtable.h
> > +++ b/arch/riscv/include/asm/pgtable.h
> > @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct
> > vm_area_struct *vma,
> >  	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
> >  	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp,
> > pmd_val(pmd)));
> >  }
> > +
> > +#define pmdp_collapse_flush pmdp_collapse_flush
> > +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct
> > *vma,
> > +					unsigned long address, pmd_t
> > *pmdp)
> > +{
> > +	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
> 
> The generic version of this function has a couple sanity checks for
> kernels compiled with CONFIG_DEBUG_VM. Shouldn't we duplicate those?

Ok. I will add those checks too.
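
A hypothetical sketch of how the helper could look with those checks folded in (illustration only, not the posted v2):

#define pmdp_collapse_flush pmdp_collapse_flush
static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmdp)
{
	pmd_t pmd;

	/* Same CONFIG_DEBUG_VM sanity checks as the generic version */
	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
	VM_BUG_ON(pmd_trans_huge(*pmdp));

	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
	/* Non-address-specific flush so cached non-leaf PTEs are dropped too */
	flush_tlb_mm(vma->vm_mm);
	return pmd;
}
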
> 
> Thanks,
> drew
> 
> > +
> > +	/*
> > +	 * When leaf PTE enteries (regular pages) are collapsed into a
> > leaf
> > +	 * PMD entry (huge page), a valid non-leaf PTE is converted
> > into a
> > +	 * valid leaf PTE at the level 1 page table. The RISC-V
> > privileged v1.12
> > +	 * specification allows implementations to cache valid non-leaf 
> > PTEs,
> > +	 * but the section "4.2.1 Supervisor Memory-Management Fence
> > +	 * Instruction" recommends the following:
> > +	 * "If software modifies a non-leaf PTE, it should execute
> > SFENCE.VMA
> > +	 * with rs1=x0. If any PTE along the traversal path had its G
> > bit set,
> > +	 * rs2 must be x0; otherwise, rs2 should be set to the ASID for
> > which
> > +	 * the translation is being modified."
> > +	 * Based on the above recommendation, we should do full flush
> > whenever
> > +	 * leaf PTE entries are collapsed into a leaf PMD entry.
> > +	 */
> > +	flush_tlb_mm(vma->vm_mm);
> > +	return pmd;
> > +}
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >  
> >  /*
> > -- 
> > 2.34.1
> > 
> 
Mayuresh Chitale Jan. 30, 2023, 7:29 a.m. UTC | #6
On Thu, 2023-01-26 at 16:33 +0100, Alexandre Ghiti wrote:
> Hi Mayuresh,
> 
> On 1/25/23 13:55, Mayuresh Chitale wrote:
> > When THP is enabled, 4K pages are collapsed into a single huge
> > page using the generic pmdp_collapse_flush() which will further
> > use flush_tlb_range() to shoot-down stale TLB entries.
> > Unfortunately,
> > the generic pmdp_collapse_flush() only invalidates cached leaf PTEs
> > using address specific SFENCEs which results in repetitive (or
> > unpredictable) page faults on RISC-V implementations which cache
> > non-leaf PTEs.
> 
> That's interesting! I'm wondering if the same issue will happen if a 
> user maps 4K, unmaps it and at the same address maps a 2MB hugepage:
> I'm 
> not sure the mm code would correctly flush the non-leaf PTE when 
> unmapping the 4KB page. In that case, your patch only fixes the THP 
> usecase and maybe we should try to catch this non-leaf -> leaf
> upgrade 
> at some lower level page table functions, what do you think?

I will look into it but I don't know how to reproduce the issue without
the THP use case. It would be great if you could share the test case or
test code to reproduce it.

> 
> Alex
> 
> 
> > Provide a RISC-V specific pmdp_collapse_flush() which ensures both
> > cached leaf and non-leaf PTEs are invalidated by using non-address
> > specific SFENCEs as recommended by the RISC-V privileged
> > specification.
> > 
> > Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> > Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> > ---
> >   arch/riscv/include/asm/pgtable.h | 24 ++++++++++++++++++++++++
> >   1 file changed, 24 insertions(+)
> > 
> > diff --git a/arch/riscv/include/asm/pgtable.h
> > b/arch/riscv/include/asm/pgtable.h
> > index 4eba9a98d0e3..6d948dec6020 100644
> > --- a/arch/riscv/include/asm/pgtable.h
> > +++ b/arch/riscv/include/asm/pgtable.h
> > @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct
> > vm_area_struct *vma,
> >   	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
> >   	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp,
> > pmd_val(pmd)));
> >   }
> > +
> > +#define pmdp_collapse_flush pmdp_collapse_flush
> > +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct
> > *vma,
> > +					unsigned long address, pmd_t
> > *pmdp)
> > +{
> > +	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
> > +
> > +	/*
> > +	 * When leaf PTE enteries (regular pages) are collapsed into a
> > leaf
> > +	 * PMD entry (huge page), a valid non-leaf PTE is converted
> > into a
> > +	 * valid leaf PTE at the level 1 page table. The RISC-V
> > privileged v1.12
> > +	 * specification allows implementations to cache valid non-leaf 
> > PTEs,
> > +	 * but the section "4.2.1 Supervisor Memory-Management Fence
> > +	 * Instruction" recommends the following:
> > +	 * "If software modifies a non-leaf PTE, it should execute
> > SFENCE.VMA
> > +	 * with rs1=x0. If any PTE along the traversal path had its G
> > bit set,
> > +	 * rs2 must be x0; otherwise, rs2 should be set to the ASID for
> > which
> > +	 * the translation is being modified."
> > +	 * Based on the above recommendation, we should do full flush
> > whenever
> > +	 * leaf PTE entries are collapsed into a leaf PMD entry.
> > +	 */
> > +	flush_tlb_mm(vma->vm_mm);
> > +	return pmd;
> > +}
> >   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >   
> >   /*
> 
Mayuresh Chitale Jan. 30, 2023, 7:34 a.m. UTC | #7
On Fri, 2023-01-27 at 09:41 +0100, Alexandre Ghiti wrote:
> On 1/26/23 19:14, Anup Patel wrote:
> > On Thu, Jan 26, 2023 at 9:03 PM Alexandre Ghiti <alex@ghiti.fr>
> > wrote:
> > > Hi Mayuresh,
> > > 
> > > On 1/25/23 13:55, Mayuresh Chitale wrote:
> > > > When THP is enabled, 4K pages are collapsed into a single huge
> > > > page using the generic pmdp_collapse_flush() which will further
> > > > use flush_tlb_range() to shoot-down stale TLB entries.
> > > > Unfortunately,
> > > > the generic pmdp_collapse_flush() only invalidates cached leaf
> > > > PTEs
> > > > using address specific SFENCEs which results in repetitive (or
> > > > unpredictable) page faults on RISC-V implementations which
> > > > cache
> > > > non-leaf PTEs.
> > > 
> > > That's interesting! I'm wondering if the same issue will happen
> > > if a
> > > user maps 4K, unmaps it and at the same address maps a 2MB
> > > hugepage: I'm
> > > not sure the mm code would correctly flush the non-leaf PTE when
> > > unmapping the 4KB page. In that case, your patch only fixes the
> > > THP
> > > usecase and maybe we should try to catch this non-leaf -> leaf
> > > upgrade
> > > at some lower level page table functions, what do you think?
> > This issue can happen whenever existing/valid non-leaf PTE is
> > modified.
> > 
> > We hit this issue in the THP case but we can also hit this issue in
> > other
> > scenarios where the page table programming pattern is similar.
> 
> Then what about trying to get all those cases at once? We can easily 
> catch those spurious page faults as the pte would be valid: we would 
> just have to flush_tlb_mm at the first page fault, if any.

IMO, we can consider fixing those cases in a separate patch.
> 
> 
> > Regards,
> > Anup
> > 
> > > Alex
> > > 
> > > 
> > > > Provide a RISC-V specific pmdp_collapse_flush() which ensures
> > > > both
> > > > cached leaf and non-leaf PTEs are invalidated by using non-
> > > > address
> > > > specific SFENCEs as recommended by the RISC-V privileged
> > > > specification.
> > > > 
> > > > Fixes: e88b333142e4 ("riscv: mm: add THP support on 64-bit")
> > > > Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
> > > > ---
> > > >    arch/riscv/include/asm/pgtable.h | 24
> > > > ++++++++++++++++++++++++
> > > >    1 file changed, 24 insertions(+)
> > > > 
> > > > diff --git a/arch/riscv/include/asm/pgtable.h
> > > > b/arch/riscv/include/asm/pgtable.h
> > > > index 4eba9a98d0e3..6d948dec6020 100644
> > > > --- a/arch/riscv/include/asm/pgtable.h
> > > > +++ b/arch/riscv/include/asm/pgtable.h
> > > > @@ -721,6 +721,30 @@ static inline pmd_t pmdp_establish(struct
> > > > vm_area_struct *vma,
> > > >        page_table_check_pmd_set(vma->vm_mm, address, pmdp,
> > > > pmd);
> > > >        return __pmd(atomic_long_xchg((atomic_long_t *)pmdp,
> > > > pmd_val(pmd)));
> > > >    }
> > > > +
> > > > +#define pmdp_collapse_flush pmdp_collapse_flush
> > > > +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct
> > > > *vma,
> > > > +                                     unsigned long address,
> > > > pmd_t *pmdp)
> > > > +{
> > > > +     pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address,
> > > > pmdp);
> > > > +
> > > > +     /*
> > > > +      * When leaf PTE enteries (regular pages) are collapsed
> > > > into a leaf
> > > > +      * PMD entry (huge page), a valid non-leaf PTE is
> > > > converted into a
> > > > +      * valid leaf PTE at the level 1 page table. The RISC-V
> > > > privileged v1.12
> > > > +      * specification allows implementations to cache valid
> > > > non-leaf PTEs,
> > > > +      * but the section "4.2.1 Supervisor Memory-Management
> > > > Fence
> > > > +      * Instruction" recommends the following:
> > > > +      * "If software modifies a non-leaf PTE, it should
> > > > execute SFENCE.VMA
> > > > +      * with rs1=x0. If any PTE along the traversal path had
> > > > its G bit set,
> > > > +      * rs2 must be x0; otherwise, rs2 should be set to the
> > > > ASID for which
> > > > +      * the translation is being modified."
> > > > +      * Based on the above recommendation, we should do full
> > > > flush whenever
> > > > +      * leaf PTE entries are collapsed into a leaf PMD entry.
> > > > +      */
> > > > +     flush_tlb_mm(vma->vm_mm);
> > > > +     return pmd;
> > > > +}
> > > >    #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > > > 
> > > >    /*
> 

Patch

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 4eba9a98d0e3..6d948dec6020 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -721,6 +721,30 @@  static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
 }
+
+#define pmdp_collapse_flush pmdp_collapse_flush
+static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+					unsigned long address, pmd_t *pmdp)
+{
+	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
+
+	/*
+	 * When leaf PTE entries (regular pages) are collapsed into a leaf
+	 * PMD entry (huge page), a valid non-leaf PTE is converted into a
+	 * valid leaf PTE at the level 1 page table. The RISC-V privileged v1.12
+	 * specification allows implementations to cache valid non-leaf PTEs,
+	 * but the section "4.2.1 Supervisor Memory-Management Fence
+	 * Instruction" recommends the following:
+	 * "If software modifies a non-leaf PTE, it should execute SFENCE.VMA
+	 * with rs1=x0. If any PTE along the traversal path had its G bit set,
+	 * rs2 must be x0; otherwise, rs2 should be set to the ASID for which
+	 * the translation is being modified."
+	 * Based on the above recommendation, we should do a full flush whenever
+	 * leaf PTE entries are collapsed into a leaf PMD entry.
+	 */
+	flush_tlb_mm(vma->vm_mm);
+	return pmd;
+}
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*