Message ID | 20250217140809.1702789-7-ryan.roberts@arm.com |
---|---|
State | New |
Series | Perf improvements for hugetlb and vmalloc on arm64 |
On Mon, Feb 17, 2025 at 02:07:58PM +0000, Ryan Roberts wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index e255a36380dc..e4b1946b261f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
>  	WRITE_ONCE(*ptep, pte);
>  }
> 
> -static inline void __set_pte(pte_t *ptep, pte_t pte)
> +static inline void __set_pte_complete(pte_t pte)
>  {
> -	__set_pte_nosync(ptep, pte);
> -
>  	/*
>  	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
>  	 * or update_mmu_cache() have the necessary barriers.

Unrelated to this patch but I just realised that this comment is stale,
we no longer do anything in update_mmu_cache() since commit 120798d2e7d1
("arm64: mm: remove dsb from update_mmu_cache"). If you respin, please
remove the update_mmu_cache() part as well.

Thanks.
On 22/02/2025 11:56, Catalin Marinas wrote:
> On Mon, Feb 17, 2025 at 02:07:58PM +0000, Ryan Roberts wrote:
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index e255a36380dc..e4b1946b261f 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
>>  	WRITE_ONCE(*ptep, pte);
>>  }
>> 
>> -static inline void __set_pte(pte_t *ptep, pte_t pte)
>> +static inline void __set_pte_complete(pte_t pte)
>>  {
>> -	__set_pte_nosync(ptep, pte);
>> -
>>  	/*
>>  	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
>>  	 * or update_mmu_cache() have the necessary barriers.
> 
> Unrelated to this patch but I just realised that this comment is stale,
> we no longer do anything in update_mmu_cache() since commit 120798d2e7d1
> ("arm64: mm: remove dsb from update_mmu_cache"). If you respin, please
> remove the update_mmu_cache() part as well.

Will do!

> 
> Thanks.
> 
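If respun as suggested, the stale comment would shrink to something like the following (a sketch only; the final wording is up to the respin):

	/*
	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
	 * has the necessary barriers.
	 */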
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e255a36380dc..e4b1946b261f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
 	WRITE_ONCE(*ptep, pte);
 }
 
-static inline void __set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte_complete(pte_t pte)
 {
-	__set_pte_nosync(ptep, pte);
-
 	/*
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
 	 * or update_mmu_cache() have the necessary barriers.
@@ -331,6 +329,12 @@ static inline void __set_pte(pte_t *ptep, pte_t pte)
 	}
 }
 
+static inline void __set_pte(pte_t *ptep, pte_t pte)
+{
+	__set_pte_nosync(ptep, pte);
+	__set_pte_complete(pte);
+}
+
 static inline pte_t __ptep_get(pte_t *ptep)
 {
 	return READ_ONCE(*ptep);
@@ -647,12 +651,14 @@ static inline void set_ptes_anysz(struct mm_struct *mm, pte_t *ptep, pte_t pte,
 
 	for (;;) {
 		__check_safe_pte_update(mm, ptep, pte);
-		__set_pte(ptep, pte);
+		__set_pte_nosync(ptep, pte);
 		if (--nr == 0)
 			break;
 		ptep++;
 		pte = pte_advance_pfn(pte, stride);
 	}
+
+	__set_pte_complete(pte);
 }
 
 static inline void __set_ptes(struct mm_struct *mm,
set_ptes_anysz() previously called __set_pte() for each PTE in the
range, which would conditionally issue a DSB and ISB to make the new PTE
value immediately visible to the table walker if the new PTE was valid
and for kernel space.

We can do better than this; let's hoist those barriers out of the loop
so that they are issued only once at the end of the loop, rather than
once per PTE. This reduces the barrier cost by a factor of the number of
PTEs in the range.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)
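For illustration, a condensed sketch of the loop shape before and after this patch (assuming the dsb(ishst)/isb() pair behind the existing arm64 __set_pte() barrier path; simplified, not the literal kernel source):

	/* Before: __set_pte() may issue dsb(ishst) + isb() for every PTE. */
	for (;;) {
		__check_safe_pte_update(mm, ptep, pte);
		__set_pte(ptep, pte);		/* WRITE_ONCE() + barriers */
		if (--nr == 0)
			break;
		ptep++;
		pte = pte_advance_pfn(pte, stride);
	}

	/* After: plain WRITE_ONCE() stores in the loop, one barrier pair at the end. */
	for (;;) {
		__check_safe_pte_update(mm, ptep, pte);
		__set_pte_nosync(ptep, pte);	/* WRITE_ONCE() only, no barriers */
		if (--nr == 0)
			break;
		ptep++;
		pte = pte_advance_pfn(pte, stride);
	}
	__set_pte_complete(pte);	/* one dsb(ishst) + isb(), if valid kernel PTE */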