Message ID | 20240215103205.2607016-17-ryan.roberts@arm.com (mailing list archive)
---|---
State | New, archived
Series | Transparent Contiguous PTEs for User Mappings
On Thu, Feb 15, 2024 at 10:32:03AM +0000, Ryan Roberts wrote:
> When core code iterates over a range of ptes and calls ptep_get() for
> each of them, if the range happens to cover contpte mappings, the number
> of pte reads becomes amplified by a factor of the number of PTEs in a
> contpte block. This is because for each call to ptep_get(), the
> implementation must read all of the ptes in the contpte block to which
> it belongs to gather the access and dirty bits.
>
> This causes a hotspot for fork(), as well as operations that unmap
> memory such as munmap(), exit and madvise(MADV_DONTNEED). Fortunately we
> can fix this by implementing pte_batch_hint() which allows their
> iterators to skip getting the contpte tail ptes when gathering the batch
> of ptes to operate on. This results in the number of PTE reads returning
> to 1 per pte.
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Tested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
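For illustration, here is a minimal sketch of the kind of loop the hint enables for the core-mm iterators mentioned above. The names walk_ptes and process_batch are hypothetical, and the loop structure is simplified; it is not the actual core-mm batching code, only a picture of how a caller can consume the hint:

static void walk_ptes(pte_t *ptep, unsigned long addr, unsigned long end)
{
	while (addr < end) {
		/* One ptep_get() per batch instead of one per pte. */
		pte_t pte = ptep_get(ptep);

		/*
		 * Ask the architecture how many consecutive entries can
		 * be assumed to belong to the same (contpte) batch.
		 * Non-contpte mappings simply return 1.
		 */
		unsigned int nr = pte_batch_hint(ptep, pte);

		process_batch(addr, pte, nr);	/* hypothetical helper */

		ptep += nr;
		addr += nr * PAGE_SIZE;
	}
}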
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index a8f1a35e3086..d759a20d2929 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1213,6 +1213,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 	__contpte_try_unfold(mm, addr, ptep, pte);
 }
 
+#define pte_batch_hint pte_batch_hint
+static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
+{
+	if (!pte_valid_cont(pte))
+		return 1;
+
+	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
+}
+
 /*
  * The below functions constitute the public API that arm64 presents to the
  * core-mm to manipulate PTE entries within their page tables (or at least this
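The return expression relies on page table entries being 8 bytes (hence the >> 3) and CONT_PTES being a power of two, so the mask extracts the entry's offset within its contpte block. Below is a standalone userspace sketch of the same arithmetic, assuming CONT_PTES == 16 (the arm64 value with 4K pages); it is a demonstration of the index math only, not kernel code:

/* Compile with: cc -Wall demo.c && ./a.out */
#include <stdio.h>
#include <stdint.h>

#define CONT_PTES 16	/* assumption: 16 PTEs per contiguous block */

static unsigned int batch_hint(uintptr_t ptep)
{
	/*
	 * (ptep >> 3) is the entry index within the page table
	 * (8 bytes per entry); masking with (CONT_PTES - 1) gives
	 * the offset inside the contpte block. The hint is the
	 * number of entries from here to the end of the block.
	 */
	return CONT_PTES - ((ptep >> 3) & (CONT_PTES - 1));
}

int main(void)
{
	uintptr_t base = 0x1000;	/* arbitrary block-aligned address */

	for (int i = 0; i < 4; i++) {
		uintptr_t ptep = base + 8 * i;

		printf("ptep=%#lx -> hint %u\n",
		       (unsigned long)ptep, batch_hint(ptep));
	}
	/*
	 * Prints hints 16, 15, 14, 13: a walk starting mid-block only
	 * covers the remaining entries, so a batch never crosses a
	 * contpte block boundary.
	 */
	return 0;
}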