| Message ID | 20240202080756.1453939-24-ryan.roberts@arm.com |
|---|---|
| State | New |
| Series | Transparent Contiguous PTEs for User Mappings |
On 02.02.24 09:07, Ryan Roberts wrote:
> When core code iterates over a range of ptes and calls ptep_get() for
> each of them, if the range happens to cover contpte mappings, the number
> of pte reads becomes amplified by a factor of the number of PTEs in a
> contpte block. This is because for each call to ptep_get(), the
> implementation must read all of the ptes in the contpte block to which
> it belongs to gather the access and dirty bits.
>
> This causes a hotspot for fork(), as well as operations that unmap
> memory such as munmap(), exit and madvise(MADV_DONTNEED). Fortunately we
> can fix this by implementing pte_batch_hint() which allows their
> iterators to skip getting the contpte tail ptes when gathering the batch
> of ptes to operate on. This results in the number of PTE reads returning
> to 1 per pte.
>
> Tested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  arch/arm64/include/asm/pgtable.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ad04adb7b87f..353ea67b5d75 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1220,6 +1220,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
>  	__contpte_try_unfold(mm, addr, ptep, pte);
>  }
>
> +#define pte_batch_hint pte_batch_hint
> +static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
> +{
> +	if (!pte_valid_cont(pte))
> +		return 1;
> +
> +	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
> +}
> +
>  /*
>   * The below functions constitute the public API that arm64 presents to the
>   * core-mm to manipulate PTE entries within their page tables (or at least this

Reviewed-by: David Hildenbrand <david@redhat.com>
On Fri, Feb 02, 2024 at 08:07:54AM +0000, Ryan Roberts wrote:
> When core code iterates over a range of ptes and calls ptep_get() for
> each of them, if the range happens to cover contpte mappings, the number
> of pte reads becomes amplified by a factor of the number of PTEs in a
> contpte block. This is because for each call to ptep_get(), the
> implementation must read all of the ptes in the contpte block to which
> it belongs to gather the access and dirty bits.
>
> This causes a hotspot for fork(), as well as operations that unmap
> memory such as munmap(), exit and madvise(MADV_DONTNEED). Fortunately we
> can fix this by implementing pte_batch_hint() which allows their
> iterators to skip getting the contpte tail ptes when gathering the batch
> of ptes to operate on. This results in the number of PTE reads returning
> to 1 per pte.
>
> Tested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/pgtable.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ad04adb7b87f..353ea67b5d75 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1220,6 +1220,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
>  	__contpte_try_unfold(mm, addr, ptep, pte);
>  }
>
> +#define pte_batch_hint pte_batch_hint
> +static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
> +{
> +	if (!pte_valid_cont(pte))
> +		return 1;
> +
> +	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
> +}
> +
>  /*
>   * The below functions constitute the public API that arm64 presents to the
>   * core-mm to manipulate PTE entries within their page tables (or at least this
> --
> 2.25.1
>
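For readers unfamiliar with the batching scheme the commit message refers to, the sketch below is a hypothetical, userspace-only model (not the core-mm or arm64 code) of how an iterator can consume pte_batch_hint() to skip the tail entries of a contpte block. The pte_t type, the PTE_CONT_BIT value and the table layout here are stand-ins chosen purely for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/*
 * Hypothetical model of a batch-aware PTE walk. "pte_t" is a plain
 * uint64_t and "contiguous" is modelled by a single bit; the real
 * arm64 definitions differ.
 */
#define CONT_PTES	16
#define PTE_CONT_BIT	(1ULL << 52)	/* stand-in for the real CONT bit */

typedef uint64_t pte_t;

static bool pte_valid_cont(pte_t pte)
{
	return pte & PTE_CONT_BIT;
}

static unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
	if (!pte_valid_cont(pte))
		return 1;
	return CONT_PTES - (((uintptr_t)ptep >> 3) & (CONT_PTES - 1));
}

int main(void)
{
	/* The real table is page-aligned; model that with _Alignas. */
	_Alignas(CONT_PTES * sizeof(pte_t)) pte_t table[64];
	unsigned long reads = 0;

	/* Mark the first two contpte blocks as contiguous mappings. */
	for (int i = 0; i < 64; i++)
		table[i] = (i < 32) ? PTE_CONT_BIT : 0;

	/* Walk the table, advancing by the hint each time. */
	for (int i = 0; i < 64; ) {
		pte_t pte = table[i];	/* one "ptep_get()" per batch */

		reads++;
		i += pte_batch_hint(&table[i], pte);
	}

	/* 2 reads for the two cont blocks + 32 for the rest = 34. */
	printf("reads = %lu\n", reads);
	return 0;
}
```

In this toy walk, the 32 contiguous entries cost only two reads instead of 32, which is the amplification the patch removes from fork() and the unmap paths.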
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ad04adb7b87f..353ea67b5d75 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1220,6 +1220,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 	__contpte_try_unfold(mm, addr, ptep, pte);
 }
 
+#define pte_batch_hint pte_batch_hint
+static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
+{
+	if (!pte_valid_cont(pte))
+		return 1;
+
+	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
+}
+
 /*
  * The below functions constitute the public API that arm64 presents to the
  * core-mm to manipulate PTE entries within their page tables (or at least this
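As a side note, the return expression above depends only on the pointer's position within a naturally aligned contpte block. The standalone sketch below, assuming 8-byte PTEs (hence the `>> 3`) and CONT_PTES == 16 as used by arm64 with a 4K base page size, illustrates that shift-and-mask arithmetic; it is an illustration, not kernel code.

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Standalone illustration of the pte_batch_hint() index arithmetic.
 * Assumes 8-byte table entries and CONT_PTES == 16; the real page
 * table is naturally aligned, modelled here with _Alignas.
 */
#define CONT_PTES 16

static unsigned int cont_entries_remaining(const uint64_t *ptep)
{
	/* Entry index within its naturally aligned contpte block... */
	unsigned int idx = ((uintptr_t)ptep >> 3) & (CONT_PTES - 1);

	/* ...so this many entries remain up to the block boundary. */
	return CONT_PTES - idx;
}

int main(void)
{
	_Alignas(CONT_PTES * sizeof(uint64_t)) uint64_t table[CONT_PTES];

	printf("%u\n", cont_entries_remaining(&table[0]));  /* 16 */
	printf("%u\n", cont_entries_remaining(&table[3]));  /* 13 */
	printf("%u\n", cont_entries_remaining(&table[15])); /*  1 */
	return 0;
}
```

The hint therefore shrinks toward the end of a block, so a batching caller never reads past the contpte boundary it is currently in.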