| Message ID | 20200930131859.16989-1-will@kernel.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op |
On Wed, Sep 30, 2020 at 02:18:59PM +0100, Will Deacon wrote:
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index f07333e86c2f..a696a7921da4 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
>  		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
>  	} while (pteval != old_pteval);
>  
> -	flush_tlb_fix_spurious_fault(vma, address);
> +	/* Invalidate a stale read-only entry */
> +	if (dirty)
> +		flush_tlb_page(vma, address);
>  	return 1;

In my proposal I had a pte_accessible(pte) check instead of dirty here
since we may go for an old pte directly to a writable one and a TLBI
wouldn't be needed. Not that it matters from a performance perspective.

Either way,

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
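The distinction Catalin raises can be modeled in plain user-space C (the bit names and helpers below are hypothetical stand-ins for the kernel's PTE accessors, not the real arm64 definitions): an old entry, whose access flag was never set, cannot have been cached by the TLB, so a `pte_accessible()` check on the old PTE skips the invalidation that the `dirty` check would still issue when going straight from old to writable.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical PTE bits, loosely modeled on arm64. */
#define AF    (1u << 0) /* access flag: entry may be TLB-cached once set */
#define WRITE (1u << 1)

/* Stand-in for the kernel's pte_accessible(): only an entry with the
 * access flag set can have been fetched into the TLB. */
static bool pte_accessible(uint32_t pte)
{
    return (pte & AF) != 0;
}

/* The patch's condition: invalidate whenever we are dirtying the page. */
static bool tlbi_needed_dirty(uint32_t old_pte, bool dirty)
{
    (void)old_pte;
    return dirty;
}

/* Catalin's alternative: invalidate only if the old entry could have
 * been cached by the TLB in the first place. */
static bool tlbi_needed_accessible(uint32_t old_pte, bool dirty)
{
    (void)dirty;
    return pte_accessible(old_pte);
}
```

For an old PTE (AF clear) made writable in one step, `tlbi_needed_dirty()` still returns true while `tlbi_needed_accessible()` returns false; both agree when a read-only but accessible entry is dirtied, which is why the difference is correctness-neutral and only a (negligible) performance question.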
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index bc68da9f5706..02ad3105c14c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -50,6 +50,14 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
 	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+/*
+ * Outside of a few very special situations (e.g. hibernation), we always
+ * use broadcast TLB invalidation instructions, therefore a spurious page
+ * fault on one CPU which has been handled concurrently by another CPU
+ * does not need to perform additional invalidation.
+ */
+#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
+
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
  * for zero-mapped memory areas etc..
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f07333e86c2f..a696a7921da4 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
 	} while (pteval != old_pteval);
 
-	flush_tlb_fix_spurious_fault(vma, address);
+	/* Invalidate a stale read-only entry */
+	if (dirty)
+		flush_tlb_page(vma, address);
 	return 1;
 }
Our use of broadcast TLB maintenance means that spurious page-faults
that have been handled already by another CPU do not require additional
TLB maintenance.

Make flush_tlb_fix_spurious_fault() a no-op and rely on the existing TLB
invalidation instead. Add an explicit flush_tlb_page() when making a page
dirty, as the TLB is permitted to cache the old read-only entry.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200728092220.GA21800@willie-the-truck
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/pgtable.h | 8 ++++++++
 arch/arm64/mm/fault.c            | 4 +++-
 2 files changed, 11 insertions(+), 1 deletion(-)
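The retry loop that the patch modifies can be sketched as a self-contained user-space model (the flag bits, the `set_access_flags()` helper, and the flush counter below are all hypothetical stand-ins, not the kernel's real `ptep_set_access_flags()`): new permission bits are merged into the PTE with a compare-and-swap loop so that concurrent hardware or CPU updates are never lost, and a TLB invalidation is issued only on the dirtying path, where a stale read-only entry may still be cached.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bits standing in for arm64 PTE permission bits. */
#define PTE_AF    (1u << 0)  /* access flag */
#define PTE_DIRTY (1u << 1)  /* software dirty */

static int tlb_flushes; /* counts simulated flush_tlb_page() calls */

/*
 * Model of the patched logic: retry a compare-and-swap until the new
 * flags are merged into the PTE (so bits set concurrently by another
 * CPU survive), then invalidate only when making the entry dirty,
 * i.e. when a stale read-only entry may be cached in the TLB.
 */
static int set_access_flags(_Atomic uint32_t *ptep, uint32_t new_flags,
                            bool dirty)
{
    uint32_t old = atomic_load(ptep);
    uint32_t merged;

    do {
        merged = old | new_flags;   /* never drop bits set by others */
    } while (!atomic_compare_exchange_weak(ptep, &old, merged));

    if (dirty)
        tlb_flushes++;              /* stands in for flush_tlb_page() */
    return 1;
}
```

The access-fault path (setting only the access flag) goes through the same loop but skips the invalidation entirely, which is exactly the case the no-op `flush_tlb_fix_spurious_fault()` now covers: broadcast TLBI from whichever CPU handled the fault first has already done the work.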