[v6,5/5] powerpc/64s/radix: combine final TLB flush and lazy tlb mm shootdown IPIs

Message ID 20230118080011.2258375-6-npiggin@gmail.com (mailing list archive)
State New
Series shoot lazy tlbs

Commit Message

Nicholas Piggin Jan. 18, 2023, 8 a.m. UTC
** Not for merge **

CONFIG_MMU_LAZY_TLB_SHOOTDOWN requires IPIs to clear the "lazy tlb"
references to an mm that is being freed. With the radix MMU, the final
userspace exit TLB flush can be performed with IPIs, and those IPIs can
also clear the lazy tlb mm references, which mostly eliminates the final
IPIs required by MMU_LAZY_TLB_SHOOTDOWN.
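
For reference, the lazy tlb shootdown introduced earlier in this series
is conceptually an IPI to every CPU in mm_cpumask(mm) at mm teardown,
with each CPU switching away from the dying mm if it is still its lazy
active_mm. A simplified sketch of that scheme (not the exact
kernel/fork.c code; the helper shape here is for illustration only):

	static void do_shoot_lazy_tlb(void *arg)
	{
		struct mm_struct *mm = arg;

		/*
		 * If this CPU is still using the dying mm as its lazy
		 * active_mm, switch it over to init_mm so the mm can be
		 * freed without lazy refcounting.
		 */
		if (current->active_mm == mm) {
			WARN_ON_ONCE(current->mm);
			current->active_mm = &init_mm;
			switch_mm(mm, &init_mm, current);
		}
	}

	/* At mm teardown: IPI every CPU that may hold a lazy reference. */
	on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);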

This does mean the final TLB flush is not done with TLBIE, which can be
faster than IPI+TLBIEL, but we would have to do those IPIs for the lazy
shootdown anyway, so using TLBIEL should be a win.
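
Conceptually, each IPI in the combined final flush now does both jobs.
A rough sketch of the per-CPU handler's effect (illustrative only; the
function name and exact structure are assumed, see
do_exit_flush_lazy_tlb() in radix_tlb.c for the real code):

	/* Hypothetical combined handler, run on each CPU in mm_cpumask(mm). */
	static void exit_flush_and_shoot_lazy(void *arg)
	{
		struct mm_struct *mm = arg;

		/* Drop any lazy reference this CPU still holds on the mm. */
		if (current->active_mm == mm && !current->mm) {
			current->active_mm = &init_mm;
			switch_mm_irqs_off(mm, &init_mm, current);
		}

		/*
		 * Flush this CPU's TLB entries for the mm locally with
		 * tlbiel, rather than broadcasting a global tlbie later.
		 */
		_tlbiel_pid(mm->context.id, RIC_FLUSH_ALL);
	}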

The final cpumask test and possible IPIs are still needed to clean up
some rare race cases. We could prevent those entirely (e.g., prevent new
lazy tlb mm references if userspace has gone away, or move the final
TLB flush later), but I'd have to see actual numbers that matter before
adding any more complexity for it. I can't imagine it would ever be
worthwhile.

This takes lazy tlb mm shootdown IPI interrupts from 314 to 3 on a 144
CPU system doing a kernel compile. It also takes care of the one
potential problem workload, a short-lived process with multiple
CPU-bound threads that want to be spread to other CPUs, because the mm
exit happens after the process is back to single-threaded.

---
 arch/powerpc/mm/book3s64/radix_tlb.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

Patch

diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 282359ab525b..f34b78cb4c7d 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -1303,7 +1303,31 @@  void radix__tlb_flush(struct mmu_gather *tlb)
 	 * See the comment for radix in arch_exit_mmap().
 	 */
 	if (tlb->fullmm || tlb->need_flush_all) {
-		__flush_all_mm(mm, true);
+		if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
+			/*
+			 * Shootdown based lazy tlb mm refcounting means we
+			 * have to IPI everyone in the mm_cpumask anyway soon
+			 * when the mm goes away, so might as well do it as
+			 * part of the final flush now.
+			 *
+			 * If lazy shootdown was improved to reduce IPIs (e.g.,
+			 * by batching), then it may end up being better to use
+			 * tlbies here instead.
+			 */
+			smp_mb(); /* see radix__flush_tlb_mm */
+			exit_flush_lazy_tlbs(mm);
+			_tlbiel_pid(mm->context.id, RIC_FLUSH_ALL);
+
+			/*
+			 * It should not be possible to have coprocessors still
+			 * attached here.
+			 */
+			if (WARN_ON_ONCE(atomic_read(&mm->context.copros) > 0))
+				__flush_all_mm(mm, true);
+		} else {
+			__flush_all_mm(mm, true);
+		}
+
 	} else if ( (psize = radix_get_mmu_psize(page_size)) == -1) {
 		if (!tlb->freed_tables)
 			radix__flush_tlb_mm(mm);