
[22/23] x86/mm: Optimize for_each_possible_lazymm_cpu()

Message ID 13849aa0218e0f32ac16b82950c682395a8fb5c7.1641659630.git.luto@kernel.org (mailing list archive)
State New
Series mm, sched: Rework lazy mm handling

Commit Message

Andy Lutomirski Jan. 8, 2022, 4:44 p.m. UTC
Now that x86 no longer switches away from a lazy mm behind the scheduler's
back, and therefore no longer clears a CPU from mm_cpumask() while the
scheduler still thinks that CPU is lazily using the mm, x86 can use
mm_cpumask() to optimize for_each_possible_lazymm_cpu().

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/mmu.h | 4 ++++
 arch/x86/mm/tlb.c          | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)
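
For context, a minimal sketch of what the override buys.  The generic
fallback below is an assumption for illustration only; it is not quoted
from the rest of the series.  Without architecture help, a walk over
possibly-lazy CPUs has to visit every possible CPU:

/*
 * Assumed generic fallback (illustration only): with no bound on the
 * set of CPUs that might be lazily using @mm, visit all of them.
 */
#ifndef for_each_possible_lazymm_cpu
#define for_each_possible_lazymm_cpu(cpu, mm)	for_each_possible_cpu((cpu))
#endif

With the x86 override added by this patch, the same callers only walk
mm_cpumask(mm), which for a typical process is a small subset of the
CPUs on a large machine.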

Patch

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 03ba71420ff3..da55f768e68c 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -63,5 +63,9 @@  typedef struct {
 		.lock = __MUTEX_INITIALIZER(mm.context.lock),		\
 	}
 
+/* On x86, mm_cpumask(mm) contains all CPUs that might be lazily using mm */
+#define for_each_possible_lazymm_cpu(cpu, mm) \
+	for_each_cpu((cpu), mm_cpumask((mm)))
+
 
 #endif /* _ASM_X86_MMU_H */
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 225b407812c7..04eb43e96e23 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -706,7 +706,9 @@  temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 	/*
 	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
 	 * with a stale address space WITHOUT being in lazy mode after
-	 * restoring the previous mm.
+	 * restoring the previous mm.  Additionally, once we switch mms,
+	 * for_each_possible_lazymm_cpu() will no longer report this CPU,
+	 * so a lazymm pin wouldn't work.
 	 */
 	if (this_cpu_read(cpu_tlbstate_shared.is_lazy))
 		unlazy_mm_irqs_off();