
[1/2] lazy tlb: fix hotplug exit race with MMU_LAZY_TLB_SHOOTDOWN

Message ID 20230524060455.147699-1-npiggin@gmail.com (mailing list archive)
State New

Commit Message

Nicholas Piggin May 24, 2023, 6:04 a.m. UTC
CPU unplug first calls __cpu_disable(), and that's where powerpc calls
cleanup_cpu_mmu_context(), which clears this CPU from mm_cpumask() of
all mms in the system.

However, this CPU may still be using a lazy tlb mm at that point, and its
bit gets cleared from that mm's mm_cpumask. The CPU does not switch away
from the lazy tlb mm until arch_cpu_idle_dead() calls idle_task_exit().

If that user mm exits in this window, it will not be subject to the lazy
tlb mm shootdown and may be freed while in use as a lazy mm by the CPU
that is being unplugged.

cleanup_cpu_mmu_context() could be moved later, but it looks better to
move the lazy tlb mm switching earlier. The problem with doing the lazy
mm switching in idle_task_exit() is explained in commit bf2c59fce4074
("sched/core: Fix illegal RCU from offline CPUs"), which added a wart to
switch away from the mm but leave it set in active_mm to be cleaned up
later.

So instead, switch away from the lazy tlb mm on the stopper kthread
before the CPU is taken down. This CPU will never switch to a user
thread from this point, so it has no chance to pick up a new lazy tlb
mm. This removes the lazy tlb mm handling wart in CPU unplug.

idle_task_exit() remains to reduce churn in the patch. It could be
removed entirely after this because finish_cpu() makes a similar check.
finish_cpu() itself is not strictly needed because init_mm will never
have its refcount drop to zero.  But it is conceptually nicer to keep it
rather than have the idle thread drop the reference on the mm it is
using.

Fixes: 2655421ae69fa ("lazy tlb: shoot lazies, non-refcounting lazy tlb mm reference handling scheme")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 include/linux/sched/hotplug.h |  2 ++
 kernel/cpu.c                  | 11 +++++++----
 kernel/sched/core.c           | 24 +++++++++++++++++++-----
 3 files changed, 28 insertions(+), 9 deletions(-)

Patch

diff --git a/include/linux/sched/hotplug.h b/include/linux/sched/hotplug.h
index 412cdaba33eb..cb447d8e3f9a 100644
--- a/include/linux/sched/hotplug.h
+++ b/include/linux/sched/hotplug.h
@@ -19,8 +19,10 @@  extern int sched_cpu_dying(unsigned int cpu);
 #endif
 
 #ifdef CONFIG_HOTPLUG_CPU
+extern void idle_task_prepare_exit(void);
 extern void idle_task_exit(void);
 #else
+static inline void idle_task_prepare_exit(void) {}
 static inline void idle_task_exit(void) {}
 #endif
 
diff --git a/kernel/cpu.c b/kernel/cpu.c
index f4a2c5845bcb..584def27ff24 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -618,12 +618,13 @@  static int finish_cpu(unsigned int cpu)
 	struct mm_struct *mm = idle->active_mm;
 
 	/*
-	 * idle_task_exit() will have switched to &init_mm, now
-	 * clean up any remaining active_mm state.
+	 * idle_task_prepare_exit() ensured the idle task was using
+	 * &init_mm. Now that the CPU has stopped, drop that refcount.
 	 */
-	if (mm != &init_mm)
-		idle->active_mm = &init_mm;
+	WARN_ON(mm != &init_mm);
+	idle->active_mm = NULL;
 	mmdrop_lazy_tlb(mm);
+
 	return 0;
 }
 
@@ -1030,6 +1031,8 @@  static int take_cpu_down(void *_param)
 	enum cpuhp_state target = max((int)st->target, CPUHP_AP_OFFLINE);
 	int err, cpu = smp_processor_id();
 
+	idle_task_prepare_exit();
+
 	/* Ensure this CPU doesn't handle any more interrupts. */
 	err = __cpu_disable();
 	if (err < 0)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a68d1276bab0..bc4ef1f3394b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9373,19 +9373,33 @@  void sched_setnuma(struct task_struct *p, int nid)
  * Ensure that the idle task is using init_mm right before its CPU goes
  * offline.
  */
-void idle_task_exit(void)
+void idle_task_prepare_exit(void)
 {
 	struct mm_struct *mm = current->active_mm;
 
-	BUG_ON(cpu_online(smp_processor_id()));
-	BUG_ON(current != this_rq()->idle);
+	WARN_ON(!irqs_disabled());
 
 	if (mm != &init_mm) {
-		switch_mm(mm, &init_mm, current);
+		mmgrab_lazy_tlb(&init_mm);
+		current->active_mm = &init_mm;
+		switch_mm_irqs_off(mm, &init_mm, current);
 		finish_arch_post_lock_switch();
+		mmdrop_lazy_tlb(mm);
 	}
+	/* finish_cpu() will mmdrop the init_mm ref after this CPU stops */
+}
+
+/*
+ * After the CPU is offline, double check that it was previously switched to
+ * init_mm. This call can be removed because the condition is caught in
+ * finish_cpu() as well.
+ */
+void idle_task_exit(void)
+{
+	BUG_ON(cpu_online(smp_processor_id()));
+	BUG_ON(current != this_rq()->idle);
 
-	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+	WARN_ON_ONCE(current->active_mm != &init_mm);
 }
 
 static int __balance_push_cpu_stop(void *arg)