[v2,04/10] rcu/tasks: Create rcu_idle_task_is_holdout() definition for !SMP

Message ID 20241009125127.18902-5-neeraj.upadhyay@kernel.org (mailing list archive)
State New
Headers show
Series Make RCU Tasks scan idle tasks | expand

Commit Message

Neeraj Upadhyay Oct. 9, 2024, 12:51 p.m. UTC
From: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>

rcu_idle_task_is_holdout() is called from rcu_tasks_kthread() context.
Because idle tasks cannot be involuntarily preempted, an idle task that
is not currently running cannot be in an RCU-tasks critical section. On
!SMP (which also covers TINY_RCU) there is only one CPU, so the idle
task cannot be running while rcu_tasks_kthread() is. Therefore the idle
task is never an RCU-tasks holdout on !SMP.
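
For illustration only, a minimal sketch of how a per-CPU idle-task scan
could consult this predicate. The helper below and its use of
rcu_tasks_pertask() are assumptions for this example, not part of this
patch:

	/*
	 * Hypothetical sketch: walk the idle tasks and queue any that the
	 * predicate still considers holdouts, the way rcu_tasks_pertask()
	 * queues ordinary tasks.  On !SMP the predicate is always false,
	 * so no idle task is ever queued.
	 */
	static void rcu_tasks_scan_idle_tasks(struct list_head *hop)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct task_struct *t = idle_task(cpu);

			if (rcu_idle_task_is_holdout(t, cpu))
				rcu_tasks_pertask(t, hop);
		}
	}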

Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
---
 kernel/rcu/tasks.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Patch

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 56015ced3f37..b794deeaf6d8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -976,6 +976,7 @@  static void rcu_tasks_pregp_step(struct list_head *hop)
 	synchronize_rcu();
 }
 
+#ifdef CONFIG_SMP
 static bool rcu_idle_task_is_holdout(struct task_struct *t, int cpu)
 {
 	/* Idle tasks on offline CPUs are RCU-tasks quiescent states. */
@@ -984,6 +985,17 @@  static bool rcu_idle_task_is_holdout(struct task_struct *t, int cpu)
 
 	return true;
 }
+#else /* #ifdef CONFIG_SMP */
+static inline bool rcu_idle_task_is_holdout(struct task_struct *t, int cpu)
+{
+	/*
+	 * rcu_idle_task_is_holdout() is called from rcu_tasks_kthread()
+	 * context, so the idle task must already have done a voluntary
+	 * context switch.
+	 */
+	return false;
+}
+#endif
 
 /* Check for quiescent states since the pregp's synchronize_rcu() */
 static bool rcu_tasks_is_holdout(struct task_struct *t)