
[v2,09/10] context_tracking: Invoke RCU-tasks enter/exit for NMI context

Message ID 20241009125127.18902-10-neeraj.upadhyay@kernel.org (mailing list archive)
State New
Series Make RCU Tasks scan idle tasks

Commit Message

Neeraj Upadhyay Oct. 9, 2024, 12:51 p.m. UTC
From: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>

rcu_task_enter() and rcu_task_exit() are not called on NMI entry and
exit, so a Tasks-RCU-Rude grace-period wait is currently required to
ensure that NMI handlers have entered/exited the Tasks-RCU extended
quiescent state (eqs). Architectures that mark all code sections where
RCU is not watching as noinstr do not need Tasks-RCU-Rude; once such
architectures switch to not using Tasks-RCU-Rude, NMI handler
transitions into and out of eqs must still be handled correctly for
Tasks-RCU holdout tasks running on nohz_full CPUs. As it is safe to
call these two functions from NMI context, remove the in_nmi() check.
This ensures that Tasks-RCU entry/exit is marked correctly for NMI
handlers as well.
With the check removed, all callers of ct_kernel_exit_state() and
ct_kernel_enter_state() also call rcu_task_exit() and
rcu_task_enter() respectively. So, fold the rcu_task_exit() and
rcu_task_enter() calls into ct_kernel_exit_state() and
ct_kernel_enter_state().

Reported-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: Frederic Weisbecker <frederic@kernel.org>
Suggested-by: "Paul E. McKenney" <paulmck@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
---
 kernel/context_tracking.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

Patch

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 938c48952d26..85ced563af23 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -91,6 +91,7 @@  static noinstr void ct_kernel_exit_state(int offset)
 	seq = ct_state_inc(offset);
 	// RCU is no longer watching.  Better be in extended quiescent state!
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & CT_RCU_WATCHING));
+	rcu_task_exit();
 }
 
 /*
@@ -102,6 +103,8 @@  static noinstr void ct_kernel_enter_state(int offset)
 {
 	int seq;
 
+	rcu_task_enter();
+
 	/*
 	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
 	 * and we also must force ordering with the next RCU read-side
@@ -149,7 +152,6 @@  static void noinstr ct_kernel_exit(bool user, int offset)
 	// RCU is watching here ...
 	ct_kernel_exit_state(offset);
 	// ... but is no longer watching here.
-	rcu_task_exit();
 }
 
 /*
@@ -173,7 +175,6 @@  static void noinstr ct_kernel_enter(bool user, int offset)
 		ct->nesting++;
 		return;
 	}
-	rcu_task_enter();
 	// RCU is not watching here ...
 	ct_kernel_enter_state(offset);
 	// ... but is watching here.
@@ -238,9 +239,6 @@  void noinstr ct_nmi_exit(void)
 	// RCU is watching here ...
 	ct_kernel_exit_state(CT_RCU_WATCHING);
 	// ... but is no longer watching here.
-
-	if (!in_nmi())
-		rcu_task_exit();
 }
 
 /**
@@ -273,9 +271,6 @@  void noinstr ct_nmi_enter(void)
 	 */
 	if (!rcu_is_watching_curr_cpu()) {
 
-		if (!in_nmi())
-			rcu_task_enter();
-
 		// RCU is not watching here ...
 		ct_kernel_enter_state(CT_RCU_WATCHING);
 		// ... but is watching here.
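
Not part of the patch: a rough sketch of how the two helpers read after
the fold, reconstructed from the hunks above. Lines outside the shown
hunk context are paraphrased placeholders, not the actual kernel source.

	/* Sketch only -- reconstructed from the hunks above. */
	static noinstr void ct_kernel_exit_state(int offset)
	{
		int seq;

		/* ... ordering comment as in the existing function ... */
		seq = ct_state_inc(offset);
		// RCU is no longer watching.  Better be in extended quiescent state!
		WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & CT_RCU_WATCHING));
		rcu_task_exit();	/* Now marked for every caller, NMI included. */
	}

	static noinstr void ct_kernel_enter_state(int offset)
	{
		int seq;

		rcu_task_enter();	/* Now marked for every caller, NMI included. */

		/*
		 * CPUs seeing atomic_add_return() must see prior idle sojourns,
		 * and we also must force ordering with the next RCU read-side
		 * critical section.
		 */
		seq = ct_state_inc(offset);
		/* ... WARN_ON_ONCE() that RCU is watching again, as before ... */
	}

With the calls folded in, ct_kernel_exit()/ct_kernel_enter() and
ct_nmi_exit()/ct_nmi_enter() no longer need their own rcu_task_exit()/
rcu_task_enter() calls or the in_nmi() checks, as the diff above shows.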