@@ -439,11 +439,9 @@ not allow to acquire p->lock because get_cpu_ptr() implicitly disables
     spin_lock(&p->lock);
     p->count += this_cpu_read(var2);
 
-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-which makes the above code fully equivalent. On a PREEMPT_RT kernel
 migrate_disable() ensures that the task is pinned on the current CPU which
 in turn guarantees that the per-CPU access to var1 and var2 are staying on
-the same CPU.
+the same CPU while the task remains preemptible.
 
 The migrate_disable() substitution is not valid for the following
 scenario::
@@ -456,9 +454,8 @@ The migrate_disable() substitution is not valid for the following
     p = this_cpu_ptr(&var1);
     p->val = func2();
 
-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-here migrate_disable() does not protect against reentrancy from a
-preempting task. A correct substitution for this case is::
+This breaks because migrate_disable() does not protect against reentrancy from
+a preempting task. A correct substitution for this case is::
 
   func()
   {
The initial implementation of migrate_disable() for mainline was a
wrapper around preempt_disable(). RT kernels substituted this with a
real migrate disable implementation. Later on, mainline gained true
migrate disable support, but the documentation was not updated.

Update the documentation and remove the claims about migrate_disable()
mapping to preempt_disable() on non-PREEMPT_RT kernels.

Fixes: 74d862b682f51 ("sched: Make migrate_disable/enable() independent of RT")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/locking/locktypes.rst | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
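
To make the documented semantics concrete: with a true migrate_disable()
implementation the task stays on its CPU but remains preemptible, so a
multi-step per-CPU update still needs explicit serialization. Below is a
minimal sketch (not part of locktypes.rst or this patch), reusing the
struct foo, var1 and func2() names from the document's examples; includes
and lock initialization are elided::

  /* Hypothetical sketch; the struct layout and func2() are assumed. */
  struct foo {
    spinlock_t lock;
    int val;
  };

  static DEFINE_PER_CPU(struct foo, var1);

  static void func(void)
  {
    struct foo *p;

    migrate_disable();            /* task is now pinned to this CPU */
    p = this_cpu_ptr(&var1);      /* so this per-CPU pointer stays valid */
    spin_lock(&p->lock);          /* but the task is still preemptible, so
                                   * the read-modify-write is serialized
                                   * by the lock */
    p->val = func2();
    spin_unlock(&p->lock);
    migrate_enable();
  }

Taking p->lock serializes the update against a preempting task on the same
CPU, which is exactly the reentrancy problem the second hunk describes.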