| Message ID | 20210721115118.729943-3-valentin.schneider@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | sched: migrate_disable() vs per-CPU access safety checks |
On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> Running v5.13-rt1 on my arm64 Juno board triggers:
>
> [ 0.156302] =============================
> [ 0.160416] WARNING: suspicious RCU usage
> [ 0.164529] 5.13.0-rt1 #20 Not tainted
> [ 0.168300] -----------------------------
> [ 0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
> [ 0.179920]
> [ 0.179920] other info that might help us debug this:
> [ 0.179920]
> [ 0.188037]
> [ 0.188037] rcu_scheduler_active = 1, debug_locks = 1
> [ 0.194677] 3 locks held by rcuc/0/11:
> [ 0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
> [ 0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
> [ 0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
> [ 0.226428]
> [ 0.226428] stack backtrace:
> [ 0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
> [ 0.237100] Hardware name: ARM Juno development board (r0) (DT)
> [ 0.243041] Call trace:
> [ 0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
> [ 0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
> [ 0.252522] dump_stack (lib/dump_stack.c:122)
> [ 0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
> [ 0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
> [ 0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
> [ 0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
> [ 0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
> [ 0.275767] kthread (kernel/kthread.c:321)
> [ 0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)
>
> In this case, this is the RCU core kthread accessing the local CPU's
> rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().
>
> Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
> incrementing the preempt_count, which satisfies the "local non-preemptible
> read" of rcu_rdp_is_offloaded().
>
> Under CONFIG_PREEMPT_RT however, this becomes
>
>   local_lock(&softirq_ctrl.lock)
>
> which, under the same config, is migrate_disable() + rt_spin_lock().
> This *does* prevent the task from migrating away, but not in a way
> rcu_rdp_is_offloaded() can notice. Note that the invoking task is an
> smpboot thread, and thus cannot be migrated away in the first place.
>
> Check is_pcpu_safe() here rather than preemptible().
>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>

Acked-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  kernel/rcu/tree_plugin.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index ad0156b86937..6c3c4100da83 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>  	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
>  	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>  	     rcu_lockdep_is_held_nocb(rdp) ||
> -	     (rdp == this_cpu_ptr(&rcu_data) &&
> -	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> +	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
>  	     rcu_current_is_nocb_kthread(rdp) ||
>  	     rcu_running_nocb_timer(rdp)),
>  	"Unsafe read of RCU_NOCB offloaded state"
> --
> 2.25.1
>
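For context, is_pcpu_safe() is introduced by an earlier patch in this series. A minimal sketch of what such a helper could look like — illustrative only, not necessarily the series' exact definition (note that preemptible() is constant-false on !CONFIG_PREEMPT_COUNT kernels, which is why the code being replaced guarded it with IS_ENABLED(CONFIG_PREEMPT_COUNT)):

    /*
     * Illustrative sketch, not the series' exact code: per-CPU data is
     * safe to access if the current task either cannot be preempted or
     * cannot migrate off this CPU.
     */
    static inline bool is_pcpu_safe(void)
    {
    #ifdef CONFIG_SMP
    	/*
    	 * migrate_disable() pins the task to its CPU without raising
    	 * preempt_count -- this is what PREEMPT_RT's local_lock() does,
    	 * and what the preemptible()-based check fails to notice.
    	 */
    	return !preemptible() || is_migration_disabled(current);
    #else
    	return true;
    #endif
    }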
On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
>  kernel/rcu/tree_plugin.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index ad0156b86937..6c3c4100da83 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>  	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
>  	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>  	     rcu_lockdep_is_held_nocb(rdp) ||
> -	     (rdp == this_cpu_ptr(&rcu_data) &&
> -	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> +	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||

I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
on the local rdp to have preemption disabled and not just migration disabled,
because we must protect against concurrent offloaded state changes.

The offloaded state is changed by a workqueue that executes on the target rdp.

Here is a practical example where it matters:

           CPU 0
           -----
    // =======> task rcuc running
    rcu_core {
        rcu_nocb_lock_irqsave(rdp, flags) {
            if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
                // is not offloaded right now, so it's going
                // to just disable IRQs. Oh no wait:
                // preemption
                // ========> workqueue running
                rcu_nocb_rdp_offload();
                // ========> task rcuc resume
                local_irq_disable();
            }
        }
        ....
        rcu_nocb_unlock_irqrestore(rdp, flags) {
            if (rcu_segcblist_is_offloaded(rdp->cblist)) {
                // is offloaded right now so:
                raw_spin_unlock_irqrestore(rdp, flags);

And that will explode because that's an impaired unlock on nocb_lock.
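The asymmetry Frederic is pointing at boils down to the following shape — a simplified paraphrase of the v5.13-era rcu_nocb_lock_irqsave()/rcu_nocb_unlock_irqrestore() pair in the RCU tree code (the in-tree versions carry extra lockdep and ordering detail; the *_sketch names are just for this illustration):

    /* Lock side: only takes ->nocb_lock if the rdp is offloaded. */
    #define nocb_lock_irqsave_sketch(rdp, flags)                          \
    do {                                                                  \
    	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist))              \
    		local_irq_save(flags);          /* no lock acquired */\
    	else                                                          \
    		raw_spin_lock_irqsave(&(rdp)->nocb_lock, flags);      \
    } while (0)

    /* Unlock side: re-evaluates the offloaded state from scratch. */
    static void nocb_unlock_irqrestore_sketch(struct rcu_data *rdp,
    					      unsigned long flags)
    {
    	if (rcu_segcblist_is_offloaded(&rdp->cblist)) {
    		/*
    		 * If the state flipped after the lock side saw "not
    		 * offloaded", this releases a lock that was never
    		 * taken -- Frederic's "impaired unlock".
    		 */
    		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
    	} else {
    		local_irq_restore(flags);
    	}
    }

With preemption disabled across the critical section, the two evaluations of the offloaded state cannot diverge; with only migration disabled, the on-CPU (de-)offloading workqueue can run in between.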
On 28/07/21 01:08, Frederic Weisbecker wrote:
> On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
>> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
>> ---
>>  kernel/rcu/tree_plugin.h | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
>> index ad0156b86937..6c3c4100da83 100644
>> --- a/kernel/rcu/tree_plugin.h
>> +++ b/kernel/rcu/tree_plugin.h
>> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>>  	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
>>  	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>>  	     rcu_lockdep_is_held_nocb(rdp) ||
>> -	     (rdp == this_cpu_ptr(&rcu_data) &&
>> -	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
>> +	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
>
> I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> on the local rdp to have preemption disabled and not just migration disabled,
> because we must protect against concurrent offloaded state changes.
>
> The offloaded state is changed by a workqueue that executes on the target rdp.
>
> Here is a practical example where it matters:
>
>            CPU 0
>            -----
>     // =======> task rcuc running
>     rcu_core {
>         rcu_nocb_lock_irqsave(rdp, flags) {
>             if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
>                 // is not offloaded right now, so it's going
>                 // to just disable IRQs. Oh no wait:
>                 // preemption
>                 // ========> workqueue running
>                 rcu_nocb_rdp_offload();
>                 // ========> task rcuc resume
>                 local_irq_disable();
>             }
>         }
>         ....
>         rcu_nocb_unlock_irqrestore(rdp, flags) {
>             if (rcu_segcblist_is_offloaded(rdp->cblist)) {
>                 // is offloaded right now so:
>                 raw_spin_unlock_irqrestore(rdp, flags);
>
> And that will explode because that's an impaired unlock on nocb_lock.

Harumph, that doesn't look good, thanks for pointing this out.

AFAICT PREEMPT_RT doesn't actually require to disable softirqs here (since
it forces RCU callbacks on the RCU kthreads), but disabled softirqs seem to
be a requirement for much of the underlying functions and even some of the
callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
in_interrupt() for instance).

Now, if the offloaded state was (properly) protected by a local_lock, do
you reckon we could then keep preemption enabled?

From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
but it's a *raw* spinlock (I can't tell right now whether changing this is
a horrible idea or not), and then there's 81c0b3d724f4 ("rcu/nocb: Avoid
->nocb_lock capture by corresponding CPU") on top...
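For reference, the shape of the arrangement Valentin is floating, using the local_lock API — a hypothetical sketch only (the lock name and placement are invented for illustration; nobody has posted this code):

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    /* Hypothetical per-CPU lock guarding the offloaded state. */
    static DEFINE_PER_CPU(local_lock_t, nocb_state_lock) =
    	INIT_LOCAL_LOCK(nocb_state_lock);

    /*
     * Reader side (e.g. rcuc): on !PREEMPT_RT local_lock() disables
     * preemption; on PREEMPT_RT it takes a per-CPU rt_spin_lock, so the
     * task stays preemptible but is serialized against the update side.
     */
    static void nocb_state_read_sketch(void)
    {
    	local_lock(&nocb_state_lock);
    	/* ... observe the cblist offloaded state consistently ... */
    	local_unlock(&nocb_state_lock);
    }

    /*
     * Update side: the (de-)offloading work item, which runs on the
     * target CPU, would take the same per-CPU lock around the state
     * flip, making the reader's check-then-act sequences atomic with
     * respect to the flip.
     */
    static void nocb_state_update_sketch(void)
    {
    	local_lock(&nocb_state_lock);
    	/* ... flip the offloaded state of this CPU's cblist ... */
    	local_unlock(&nocb_state_lock);
    }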
On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
> On 28/07/21 01:08, Frederic Weisbecker wrote:
> > On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> >> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> >> ---
> >>  kernel/rcu/tree_plugin.h | 3 +--
> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> >>
> >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> >> index ad0156b86937..6c3c4100da83 100644
> >> --- a/kernel/rcu/tree_plugin.h
> >> +++ b/kernel/rcu/tree_plugin.h
> >> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
> >>  	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
> >>  	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
> >>  	     rcu_lockdep_is_held_nocb(rdp) ||
> >> -	     (rdp == this_cpu_ptr(&rcu_data) &&
> >> -	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> >> +	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
> >
> > I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> > on the local rdp to have preemption disabled and not just migration disabled,
> > because we must protect against concurrent offloaded state changes.
> >
> > The offloaded state is changed by a workqueue that executes on the target rdp.
> >
> > Here is a practical example where it matters:
> >
> >            CPU 0
> >            -----
> >     // =======> task rcuc running
> >     rcu_core {
> >         rcu_nocb_lock_irqsave(rdp, flags) {
> >             if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
> >                 // is not offloaded right now, so it's going
> >                 // to just disable IRQs. Oh no wait:
> >                 // preemption
> >                 // ========> workqueue running
> >                 rcu_nocb_rdp_offload();
> >                 // ========> task rcuc resume
> >                 local_irq_disable();
> >             }
> >         }
> >         ....
> >         rcu_nocb_unlock_irqrestore(rdp, flags) {
> >             if (rcu_segcblist_is_offloaded(rdp->cblist)) {
> >                 // is offloaded right now so:
> >                 raw_spin_unlock_irqrestore(rdp, flags);
> >
> > And that will explode because that's an impaired unlock on nocb_lock.
>
> Harumph, that doesn't look good, thanks for pointing this out.
>
> AFAICT PREEMPT_RT doesn't actually require to disable softirqs here (since
> it forces RCU callbacks on the RCU kthreads), but disabled softirqs seem to
> be a requirement for much of the underlying functions and even some of the
> callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
> in_interrupt() for instance).
>
> Now, if the offloaded state was (properly) protected by a local_lock, do
> you reckon we could then keep preemption enabled?

I guess we could take such a local lock on the update side
(rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
and maybe other places.

But we must make sure that rcu_core() is preempt-safe from a general perspective
in the first place. From a quick glance I can't find obvious issues...yet.

Paul maybe you can see something?

> From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
> but it's a *raw* spinlock (I can't tell right now whether changing this is
> a horrible idea or not), and then there's

Yeah that's not possible, nocb_lock is too low level and has to be called with
IRQs disabled. So if we take that local_lock solution, we need a new lock.

Thanks.
On Thu, Jul 29, 2021 at 12:01:37AM +0200, Frederic Weisbecker wrote:
> On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
> > On 28/07/21 01:08, Frederic Weisbecker wrote:
> > > On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> > >> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> > >> ---
> > >>  kernel/rcu/tree_plugin.h | 3 +--
> > >>  1 file changed, 1 insertion(+), 2 deletions(-)
> > >>
> > >> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > >> index ad0156b86937..6c3c4100da83 100644
> > >> --- a/kernel/rcu/tree_plugin.h
> > >> +++ b/kernel/rcu/tree_plugin.h
> > >> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
> > >>  	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
> > >>  	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
> > >>  	     rcu_lockdep_is_held_nocb(rdp) ||
> > >> -	     (rdp == this_cpu_ptr(&rcu_data) &&
> > >> -	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> > >> +	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
> > >
> > > I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
> > > on the local rdp to have preemption disabled and not just migration disabled,
> > > because we must protect against concurrent offloaded state changes.
> > >
> > > The offloaded state is changed by a workqueue that executes on the target rdp.
> > >
> > > Here is a practical example where it matters:
> > >
> > >            CPU 0
> > >            -----
> > >     // =======> task rcuc running
> > >     rcu_core {
> > >         rcu_nocb_lock_irqsave(rdp, flags) {
> > >             if (!rcu_segcblist_is_offloaded(rdp->cblist)) {
> > >                 // is not offloaded right now, so it's going
> > >                 // to just disable IRQs. Oh no wait:
> > >                 // preemption
> > >                 // ========> workqueue running
> > >                 rcu_nocb_rdp_offload();
> > >                 // ========> task rcuc resume
> > >                 local_irq_disable();
> > >             }
> > >         }
> > >         ....
> > >         rcu_nocb_unlock_irqrestore(rdp, flags) {
> > >             if (rcu_segcblist_is_offloaded(rdp->cblist)) {
> > >                 // is offloaded right now so:
> > >                 raw_spin_unlock_irqrestore(rdp, flags);
> > >
> > > And that will explode because that's an impaired unlock on nocb_lock.
> >
> > Harumph, that doesn't look good, thanks for pointing this out.
> >
> > AFAICT PREEMPT_RT doesn't actually require to disable softirqs here (since
> > it forces RCU callbacks on the RCU kthreads), but disabled softirqs seem to
> > be a requirement for much of the underlying functions and even some of the
> > callbacks (delayed_put_task_struct() ~> vfree() pays close attention to
> > in_interrupt() for instance).
> >
> > Now, if the offloaded state was (properly) protected by a local_lock, do
> > you reckon we could then keep preemption enabled?
>
> I guess we could take such a local lock on the update side
> (rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
> and maybe other places.
>
> But we must make sure that rcu_core() is preempt-safe from a general perspective
> in the first place. From a quick glance I can't find obvious issues...yet.
>
> Paul maybe you can see something?

Let's see...

o   Extra context switches in rcu_core() mean extra quiescent
    states.  It therefore might be necessary to wrap rcu_core()
    in an rcu_read_lock() / rcu_read_unlock() pair, because
    otherwise an RCU grace period won't wait for rcu_core().

    Actually, better have local_bh_disable() imply
    rcu_read_lock() and local_bh_enable() imply rcu_read_unlock().
    But I would hope that this already happened.

o   The rcu_preempt_deferred_qs() check should still be fine,
    unless there is a raw_bh_disable() in -rt.

o   The set_tsk_need_resched() and set_preempt_need_resched()
    might preempt immediately.  I cannot think of a problem
    with that, but careful testing is clearly in order.

o   The values checked by rcu_check_quiescent_state() could now
    change while this function is running.  I don't immediately
    see a problematic sequence of events, but here be dragons.
    I therefore suggest disabling preemption across this function.
    Or if that is impossible, taking a very careful look at the
    proposed expansion of the state space of this function.

o   I don't see any new races in the grace-period/callback check.
    New callbacks can appear in interrupt handlers, after all.

o   The rcu_check_gp_start_stall() function looks similarly
    unproblematic.

o   Callback invocation can now be preempted, but then again it
    recently started being concurrent, so this should be no
    added risk over offloading/de-offloading.

o   I don't see any problem with do_nocb_deferred_wakeup().

o   The CONFIG_RCU_STRICT_GRACE_PERIOD check should not be
    impacted.

So some adjustments might be needed, but I don't see a need for
major surgery.

This of course might be a failure of imagination on my part, so it
wouldn't hurt to double-check my observations.

> > From a naive outsider PoV, rdp->nocb_lock looks like a decent candidate,
> > but it's a *raw* spinlock (I can't tell right now whether changing this is
> > a horrible idea or not), and then there's
>
> Yeah that's not possible, nocb_lock is too low level and has to be called with
> IRQs disabled. So if we take that local_lock solution, we need a new lock.

No argument here!

							Thanx, Paul
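Paul's first point can be cross-checked against the lockdep splat at the top of the thread, which shows __local_bh_disable_ip() holding rcu_read_lock on RT. Very roughly, the RT implementation has this shape — a simplified sketch; the real __local_bh_disable_ip() in kernel/softirq.c also handles nesting counts, accounting, and lockdep annotations elided here:

    /* Rough sketch of PREEMPT_RT's local_bh_disable() path. */
    static void local_bh_disable_rt_sketch(void)
    {
    	/*
    	 * On RT, local_lock() = migrate_disable() + a per-CPU
    	 * rt_spin_lock() (lock #0 in the splat), and rt_spin_lock()
    	 * itself enters an RCU read-side critical section (lock #1).
    	 */
    	local_lock(&softirq_ctrl.lock);
    	/*
    	 * The BH-disabled region is additionally covered by an explicit
    	 * RCU read-side critical section (lock #2), so a grace period
    	 * cannot complete inside it.  Preemption, however, stays
    	 * enabled: preempt_count is never raised.
    	 */
    	rcu_read_lock();
    }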
On 28/07/21 18:04, Paul E. McKenney wrote:
> On Thu, Jul 29, 2021 at 12:01:37AM +0200, Frederic Weisbecker wrote:
>> On Wed, Jul 28, 2021 at 08:34:14PM +0100, Valentin Schneider wrote:
>> > Now, if the offloaded state was (properly) protected by a local_lock, do
>> > you reckon we could then keep preemption enabled?
>>
>> I guess we could take such a local lock on the update side
>> (rcu_nocb_rdp_offload) and then take it on rcuc kthread/softirqs
>> and maybe other places.
>>
>> But we must make sure that rcu_core() is preempt-safe from a general perspective
>> in the first place. From a quick glance I can't find obvious issues...yet.
>>
>> Paul maybe you can see something?
>
> Let's see...
>
> o   Extra context switches in rcu_core() mean extra quiescent
>     states.  It therefore might be necessary to wrap rcu_core()
>     in an rcu_read_lock() / rcu_read_unlock() pair, because
>     otherwise an RCU grace period won't wait for rcu_core().
>
>     Actually, better have local_bh_disable() imply
>     rcu_read_lock() and local_bh_enable() imply rcu_read_unlock().
>     But I would hope that this already happened.

It does look like it.

> o   The rcu_preempt_deferred_qs() check should still be fine,
>     unless there is a raw_bh_disable() in -rt.
>
> o   The set_tsk_need_resched() and set_preempt_need_resched()
>     might preempt immediately.  I cannot think of a problem
>     with that, but careful testing is clearly in order.
>
> o   The values checked by rcu_check_quiescent_state() could now
>     change while this function is running.  I don't immediately
>     see a problematic sequence of events, but here be dragons.
>     I therefore suggest disabling preemption across this function.
>     Or if that is impossible, taking a very careful look at the
>     proposed expansion of the state space of this function.
>
> o   I don't see any new races in the grace-period/callback check.
>     New callbacks can appear in interrupt handlers, after all.
>
> o   The rcu_check_gp_start_stall() function looks similarly
>     unproblematic.
>
> o   Callback invocation can now be preempted, but then again it
>     recently started being concurrent, so this should be no
>     added risk over offloading/de-offloading.
>
> o   I don't see any problem with do_nocb_deferred_wakeup().
>
> o   The CONFIG_RCU_STRICT_GRACE_PERIOD check should not be
>     impacted.
>
> So some adjustments might be needed, but I don't see a need for
> major surgery.
>
> This of course might be a failure of imagination on my part, so it
> wouldn't hurt to double-check my observations.
>

I'll go poke around, thank you both!
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index ad0156b86937..6c3c4100da83 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
 	   !(lockdep_is_held(&rcu_state.barrier_mutex) ||
 	     (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
 	     rcu_lockdep_is_held_nocb(rdp) ||
-	     (rdp == this_cpu_ptr(&rcu_data) &&
-	      !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
+	     (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
 	     rcu_current_is_nocb_kthread(rdp) ||
 	     rcu_running_nocb_timer(rdp)),
 	"Unsafe read of RCU_NOCB offloaded state"
Running v5.13-rt1 on my arm64 Juno board triggers:

[ 0.156302] =============================
[ 0.160416] WARNING: suspicious RCU usage
[ 0.164529] 5.13.0-rt1 #20 Not tainted
[ 0.168300] -----------------------------
[ 0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
[ 0.179920]
[ 0.179920] other info that might help us debug this:
[ 0.179920]
[ 0.188037]
[ 0.188037] rcu_scheduler_active = 1, debug_locks = 1
[ 0.194677] 3 locks held by rcuc/0/11:
[ 0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
[ 0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
[ 0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
[ 0.226428]
[ 0.226428] stack backtrace:
[ 0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
[ 0.237100] Hardware name: ARM Juno development board (r0) (DT)
[ 0.243041] Call trace:
[ 0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
[ 0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
[ 0.252522] dump_stack (lib/dump_stack.c:122)
[ 0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
[ 0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
[ 0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
[ 0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
[ 0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
[ 0.275767] kthread (kernel/kthread.c:321)
[ 0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)

In this case, this is the RCU core kthread accessing the local CPU's
rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().

Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
incrementing the preempt_count, which satisfies the "local non-preemptible
read" of rcu_rdp_is_offloaded().

Under CONFIG_PREEMPT_RT however, this becomes

  local_lock(&softirq_ctrl.lock)

which, under the same config, is migrate_disable() + rt_spin_lock().
This *does* prevent the task from migrating away, but not in a way
rcu_rdp_is_offloaded() can notice. Note that the invoking task is an
smpboot thread, and thus cannot be migrated away in the first place.

Check is_pcpu_safe() here rather than preemptible().

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/rcu/tree_plugin.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
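To spell out the distinction the changelog draws — purely illustrative pseudocode, not kernel source (the function name is invented; the rcu_data access just mirrors the call site):

    /* Illustrative only: what each configuration guarantees here. */
    static void bh_disable_contrast_sketch(void)
    {
    	struct rcu_data *rdp;

    	/* !CONFIG_PREEMPT_RT: local_bh_disable() raises the
    	 * softirq count, which is part of preempt_count. */
    	local_bh_disable();
    	rdp = this_cpu_ptr(&rcu_data);	/* stable: preemption is off */
    	WARN_ON(preemptible());		/* never fires */
    	local_bh_enable();

    	/* CONFIG_PREEMPT_RT: local_bh_disable() becomes
    	 * migrate_disable() + rt_spin_lock(&softirq_ctrl.lock). */
    	local_bh_disable();
    	rdp = this_cpu_ptr(&rcu_data);	/* still stable: task is pinned */
    	WARN_ON(preemptible());		/* fires: preempt_count is
    					 * unchanged, which is what the
    					 * old preemptible()-based
    					 * lockdep check keyed on */
    	local_bh_enable();
    }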