From patchwork Thu Jan 5 00:22:56 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13089231
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, Zhao Mengmeng, "Paul E. McKenney"
Subject: [PATCH rcu 01/10] rcu: Use hlist_nulls_next_rcu() in hlist_nulls_add_tail_rcu()
Date: Wed, 4 Jan 2023 16:22:56 -0800
Message-Id: <20230105002305.1768591-1-paulmck@kernel.org>
In-Reply-To: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>

From: Zhao Mengmeng

In commit 8dbd76e79a16 ("tcp/dccp: fix possible race in
__inet_lookup_established()"), the function hlist_nulls_add_tail_rcu()
was added back.  However, the local variable *last* is of type
hlist_nulls_node, so use hlist_nulls_next_rcu() instead of
hlist_next_rcu().

Signed-off-by: Zhao Mengmeng
Signed-off-by: Paul E. McKenney
---
 include/linux/rculist_nulls.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
index d8afdb8784c1c..ba4c00dd8005a 100644
--- a/include/linux/rculist_nulls.h
+++ b/include/linux/rculist_nulls.h
@@ -139,7 +139,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
 	if (last) {
 		n->next = last->next;
 		n->pprev = &last->next;
-		rcu_assign_pointer(hlist_next_rcu(last), n);
+		rcu_assign_pointer(hlist_nulls_next_rcu(last), n);
 	} else {
 		hlist_nulls_add_head_rcu(n, h);
 	}
From patchwork Thu Jan 5 00:22:57 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13089234
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 02/10] rcu: Consolidate initialization and CPU-hotplug code
Date: Wed, 4 Jan 2023 16:22:57 -0800
Message-Id: <20230105002305.1768591-2-paulmck@kernel.org>
In-Reply-To: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>

This commit consolidates the initialization and CPU-hotplug code at
the end of kernel/rcu/tree.c.  This is strictly a code-motion commit.
No functionality has changed.

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 314 +++++++++++++++++++++++-----------------------
 1 file changed, 158 insertions(+), 156 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cf34a961821ad..d3b082233b74b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -144,14 +144,16 @@ static int rcu_scheduler_fully_active __read_mostly;
 
 static void rcu_report_qs_rnp(unsigned long mask, struct rcu_node *rnp,
 			      unsigned long gps, unsigned long flags);
-static void rcu_init_new_rnp(struct rcu_node *rnp_leaf);
-static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf);
 static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu);
 static void invoke_rcu_core(void);
 static void rcu_report_exp_rdp(struct rcu_data *rdp);
 static void sync_sched_exp_online_cleanup(int cpu);
 static void check_cb_ovld_locked(struct rcu_data *rdp, struct rcu_node *rnp);
 static bool rcu_rdp_is_offloaded(struct rcu_data *rdp);
+static bool rcu_rdp_cpu_online(struct rcu_data *rdp);
+static bool rcu_init_invoked(void);
+static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf);
+static void rcu_init_new_rnp(struct rcu_node *rnp_leaf);
 
 /*
  * rcuc/rcub/rcuop kthread realtime priority. The "rcuop"
@@ -214,27 +216,6 @@ EXPORT_SYMBOL_GPL(rcu_get_gp_kthreads_prio);
  */
 #define PER_RCU_NODE_PERIOD 3	/* Number of grace periods between delays for debugging. */
 
-/*
- * Compute the mask of online CPUs for the specified rcu_node structure.
- * This will not be stable unless the rcu_node structure's ->lock is
- * held, but the bit corresponding to the current CPU will be stable
- * in most contexts.
- */
-static unsigned long rcu_rnp_online_cpus(struct rcu_node *rnp)
-{
-	return READ_ONCE(rnp->qsmaskinitnext);
-}
-
-/*
- * Is the CPU corresponding to the specified rcu_data structure online
- * from RCU's perspective?  This perspective is given by that structure's
- * ->qsmaskinitnext field rather than by the global cpu_online_mask.
- */
-static bool rcu_rdp_cpu_online(struct rcu_data *rdp)
-{
-	return !!(rdp->grpmask & rcu_rnp_online_cpus(rdp->mynode));
-}
-
 /*
  * Return true if an RCU grace period is in progress.  The READ_ONCE()s
  * permit this function to be invoked without holding the root rcu_node
@@ -734,46 +715,6 @@ void rcu_request_urgent_qs_task(struct task_struct *t)
 	smp_store_release(per_cpu_ptr(&rcu_data.rcu_urgent_qs, cpu), true);
 }
 
-#if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU)
-
-/*
- * Is the current CPU online as far as RCU is concerned?
- *
- * Disable preemption to avoid false positives that could otherwise
- * happen due to the current CPU number being sampled, this task being
- * preempted, its old CPU being taken offline, resuming on some other CPU,
- * then determining that its old CPU is now offline.
- *
- * Disable checking if in an NMI handler because we cannot safely
- * report errors from NMI handlers anyway.  In addition, it is OK to use
- * RCU on an offline processor during initial boot, hence the check for
- * rcu_scheduler_fully_active.
- */
-bool rcu_lockdep_current_cpu_online(void)
-{
-	struct rcu_data *rdp;
-	bool ret = false;
-
-	if (in_nmi() || !rcu_scheduler_fully_active)
-		return true;
-	preempt_disable_notrace();
-	rdp = this_cpu_ptr(&rcu_data);
-	/*
-	 * Strictly, we care here about the case where the current CPU is
-	 * in rcu_cpu_starting() and thus has an excuse for rdp->grpmask
-	 * not being up to date.  So arch_spin_is_locked() might have a
-	 * false positive if it's held by some *other* CPU, but that's
-	 * OK because that just means a false *negative* on the warning.
-	 */
-	if (rcu_rdp_cpu_online(rdp) || arch_spin_is_locked(&rcu_state.ofl_lock))
-		ret = true;
-	preempt_enable_notrace();
-	return ret;
-}
-EXPORT_SYMBOL_GPL(rcu_lockdep_current_cpu_online);
-
-#endif /* #if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU) */
-
 /*
  * When trying to report a quiescent state on behalf of some other CPU,
  * it is our responsibility to check for and handle potential overflow
@@ -1350,13 +1291,6 @@ static void rcu_strict_gp_boundary(void *unused)
 	invoke_rcu_core();
 }
 
-// Has rcu_init() been invoked?  This is used (for example) to determine
-// whether spinlocks may be acquired safely.
-static bool rcu_init_invoked(void)
-{
-	return !!rcu_state.n_online_cpus;
-}
-
 // Make the polled API aware of the beginning of a grace period.
 static void rcu_poll_gp_seq_start(unsigned long *snap)
 {
@@ -2091,92 +2025,6 @@ rcu_check_quiescent_state(struct rcu_data *rdp)
 	rcu_report_qs_rdp(rdp);
 }
 
-/*
- * Near the end of the offline process.  Trace the fact that this CPU
- * is going offline.
- */
-int rcutree_dying_cpu(unsigned int cpu)
-{
-	bool blkd;
-	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-	struct rcu_node *rnp = rdp->mynode;
-
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
-		return 0;
-
-	blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);
-	trace_rcu_grace_period(rcu_state.name, READ_ONCE(rnp->gp_seq),
-			       blkd ? TPS("cpuofl-bgp") : TPS("cpuofl"));
-	return 0;
-}
-
-/*
- * All CPUs for the specified rcu_node structure have gone offline,
- * and all tasks that were preempted within an RCU read-side critical
- * section while running on one of those CPUs have since exited their RCU
- * read-side critical section.  Some other CPU is reporting this fact with
- * the specified rcu_node structure's ->lock held and interrupts disabled.
- * This function therefore goes up the tree of rcu_node structures,
- * clearing the corresponding bits in the ->qsmaskinit fields.  Note that
- * the leaf rcu_node structure's ->qsmaskinit field has already been
- * updated.
- *
- * This function does check that the specified rcu_node structure has
- * all CPUs offline and no blocked tasks, so it is OK to invoke it
- * prematurely.  That said, invoking it after the fact will cost you
- * a needless lock acquisition.  So once it has done its work, don't
- * invoke it again.
- */
-static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
-{
-	long mask;
-	struct rcu_node *rnp = rnp_leaf;
-
-	raw_lockdep_assert_held_rcu_node(rnp_leaf);
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) ||
-	    WARN_ON_ONCE(rnp_leaf->qsmaskinit) ||
-	    WARN_ON_ONCE(rcu_preempt_has_tasks(rnp_leaf)))
-		return;
-	for (;;) {
-		mask = rnp->grpmask;
-		rnp = rnp->parent;
-		if (!rnp)
-			break;
-		raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
-		rnp->qsmaskinit &= ~mask;
-		/* Between grace periods, so better already be zero! */
-		WARN_ON_ONCE(rnp->qsmask);
-		if (rnp->qsmaskinit) {
-			raw_spin_unlock_rcu_node(rnp);
-			/* irqs remain disabled. */
-			return;
-		}
-		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
-	}
-}
-
-/*
- * The CPU has been completely removed, and some other CPU is reporting
- * this fact from process context.  Do the remainder of the cleanup.
- * There can only be one CPU hotplug operation at a time, so no need for
- * explicit locking.
- */
-int rcutree_dead_cpu(unsigned int cpu)
-{
-	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
-
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
-		return 0;
-
-	WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
-	/* Adjust any no-longer-needed kthreads. */
-	rcu_boost_kthread_setaffinity(rnp, -1);
-	// Stop-machine done, so allow nohz_full to disable tick.
-	tick_dep_clear(TICK_DEP_BIT_RCU);
-	return 0;
-}
-
 /*
  * Invoke any RCU callbacks that have made it to the end of their grace
  * period.  Throttle as specified by rdp->blimit.
@@ -4079,6 +3927,160 @@ void rcu_barrier(void)
 }
 EXPORT_SYMBOL_GPL(rcu_barrier);
 
+/*
+ * Compute the mask of online CPUs for the specified rcu_node structure.
+ * This will not be stable unless the rcu_node structure's ->lock is
+ * held, but the bit corresponding to the current CPU will be stable
+ * in most contexts.
+ */
+static unsigned long rcu_rnp_online_cpus(struct rcu_node *rnp)
+{
+	return READ_ONCE(rnp->qsmaskinitnext);
+}
+
+/*
+ * Is the CPU corresponding to the specified rcu_data structure online
+ * from RCU's perspective?  This perspective is given by that structure's
+ * ->qsmaskinitnext field rather than by the global cpu_online_mask.
+ */
+static bool rcu_rdp_cpu_online(struct rcu_data *rdp)
+{
+	return !!(rdp->grpmask & rcu_rnp_online_cpus(rdp->mynode));
+}
+
+#if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU)
+
+/*
+ * Is the current CPU online as far as RCU is concerned?
+ *
+ * Disable preemption to avoid false positives that could otherwise
+ * happen due to the current CPU number being sampled, this task being
+ * preempted, its old CPU being taken offline, resuming on some other CPU,
+ * then determining that its old CPU is now offline.
+ *
+ * Disable checking if in an NMI handler because we cannot safely
+ * report errors from NMI handlers anyway.  In addition, it is OK to use
+ * RCU on an offline processor during initial boot, hence the check for
+ * rcu_scheduler_fully_active.
+ */
+bool rcu_lockdep_current_cpu_online(void)
+{
+	struct rcu_data *rdp;
+	bool ret = false;
+
+	if (in_nmi() || !rcu_scheduler_fully_active)
+		return true;
+	preempt_disable_notrace();
+	rdp = this_cpu_ptr(&rcu_data);
+	/*
+	 * Strictly, we care here about the case where the current CPU is
+	 * in rcu_cpu_starting() and thus has an excuse for rdp->grpmask
+	 * not being up to date.  So arch_spin_is_locked() might have a
+	 * false positive if it's held by some *other* CPU, but that's
+	 * OK because that just means a false *negative* on the warning.
+	 */
+	if (rcu_rdp_cpu_online(rdp) || arch_spin_is_locked(&rcu_state.ofl_lock))
+		ret = true;
+	preempt_enable_notrace();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(rcu_lockdep_current_cpu_online);
+
+#endif /* #if defined(CONFIG_PROVE_RCU) && defined(CONFIG_HOTPLUG_CPU) */
+
+// Has rcu_init() been invoked?  This is used (for example) to determine
+// whether spinlocks may be acquired safely.
+static bool rcu_init_invoked(void)
+{
+	return !!rcu_state.n_online_cpus;
+}
+
+/*
+ * Near the end of the offline process.  Trace the fact that this CPU
+ * is going offline.
+ */
+int rcutree_dying_cpu(unsigned int cpu)
+{
+	bool blkd;
+	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+	struct rcu_node *rnp = rdp->mynode;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
+		return 0;
+
+	blkd = !!(READ_ONCE(rnp->qsmask) & rdp->grpmask);
+	trace_rcu_grace_period(rcu_state.name, READ_ONCE(rnp->gp_seq),
+			       blkd ? TPS("cpuofl-bgp") : TPS("cpuofl"));
+	return 0;
+}
+
+/*
+ * All CPUs for the specified rcu_node structure have gone offline,
+ * and all tasks that were preempted within an RCU read-side critical
+ * section while running on one of those CPUs have since exited their RCU
+ * read-side critical section.  Some other CPU is reporting this fact with
+ * the specified rcu_node structure's ->lock held and interrupts disabled.
+ * This function therefore goes up the tree of rcu_node structures,
+ * clearing the corresponding bits in the ->qsmaskinit fields.  Note that
+ * the leaf rcu_node structure's ->qsmaskinit field has already been
+ * updated.
+ *
+ * This function does check that the specified rcu_node structure has
+ * all CPUs offline and no blocked tasks, so it is OK to invoke it
+ * prematurely.  That said, invoking it after the fact will cost you
+ * a needless lock acquisition.  So once it has done its work, don't
+ * invoke it again.
+ */
+static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
+{
+	long mask;
+	struct rcu_node *rnp = rnp_leaf;
+
+	raw_lockdep_assert_held_rcu_node(rnp_leaf);
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) ||
+	    WARN_ON_ONCE(rnp_leaf->qsmaskinit) ||
+	    WARN_ON_ONCE(rcu_preempt_has_tasks(rnp_leaf)))
+		return;
+	for (;;) {
+		mask = rnp->grpmask;
+		rnp = rnp->parent;
+		if (!rnp)
+			break;
+		raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
+		rnp->qsmaskinit &= ~mask;
+		/* Between grace periods, so better already be zero! */
+		WARN_ON_ONCE(rnp->qsmask);
+		if (rnp->qsmaskinit) {
+			raw_spin_unlock_rcu_node(rnp);
+			/* irqs remain disabled. */
+			return;
+		}
+		raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
+	}
+}
+
+/*
+ * The CPU has been completely removed, and some other CPU is reporting
+ * this fact from process context.  Do the remainder of the cleanup.
+ * There can only be one CPU hotplug operation at a time, so no need for
+ * explicit locking.
+ */
+int rcutree_dead_cpu(unsigned int cpu)
+{
+	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
+		return 0;
+
+	WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
+	/* Adjust any no-longer-needed kthreads. */
+	rcu_boost_kthread_setaffinity(rnp, -1);
+	// Stop-machine done, so allow nohz_full to disable tick.
+	tick_dep_clear(TICK_DEP_BIT_RCU);
+	return 0;
+}
+
 /*
  * Propagate ->qsinitmask bits up the rcu_node tree to account for the
  * first CPU in a given leaf rcu_node structure coming online.  The caller
From patchwork Thu Jan 5 00:22:58 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13089235
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 03/10] rcu: Throttle callback invocation based on number of ready callbacks
Date: Wed, 4 Jan 2023 16:22:58 -0800
Message-Id: <20230105002305.1768591-3-paulmck@kernel.org>
In-Reply-To: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>

Currently, rcu_do_batch() sizes its batches based on the total number
of callbacks in the callback list.  This can result in some strange
choices: for example, if there were 12,800 callbacks in the list but
only 200 were ready to invoke, RCU would invoke 100 at a time (12,800
shifted down by seven bits).  A more measured approach would use the
number that were actually ready to invoke, an approach that has become
feasible only recently given the per-segment ->seglen counts in
->cblist.

This commit therefore bases the batch limit on the number of callbacks
ready to invoke instead of on the total number of callbacks.

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/rcu_segcblist.c | 2 +-
 kernel/rcu/rcu_segcblist.h | 2 ++
 kernel/rcu/tree.c          | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index c54ea2b6a36bc..f71fac422c8f6 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -89,7 +89,7 @@ static void rcu_segcblist_set_len(struct rcu_segcblist *rsclp, long v)
 }
 
 /* Get the length of a segment of the rcu_segcblist structure. */
-static long rcu_segcblist_get_seglen(struct rcu_segcblist *rsclp, int seg)
+long rcu_segcblist_get_seglen(struct rcu_segcblist *rsclp, int seg)
 {
 	return READ_ONCE(rsclp->seglen[seg]);
 }
diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
index 431cee212467d..4fe877f5f6540 100644
--- a/kernel/rcu/rcu_segcblist.h
+++ b/kernel/rcu/rcu_segcblist.h
@@ -15,6 +15,8 @@ static inline long rcu_cblist_n_cbs(struct rcu_cblist *rclp)
 	return READ_ONCE(rclp->len);
 }
 
+long rcu_segcblist_get_seglen(struct rcu_segcblist *rsclp, int seg);
+
 /* Return number of callbacks in segmented callback list by summing seglen. */
 long rcu_segcblist_n_segment_cbs(struct rcu_segcblist *rsclp);
 
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d3b082233b74b..7d3a59d4f37ef 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2057,7 +2057,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	 */
 	rcu_nocb_lock_irqsave(rdp, flags);
 	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
-	pending = rcu_segcblist_n_cbs(&rdp->cblist);
+	pending = rcu_segcblist_get_seglen(&rdp->cblist, RCU_DONE_TAIL);
 	div = READ_ONCE(rcu_divisor);
 	div = div < 0 ? 7 : div > sizeof(long) * 8 - 2 ? sizeof(long) * 8 - 2 : div;
 	bl = max(rdp->blimit, pending >> div);
From patchwork Thu Jan 5 00:22:59 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13089232
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 04/10] rcu: Upgrade header comment for poll_state_synchronize_rcu()
Date: Wed, 4 Jan 2023 16:22:59 -0800
Message-Id: <20230105002305.1768591-4-paulmck@kernel.org>
In-Reply-To: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1>

This commit emphasizes the possibility of concurrent calls to
synchronize_rcu() and synchronize_rcu_expedited() causing one or the
other of the two grace periods to be lost from the viewpoint of
poll_state_synchronize_rcu().  If you cannot afford to lose grace
periods this way, you should instead use the _full() variants of the
polled RCU API, for example, poll_state_synchronize_rcu_full().

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 7d3a59d4f37ef..0147e69ea85a9 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3559,7 +3559,9 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_full);
 * If @false is returned, it is the caller's responsibility to invoke this
 * function later on until it does return @true.  Alternatively, the caller
 * can explicitly wait for a grace period, for example, by passing @oldstate
- * to cond_synchronize_rcu() or by directly invoking synchronize_rcu().
+ * to either cond_synchronize_rcu() or cond_synchronize_rcu_expedited()
+ * on the one hand or by directly invoking either synchronize_rcu() or
+ * synchronize_rcu_expedited() on the other.
 *
 * Yes, this function does not take counter wrap into account.
 * But counter wrap is harmless.  If the counter wraps, we have waited for
@@ -3570,6 +3572,12 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_rcu_full);
 * completed.  Alternatively, they can use get_completed_synchronize_rcu()
 * to get a guaranteed-completed grace-period state.
 *
+ * In addition, because oldstate compresses the grace-period state for
+ * both normal and expedited grace periods into a single unsigned long,
+ * it can miss a grace period when synchronize_rcu() runs concurrently
+ * with synchronize_rcu_expedited().  If this is unacceptable, please
+ * instead use the _full() variant of these polling APIs.
+ *
 * This function provides the same memory-ordering guarantees that
 * would be provided by a synchronize_rcu() that was invoked at the call
 * to the function that provided @oldstate, and that returned at the end
ams.source.kernel.org (Postfix) with ESMTPS id 3C677B81983; Thu, 5 Jan 2023 00:23:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3BD8C43392; Thu, 5 Jan 2023 00:23:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1672878187; bh=hrYxt2M86I+YLxkqBWLEpmhGnP0U4qusfW3DVwJ922w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lO+DcO1DQJWobdC4TYtmZNiT4JyEMz7vzZ2AszirxPy8b3kCcyCawRA1PIcMj+mvo dsEttD17K3MEfR5RbFKzJAOXwCtFQeWRE9H9SQSiK+7E69bJolQxvUoX6La7NVx88E vgIJ53KJz/DFMFbSd+QbAUWujYZDqGr4oKmtBDyXoX0Ma1V7n/scnXiYsSu1ndyEBc uGqVzi9+Qwnx0cCJ2fXKlM41Hii2qNSKmCkaCE21nUsVUrIuWYwTxZMfuiJwrEssiv a3tCWnViVumHh4vfJpmZ+mAOQXumqEnmtK9MQc3Vw3f6BErUtyufnjg9Zm7m5+qQYc ACW0BZvV+6vAg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 7BB955C149B; Wed, 4 Jan 2023 16:23:07 -0800 (PST) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Masami Hiramatsu , Mathieu Desnoyers , Boqun Feng , Matthew Wilcox , Thomas Gleixner Subject: [PATCH rcu 05/10] rcu: Make RCU_LOCKDEP_WARN() avoid early lockdep checks Date: Wed, 4 Jan 2023 16:23:00 -0800 Message-Id: <20230105002305.1768591-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1> References: <20230105002257.GA1768487@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Currently, RCU_LOCKDEP_WARN() checks the condition before checking to see if lockdep is still enabled. This is necessary to avoid the false-positive splats fixed by commit 3066820034b5dd ("rcu: Reject RCU_LOCKDEP_WARN() false positives"). However, the current state can result in false-positive splats during early boot before lockdep is fully initialized. 
This commit therefore checks debug_lockdep_rcu_enabled() both before and after checking the condition, thus avoiding both sets of false-positive error reports.

Reported-by: Steven Rostedt
Reported-by: Masami Hiramatsu (Google)
Reported-by: Mathieu Desnoyers
Signed-off-by: Paul E. McKenney
Reviewed-by: Mathieu Desnoyers
Cc: Boqun Feng
Cc: Matthew Wilcox
Cc: Thomas Gleixner
---
 include/linux/rcupdate.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 03abf883a281b..aa86de01aab6d 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -374,11 +374,18 @@ static inline int debug_lockdep_rcu_enabled(void)
 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
 * @c: condition to check
 * @s: informative message
+ *
+ * This checks debug_lockdep_rcu_enabled() before checking (c) to
+ * prevent early boot splats due to lockdep not yet being initialized,
+ * and rechecks it after checking (c) to prevent false-positive splats
+ * due to races with lockdep being disabled.  See commit 3066820034b5dd
+ * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
 */
 #define RCU_LOCKDEP_WARN(c, s)						\
 	do {								\
 		static bool __section(".data.unlikely") __warned;	\
-		if ((c) && debug_lockdep_rcu_enabled() && !__warned) {	\
+		if (debug_lockdep_rcu_enabled() && (c) &&		\
+		    debug_lockdep_rcu_enabled() && !__warned) {		\
 			__warned = true;				\
 			lockdep_rcu_suspicious(__FILE__, __LINE__, s);	\
 		}							\
 	} while (0)

From patchwork Thu Jan 5 00:23:01 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 06/10] rcu: Suppress smp_processor_id() complaint in synchronize_rcu_expedited_wait()
Date: Wed, 4 Jan 2023 16:23:01 -0800
Message-Id: <20230105002305.1768591-6-paulmck@kernel.org>

The normal grace period's RCU CPU stall warnings are invoked from the scheduling-clock interrupt handler, and can thus invoke smp_processor_id() with impunity, which allows them to directly invoke dump_cpu_task().  In contrast, the expedited grace period's RCU CPU stall warnings are invoked from process context, which causes the dump_cpu_task() function's calls to smp_processor_id() to complain bitterly in debug kernels.

This commit therefore causes synchronize_rcu_expedited_wait() to disable preemption around its call to dump_cpu_task().

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree_exp.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index ed6c3cce28f23..927abaf6c822e 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -667,7 +667,9 @@ static void synchronize_rcu_expedited_wait(void)
 			mask = leaf_node_cpu_bit(rnp, cpu);
 			if (!(READ_ONCE(rnp->expmask) & mask))
 				continue;
+			preempt_disable(); // For smp_processor_id() in dump_cpu_task().
 			dump_cpu_task(cpu);
+			preempt_enable();
 		}
 	}
 	jiffies_stall = 3 * rcu_exp_jiffies_till_stall_check() + 3;
From patchwork Thu Jan 5 00:23:02 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, Zqiang, "Paul E. McKenney"
Subject: [PATCH rcu 07/10] rcu: Make rcu_blocking_is_gp() stop early-boot might_sleep()
Date: Wed, 4 Jan 2023 16:23:02 -0800
Message-Id: <20230105002305.1768591-7-paulmck@kernel.org>

From: Zqiang

Currently, rcu_blocking_is_gp() invokes might_sleep() even during early boot when interrupts are disabled and before the scheduler is scheduling.  This is at best an accident waiting to happen.  Therefore, this commit moves that might_sleep() under an rcu_scheduler_active check in order to ensure that might_sleep() is not invoked unless sleeping might actually happen.

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 0147e69ea85a9..15f9765064727 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3333,9 +3333,10 @@ void __init kfree_rcu_scheduler_running(void)
  */
 static int rcu_blocking_is_gp(void)
 {
-	if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE)
+	if (rcu_scheduler_active != RCU_SCHEDULER_INACTIVE) {
+		might_sleep();
 		return false;
-	might_sleep();  /* Check for RCU read-side critical section. */
+	}
 	return true;
 }
 
From patchwork Thu Jan 5 00:23:03 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney"
Subject: [PATCH rcu 08/10] rcu: Test synchronous RCU grace periods at the end of rcu_init()
Date: Wed, 4 Jan 2023 16:23:03 -0800
Message-Id: <20230105002305.1768591-8-paulmck@kernel.org>

This commit tests synchronize_rcu() and synchronize_rcu_expedited() at the end of rcu_init(), in addition to the test already at the beginning of that function.  These tests are run only in kernels built with CONFIG_PROVE_RCU=y.

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c   | 2 ++
 kernel/rcu/update.c | 1 +
 2 files changed, 3 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 15f9765064727..80b84ae285b41 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4849,6 +4849,8 @@ void __init rcu_init(void)
 	// Kick-start any polled grace periods that started early.
 	if (!(per_cpu_ptr(&rcu_data, cpu)->mynode->exp_seq_poll_rq & 0x1))
 		(void)start_poll_synchronize_rcu_expedited();
+
+	rcu_test_sync_prims();
 }
 
 #include "tree_stall.h"

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index f5e6a2f95a2a0..587b97c401914 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -220,6 +220,7 @@ void rcu_test_sync_prims(void)
 {
 	if (!IS_ENABLED(CONFIG_PROVE_RCU))
 		return;
+	pr_info("Running RCU synchronous self tests\n");
 	synchronize_rcu();
 	synchronize_rcu_expedited();
 }

From patchwork Thu Jan 5 00:23:04 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney", David Howells
Subject: [PATCH rcu 09/10] rcu: Allow expedited RCU CPU stall warnings to dump task stacks
Date: Wed, 4 Jan 2023 16:23:04 -0800
Message-Id: <20230105002305.1768591-9-paulmck@kernel.org>

This commit introduces the rcupdate.rcu_exp_stall_task_details kernel boot parameter, which causes expedited RCU CPU stall warnings to dump the stacks of any tasks blocking the current expedited grace period.

Reported-by: David Howells
Signed-off-by: Paul E. McKenney
---
 .../admin-guide/kernel-parameters.txt |  5 +++
 kernel/rcu/rcu.h                      |  1 +
 kernel/rcu/tree_exp.h                 | 41 +++++++++++++++++++
 kernel/rcu/update.c                   |  2 +
 4 files changed, 49 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6cfa6e3996cf7..aa453f9202d89 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5113,6 +5113,11 @@
 			rcupdate.rcu_cpu_stall_timeout to be used (after
 			conversion from seconds to milliseconds).
 
+	rcupdate.rcu_exp_stall_task_details= [KNL]
+			Print stack dumps of any tasks blocking the
+			current expedited RCU grace period during an
+			expedited RCU CPU stall warning.
+
 	rcupdate.rcu_expedited= [KNL]
 			Use expedited grace-period primitives, for
 			example, synchronize_rcu_expedited() instead

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index c5aa934de59b0..fa640c45172e9 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -224,6 +224,7 @@ extern int rcu_cpu_stall_ftrace_dump;
 extern int rcu_cpu_stall_suppress;
 extern int rcu_cpu_stall_timeout;
 extern int rcu_exp_cpu_stall_timeout;
+extern bool rcu_exp_stall_task_details __read_mostly;
 int rcu_jiffies_till_stall_check(void);
 int rcu_exp_jiffies_till_stall_check(void);

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 927abaf6c822e..249c2967d9e6c 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -11,6 +11,7 @@
 static void rcu_exp_handler(void *unused);
 static int rcu_print_task_exp_stall(struct rcu_node *rnp);
+static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp);
 
 /*
  * Record the start of an expedited grace period.
@@ -671,6 +672,7 @@ static void synchronize_rcu_expedited_wait(void)
 			dump_cpu_task(cpu);
 			preempt_enable();
 		}
+		rcu_exp_print_detail_task_stall_rnp(rnp);
 	}
 	jiffies_stall = 3 * rcu_exp_jiffies_till_stall_check() + 3;
 	panic_on_rcu_stall();
@@ -813,6 +815,36 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
 	return ndetected;
 }
 
+/*
+ * Scan the current list of tasks blocked within RCU read-side critical
+ * sections, dumping the stack of each that is blocking the current
+ * expedited grace period.
+ */
+static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp)
+{
+	unsigned long flags;
+	struct task_struct *t;
+
+	if (!rcu_exp_stall_task_details)
+		return;
+	raw_spin_lock_irqsave_rcu_node(rnp, flags);
+	if (!READ_ONCE(rnp->exp_tasks)) {
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+		return;
+	}
+	t = list_entry(rnp->exp_tasks->prev,
+		       struct task_struct, rcu_node_entry);
+	list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
+		/*
+		 * We could be printing a lot while holding a spinlock.
+		 * Avoid triggering hard lockup.
+		 */
+		touch_nmi_watchdog();
+		sched_show_task(t);
+	}
+	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+}
+
 #else /* #ifdef CONFIG_PREEMPT_RCU */
 
 /* Request an expedited quiescent state. */
@@ -885,6 +917,15 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
 	return 0;
 }
 
+/*
+ * Because preemptible RCU does not exist, we never have to print out
+ * tasks blocked within RCU read-side critical sections that are blocking
+ * the current expedited grace period.
+ */
+static void rcu_exp_print_detail_task_stall_rnp(struct rcu_node *rnp)
+{
+}
+
 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 587b97c401914..6ed5020aee6d1 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -509,6 +509,8 @@ int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
 module_param(rcu_cpu_stall_timeout, int, 0644);
 int rcu_exp_cpu_stall_timeout __read_mostly = CONFIG_RCU_EXP_CPU_STALL_TIMEOUT;
 module_param(rcu_exp_cpu_stall_timeout, int, 0644);
+bool rcu_exp_stall_task_details __read_mostly;
+module_param(rcu_exp_stall_task_details, bool, 0644);
 
 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */

From patchwork Thu Jan 5 00:23:05 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, Zqiang, "Paul E. McKenney"
Subject: [PATCH rcu 10/10] rcu: Remove redundant call to rcu_boost_kthread_setaffinity()
Date: Wed, 4 Jan 2023 16:23:05 -0800
Message-Id: <20230105002305.1768591-10-paulmck@kernel.org>

From: Zqiang

The rcu_boost_kthread_setaffinity() function is invoked at rcutree_online_cpu() and rcutree_offline_cpu() time, early in the online timeline and late in the offline timeline, respectively.  It is also invoked from rcutree_dead_cpu(); however, in the absence of userspace manipulations (for which userspace must take responsibility), this call is redundant with that from rcutree_offline_cpu().  This redundancy can be demonstrated by printing out the relevant cpumasks.

This commit therefore removes the call to rcu_boost_kthread_setaffinity() from rcutree_dead_cpu().

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 80b84ae285b41..89313c7c17b62 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4076,15 +4076,10 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
  */
 int rcutree_dead_cpu(unsigned int cpu)
 {
-	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
-	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
-
 	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
 		return 0;
 
 	WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
-	/* Adjust any no-longer-needed kthreads. */
-	rcu_boost_kthread_setaffinity(rnp, -1);
 	// Stop-machine done, so allow nohz_full to disable tick.
 	tick_dep_clear(TICK_DEP_BIT_RCU);
 	return 0;