From patchwork Mon Jul 17 18:04:50 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    "Paul E. McKenney", Alexei Starovoitov, Martin KaFai Lau
Subject: [PATCH rcu 1/5] rcu-tasks: Treat only synchronous grace periods urgently
Date: Mon, 17 Jul 2023 11:04:50 -0700
Message-Id: <20230717180454.1097714-1-paulmck@kernel.org>

The performance requirements on RCU Tasks, and in particular on RCU
Tasks Trace, have evolved over time as the workloads have evolved.
The current implementation is designed to provide low grace-period
latencies, and also to accommodate short-duration floods of callbacks.

However, current workloads can also provide a constant background
callback-queuing rate of a few hundred call_rcu_tasks_trace()
invocations per second.  This results in continuous back-to-back RCU
Tasks Trace grace periods, which in turn can consume the better part
of 10% of a CPU.  One could take the attitude that there are several
tens of other CPUs on the systems running such workloads, but energy
efficiency is a thing.  On these systems, although asynchronous
grace-period requests happen every few milliseconds, synchronous
grace-period requests are quite rare.

This commit therefore arranges for grace periods to be initiated
immediately in response to calls to synchronize_rcu_tasks*() and also
to calls to synchronize_rcu_mult() that are passed one of the
call_rcu_tasks*() functions.  These are recognized by the tell-tale
wakeme_after_rcu callback function.  In other cases, callbacks are
gathered up for up to about 250 milliseconds before a grace period is
initiated.  This results in more than an order of magnitude reduction
in the number of RCU Tasks Trace grace periods, with a corresponding
reduction in consumption of CPU time.

Reported-by: Alexei Starovoitov
Reported-by: Martin KaFai Lau
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tasks.h | 81 +++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 73 insertions(+), 8 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index b770add3f843..c88d7fe29d92 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -25,6 +25,8 @@ typedef void (*postgp_func_t)(struct rcu_tasks *rtp);
  * @cblist: Callback list.
  * @lock: Lock protecting per-CPU callback list.
  * @rtp_jiffies: Jiffies counter value for statistics.
+ * @lazy_timer: Timer to unlazify callbacks.
+ * @urgent_gp: Number of additional non-lazy grace periods.
  * @rtp_n_lock_retries: Rough lock-contention statistic.
  * @rtp_work: Work queue for invoking callbacks.
  * @rtp_irq_work: IRQ work queue for deferred wakeups.
@@ -38,6 +40,8 @@ struct rcu_tasks_percpu {
         raw_spinlock_t __private lock;
         unsigned long rtp_jiffies;
         unsigned long rtp_n_lock_retries;
+        struct timer_list lazy_timer;
+        unsigned int urgent_gp;
         struct work_struct rtp_work;
         struct irq_work rtp_irq_work;
         struct rcu_head barrier_q_head;
@@ -51,7 +55,6 @@ struct rcu_tasks_percpu {
  * @cbs_wait: RCU wait allowing a new callback to get kthread's attention.
  * @cbs_gbl_lock: Lock protecting callback list.
  * @tasks_gp_mutex: Mutex protecting grace period, needed during mid-boot dead zone.
- * @kthread_ptr: This flavor's grace-period/callback-invocation kthread.
  * @gp_func: This flavor's grace-period-wait function.
  * @gp_state: Grace period's most recent state transition (debugging).
  * @gp_sleep: Per-grace-period sleep to prevent CPU-bound looping.
@@ -61,6 +64,8 @@ struct rcu_tasks_percpu {
  * @tasks_gp_seq: Number of grace periods completed since boot.
  * @n_ipis: Number of IPIs sent to encourage grace periods to end.
  * @n_ipis_fails: Number of IPI-send failures.
+ * @kthread_ptr: This flavor's grace-period/callback-invocation kthread.
+ * @lazy_jiffies: Number of jiffies to allow callbacks to be lazy.
  * @pregp_func: This flavor's pre-grace-period function (optional).
  * @pertask_func: This flavor's per-task scan function (optional).
  * @postscan_func: This flavor's post-task scan function (optional).
@@ -92,6 +97,7 @@ struct rcu_tasks {
         unsigned long n_ipis;
         unsigned long n_ipis_fails;
         struct task_struct *kthread_ptr;
+        unsigned long lazy_jiffies;
         rcu_tasks_gp_func_t gp_func;
         pregp_func_t pregp_func;
         pertask_func_t pertask_func;
@@ -127,6 +133,7 @@ static struct rcu_tasks rt_name =                                        \
         .gp_func = gp,                                                          \
         .call_func = call,                                                      \
         .rtpcpu = &rt_name ## __percpu,                                         \
+        .lazy_jiffies = DIV_ROUND_UP(HZ, 4),                                    \
         .name = n,                                                              \
         .percpu_enqueue_shift = order_base_2(CONFIG_NR_CPUS),                   \
         .percpu_enqueue_lim = 1,                                                \
@@ -276,6 +283,33 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
                 data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust);
 }
 
+// Compute wakeup time for lazy callback timer.
+static unsigned long rcu_tasks_lazy_time(struct rcu_tasks *rtp)
+{
+        return jiffies + rtp->lazy_jiffies;
+}
+
+// Timer handler that unlazifies lazy callbacks.
+static void call_rcu_tasks_generic_timer(struct timer_list *tlp)
+{
+        unsigned long flags;
+        bool needwake = false;
+        struct rcu_tasks *rtp;
+        struct rcu_tasks_percpu *rtpcp = from_timer(rtpcp, tlp, lazy_timer);
+
+        rtp = rtpcp->rtpp;
+        raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+        if (!rcu_segcblist_empty(&rtpcp->cblist) && rtp->lazy_jiffies) {
+                if (!rtpcp->urgent_gp)
+                        rtpcp->urgent_gp = 1;
+                needwake = true;
+                mod_timer(&rtpcp->lazy_timer, rcu_tasks_lazy_time(rtp));
+        }
+        raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+        if (needwake)
+                rcuwait_wake_up(&rtp->cbs_wait);
+}
+
 // IRQ-work handler that does deferred wakeup for call_rcu_tasks_generic().
 static void call_rcu_tasks_iw_wakeup(struct irq_work *iwp)
 {
@@ -292,6 +326,7 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
 {
         int chosen_cpu;
         unsigned long flags;
+        bool havekthread = smp_load_acquire(&rtp->kthread_ptr);
         int ideal_cpu;
         unsigned long j;
         bool needadjust = false;
@@ -321,7 +356,15 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
                 cblist_init_generic(rtp);
                 raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
         }
-        needwake = rcu_segcblist_empty(&rtpcp->cblist);
+        needwake = func == wakeme_after_rcu;
+        if (havekthread && !timer_pending(&rtpcp->lazy_timer)) {
+                if (rtp->lazy_jiffies)
+                        mod_timer(&rtpcp->lazy_timer, rcu_tasks_lazy_time(rtp));
+                else
+                        needwake = rcu_segcblist_empty(&rtpcp->cblist);
+        }
+        if (needwake)
+                rtpcp->urgent_gp = 3;
         rcu_segcblist_enqueue(&rtpcp->cblist, rhp);
         raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
         if (unlikely(needadjust)) {
@@ -415,9 +458,14 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
                 }
                 rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
                 (void)rcu_segcblist_accelerate(&rtpcp->cblist, rcu_seq_snap(&rtp->tasks_gp_seq));
-                if (rcu_segcblist_pend_cbs(&rtpcp->cblist))
+                if (rtpcp->urgent_gp > 0 && rcu_segcblist_pend_cbs(&rtpcp->cblist)) {
+                        if (rtp->lazy_jiffies)
+                                rtpcp->urgent_gp--;
                         needgpcb |= 0x3;
-                if (!rcu_segcblist_empty(&rtpcp->cblist))
+                } else if (rcu_segcblist_empty(&rtpcp->cblist)) {
+                        rtpcp->urgent_gp = 0;
+                }
+                if (rcu_segcblist_ready_cbs(&rtpcp->cblist))
                         needgpcb |= 0x1;
                 raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
         }
@@ -549,11 +597,19 @@ static void rcu_tasks_one_gp(struct rcu_tasks *rtp, bool midboot)
 // RCU-tasks kthread that detects grace periods and invokes callbacks.
 static int __noreturn rcu_tasks_kthread(void *arg)
 {
+        int cpu;
         struct rcu_tasks *rtp = arg;
 
+        for_each_possible_cpu(cpu) {
+                struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+
+                timer_setup(&rtpcp->lazy_timer, call_rcu_tasks_generic_timer, 0);
+                rtpcp->urgent_gp = 1;
+        }
+
         /* Run on housekeeping CPUs by default.  Sysadm can move if desired. */
         housekeeping_affine(current, HK_TYPE_RCU);
-        WRITE_ONCE(rtp->kthread_ptr, current); // Let GPs start!
+        smp_store_release(&rtp->kthread_ptr, current); // Let GPs start!
 
         /*
          * Each pass through the following loop makes one check for
@@ -635,16 +691,22 @@ static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
 {
         int cpu;
         bool havecbs = false;
+        bool haveurgent = false;
+        bool haveurgentcbs = false;
 
         for_each_possible_cpu(cpu) {
                 struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
 
-                if (!data_race(rcu_segcblist_empty(&rtpcp->cblist))) {
+                if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)))
                         havecbs = true;
+                if (data_race(rtpcp->urgent_gp))
+                        haveurgent = true;
+                if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)) && data_race(rtpcp->urgent_gp))
+                        haveurgentcbs = true;
+                if (havecbs && haveurgent && haveurgentcbs)
                         break;
-                }
         }
-        pr_info("%s: %s(%d) since %lu g:%lu i:%lu/%lu %c%c %s\n",
+        pr_info("%s: %s(%d) since %lu g:%lu i:%lu/%lu %c%c%c%c l:%lu %s\n",
                 rtp->kname,
                 tasks_gp_state_getname(rtp), data_race(rtp->gp_state),
                 jiffies - data_race(rtp->gp_jiffies),
@@ -652,6 +714,9 @@ static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
                 data_race(rtp->n_ipis_fails), data_race(rtp->n_ipis),
                 ".k"[!!data_race(rtp->kthread_ptr)],
                 ".C"[havecbs],
+                ".u"[haveurgent],
+                ".U"[haveurgentcbs],
+                rtp->lazy_jiffies,
                 s);
 }
 #endif // #ifndef CONFIG_TINY_RCU
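
The tell-tale works because synchronous waiters are built on top of the
asynchronous callback API.  A simplified sketch of that path, adapted
from the wait_rcu_gp() machinery in kernel/rcu/update.c (stack-debugging
details and the multi-flavor plumbing are elided here):

        struct rcu_synchronize {
                struct rcu_head head;
                struct completion completion;
        };

        // Callback queued on behalf of every synchronous waiter, and
        // therefore a reliable marker of an urgent grace-period request.
        void wakeme_after_rcu(struct rcu_head *head)
        {
                struct rcu_synchronize *rcu;

                rcu = container_of(head, struct rcu_synchronize, head);
                complete(&rcu->completion);
        }

        // Hypothetical stand-alone equivalent of synchronize_rcu_tasks().
        static void synchronize_rcu_tasks_sketch(void)
        {
                struct rcu_synchronize rcu;

                init_completion(&rcu.completion);
                call_rcu_tasks(&rcu.head, wakeme_after_rcu); // Seen as urgent.
                wait_for_completion(&rcu.completion);
        }

Because the callback function is passed straight through to
call_rcu_tasks_generic(), comparing func against wakeme_after_rcu
distinguishes synchronous from asynchronous requests without any change
to the callers.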

From patchwork Mon Jul 17 18:04:51 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    "Paul E. McKenney"
Subject: [PATCH rcu 2/5] rcu-tasks: Remove redundant #ifdef CONFIG_TASKS_RCU
Date: Mon, 17 Jul 2023 11:04:51 -0700
Message-Id: <20230717180454.1097714-2-paulmck@kernel.org>

The kernel/rcu/tasks.h file has a #endif immediately followed by an
#ifdef for the very same CONFIG_TASKS_RCU Kconfig option, as can be
seen in the diff below.  This commit therefore removes the redundant
#endif/#ifdef pair, merging the two conditional regions into one.

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tasks.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index c88d7fe29d92..7eae67fbe47c 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -146,9 +146,7 @@ static struct rcu_tasks rt_name =                                        \
 #ifdef CONFIG_TASKS_RCU
 /* Track exiting tasks in order to allow them to be waited for. */
 DEFINE_STATIC_SRCU(tasks_rcu_exit_srcu);
-#endif
 
-#ifdef CONFIG_TASKS_RCU
 /* Report delay in synchronize_srcu() completion in rcu_tasks_postscan(). */
 static void tasks_rcu_exit_srcu_stall(struct timer_list *unused);
 static DEFINE_TIMER(tasks_rcu_exit_srcu_stall_timer, tasks_rcu_exit_srcu_stall);
McKenney" Subject: [PATCH rcu 2/5] rcu-tasks: Remove redundant #ifdef CONFIG_TASKS_RCU Date: Mon, 17 Jul 2023 11:04:51 -0700 Message-Id: <20230717180454.1097714-2-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The kernel/rcu/tasks.h file has a #endif immediately followed by an Signed-off-by: Paul E. McKenney --- kernel/rcu/tasks.h | 2 -- 1 file changed, 2 deletions(-) diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index c88d7fe29d92..7eae67fbe47c 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -146,9 +146,7 @@ static struct rcu_tasks rt_name = \ #ifdef CONFIG_TASKS_RCU /* Track exiting tasks in order to allow them to be waited for. */ DEFINE_STATIC_SRCU(tasks_rcu_exit_srcu); -#endif -#ifdef CONFIG_TASKS_RCU /* Report delay in synchronize_srcu() completion in rcu_tasks_postscan(). */ static void tasks_rcu_exit_srcu_stall(struct timer_list *unused); static DEFINE_TIMER(tasks_rcu_exit_srcu_stall_timer, tasks_rcu_exit_srcu_stall); From patchwork Mon Jul 17 18:04:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13316165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 009E0C0015E for ; Mon, 17 Jul 2023 18:05:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231364AbjGQSFX (ORCPT ); Mon, 17 Jul 2023 14:05:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35694 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229995AbjGQSFJ (ORCPT ); Mon, 17 Jul 2023 14:05:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AAA601BD4; Mon, 17 Jul 2023 11:04:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2DD3F611CF; Mon, 17 Jul 2023 18:04:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 92600C433C9; Mon, 17 Jul 2023 18:04:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1689617095; bh=a+qFf3w4u9E73iijzqt1C1Oe4CG3736gzxYOpqJxCPM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fqcZyjPf+/n6FWpk64kKMskETZOpf5iI5GKH2VZjLrg4eIS4o2vZqbv4Y7i2bKOBu Ny8uU1JsVN110wxBh6evNZLPYgDnUhk4TWfh9Xr89UIwI0WVQPn1J12SAqzKdDPvlP 3zSq0o4dqlwg60z7+MRXZ8YUtbEA2FMY9Uzqb7YHLfMEKwQ+t7whihM73pC8Ng264q QAsPyFe8nRnCcFhsikv08rGsmAgJn9ox/+NcNUxIjZwLQFscXG0oVf0QfqoPmlstgX 1gN1ZGnNQ07lpEWbBAXcaUhjIbz/+vI+FARugBDWgOkjScZTunBtAN+mhdNwTHbPNq mNvNP6PHAaSSQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 59BB8CE0806; Mon, 17 Jul 2023 11:04:55 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. 
McKenney" Subject: [PATCH rcu 3/5] rcu-tasks: Add kernel boot parameters for callback laziness Date: Mon, 17 Jul 2023 11:04:52 -0700 Message-Id: <20230717180454.1097714-3-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit adds kernel boot parameters for callback laziness, allowing the RCU Tasks flavors to be individually adjusted. Signed-off-by: Paul E. McKenney --- .../admin-guide/kernel-parameters.txt | 23 +++++++++++++++++++ kernel/rcu/tasks.h | 15 ++++++++++++ 2 files changed, 38 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index a1457995fd41..a0f427e1a562 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5293,6 +5293,29 @@ A change in value does not take effect until the beginning of the next grace period. + rcupdate.rcu_tasks_lazy_ms= [KNL] + Set timeout in milliseconds RCU Tasks asynchronous + callback batching for call_rcu_tasks(). + A negative value will take the default. A value + of zero will disable batching. Batching is + always disabled for synchronize_rcu_tasks(). + + rcupdate.rcu_tasks_rude_lazy_ms= [KNL] + Set timeout in milliseconds RCU Tasks + Rude asynchronous callback batching for + call_rcu_tasks_rude(). A negative value + will take the default. A value of zero will + disable batching. Batching is always disabled + for synchronize_rcu_tasks_rude(). + + rcupdate.rcu_tasks_trace_lazy_ms= [KNL] + Set timeout in milliseconds RCU Tasks + Trace asynchronous callback batching for + call_rcu_tasks_trace(). A negative value + will take the default. A value of zero will + disable batching. Batching is always disabled + for synchronize_rcu_tasks_trace(). 

From patchwork Mon Jul 17 18:04:53 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    "Paul E. McKenney", Alexei Starovoitov, Martin KaFai Lau, Andrii Nakryiko
Subject: [PATCH rcu 4/5] rcu-tasks: Cancel callback laziness if too many callbacks
Date: Mon, 17 Jul 2023 11:04:53 -0700
Message-Id: <20230717180454.1097714-4-paulmck@kernel.org>
McKenney" X-Patchwork-Id: 13316163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E018AC001DE for ; Mon, 17 Jul 2023 18:05:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231223AbjGQSFV (ORCPT ); Mon, 17 Jul 2023 14:05:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35678 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231220AbjGQSFJ (ORCPT ); Mon, 17 Jul 2023 14:05:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A0377198C; Mon, 17 Jul 2023 11:04:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2E217611C6; Mon, 17 Jul 2023 18:04:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 92647C433CA; Mon, 17 Jul 2023 18:04:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1689617095; bh=4Rvqpj2uxc7m3FkdSvfRYtUA4WLN8IJWZxpll6G1Oo8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OFdPUnn6Y8cXhOeM3B0NMhTg21LZlEPm9YkU9MIchyEwcc0Q90EQBqcl5ggtXzBNk YlSdNx8MH5rIUP2KNizz5raY4yY7TwoPq+Iv2SwIUcg20Ts25G79Dhpb5FcEW4TNDi mWapt47SD2fs/YtoSlZ9vmt0ILEnRbPvQ3Iy1/y55QWveTjOIhGY2hQdsbKDCMnvi+ mG1Rh8Jx5hW5wOPhKmxCKW+3OtfK5lc6l3cazzHtPpR6bNoe21ATS5L6XUOCfagRZA pmg3Xt5Yqjgo0miag0aSO/RANllIbxhHWQgkWI5+fdYTTcpJA5lYiypEphFVstWYPc et3UJ2j/F0saA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5C269CE0836; Mon, 17 Jul 2023 11:04:55 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Alexei Starovoitov , Martin KaFai Lau , Andrii Nakryiko Subject: [PATCH rcu 4/5] rcu-tasks: Cancel callback laziness if too many callbacks Date: Mon, 17 Jul 2023 11:04:53 -0700 Message-Id: <20230717180454.1097714-4-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The various RCU Tasks flavors now do lazy grace periods when there are only asynchronous grace period requests. By default, the system will let 250 milliseconds elapse after the first call_rcu_tasks*() callbacki is queued before starting a grace period. In contrast, synchronous grace period requests such as synchronize_rcu_tasks*() will start a grace period immediately. However, invoking one of the call_rcu_tasks*() functions in a too-tight loop can result in a callback flood, which in turn can exhaust memory if grace periods are delayed for too long. This commit therefore sets a limit so that the grace-period kthread will be awakened when any CPU's callback list expands to contain rcupdate.rcu_task_lazy_lim callbacks elements (defaulting to 32, set to -1 to disable), the grace-period kthread will be awakened, thus cancelling any ongoing laziness and getting out in front of the potential callback flood. Cc: Alexei Starovoitov Cc: Martin KaFai Lau Cc: Andrii Nakryiko Signed-off-by: Paul E. 
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
 kernel/rcu/tasks.h                              | 7 +++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a0f427e1a562..44705a3618b9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5264,6 +5264,13 @@
                         number avoids disturbing real-time workloads,
                         but lengthens grace periods.
 
+        rcupdate.rcu_task_lazy_lim= [KNL]
+                        Number of callbacks on a given CPU that will
+                        cancel laziness on that CPU.  Use -1 to disable
+                        cancellation of laziness, but be advised that
+                        doing so increases the danger of OOM due to
+                        callback flooding.
+
         rcupdate.rcu_task_stall_info= [KNL]
                         Set initial timeout in jiffies for RCU task stall
                         informational messages, which give some indication
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 28e986627e3f..291d536b1789 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -176,6 +176,8 @@ static int rcu_task_contend_lim __read_mostly = 100;
 module_param(rcu_task_contend_lim, int, 0444);
 static int rcu_task_collapse_lim __read_mostly = 10;
 module_param(rcu_task_collapse_lim, int, 0444);
+static int rcu_task_lazy_lim __read_mostly = 32;
+module_param(rcu_task_lazy_lim, int, 0444);
 
 /* RCU tasks grace-period state for debugging. */
 #define RTGS_INIT                0
@@ -354,8 +356,9 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
                 cblist_init_generic(rtp);
                 raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
         }
-        needwake = func == wakeme_after_rcu;
-        if (havekthread && !timer_pending(&rtpcp->lazy_timer)) {
+        needwake = (func == wakeme_after_rcu) ||
+                   (rcu_segcblist_n_cbs(&rtpcp->cblist) == rcu_task_lazy_lim);
+        if (havekthread && !needwake && !timer_pending(&rtpcp->lazy_timer)) {
                 if (rtp->lazy_jiffies)
                         mod_timer(&rtpcp->lazy_timer, rcu_tasks_lazy_time(rtp));
                 else
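
To make the limit concrete, here is a hedged sketch (the struct,
callback, and function names below are invented for illustration) of
the sort of tight loop that motivates it:

        struct foo {
                struct rcu_head rh;
                /* ... payload ... */
        };

        static void foo_reclaim_cb(struct rcu_head *rhp)        // hypothetical
        {
                kfree(container_of(rhp, struct foo, rh));
        }

        static void foo_flood_sketch(struct foo **table, int n) // hypothetical
        {
                int i;

                /*
                 * Each iteration enqueues one callback.  Once this CPU's
                 * list reaches rcu_task_lazy_lim (default 32) callbacks,
                 * the enqueue path wakes the grace-period kthread instead
                 * of waiting out the 250-millisecond lazy timer.
                 */
                for (i = 0; i < n; i++)
                        call_rcu_tasks_trace(&table[i]->rh, foo_reclaim_cb);
        }

Note that the comparison in call_rcu_tasks_generic() uses ==, so the
wakeup fires once as the list grows through the limit rather than on
every subsequent enqueue.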
McKenney" X-Patchwork-Id: 13316164 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81687EB64DC for ; Mon, 17 Jul 2023 18:05:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231142AbjGQSFV (ORCPT ); Mon, 17 Jul 2023 14:05:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35644 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231263AbjGQSFJ (ORCPT ); Mon, 17 Jul 2023 14:05:09 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F02401BE1; Mon, 17 Jul 2023 11:04:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 45E44611D5; Mon, 17 Jul 2023 18:04:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A2388C433CC; Mon, 17 Jul 2023 18:04:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1689617095; bh=z5xJPNvzuFRjo2lR/l/wLkVKNv1SHd09S5ncMfb7TjY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cYt2oIUYUT5pC68a8AdfGvE7WCB9HYuDQCTvgQXOZYLpS4p1Hxh4GFFaGy7Iuj2AT MAoo0tdMoFMV264r7XQFcDWalXuQqA/Tg2hd3g1obZ9k+DGOkOHT4bEEeyW4Usvy8Q PLAJcrOLgPZb9m40+DnRS7yEFB6BL5wcLYNYsfLen+9EpHRfxVZq36bPsRKOHlsMnK 1bFVAiFBvrIA8UF17g6tw6Xw40hQGAN+4PceBM2Sx0egUp+QaJ9zs4OGrol7eA2mnG qNbZ0je4cj+eUqLh/tkOKAzZYlVun9e2B62ePPtJh2RnrmbykNJX46G+DIPj/86bhX 03iiHStrPifgQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5ED95CE0902; Mon, 17 Jul 2023 11:04:55 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Paul E. McKenney" , Andy Whitcroft , Joe Perches , Dwaipayan Ray , Lukas Bulwahn , Alexei Starovoitov , Daniel Borkmann , John Fastabend , bpf@vger.kernel.org Subject: [PATCH rcu 5/5] checkpatch: Complain about unexpected uses of RCU Tasks Trace Date: Mon, 17 Jul 2023 11:04:54 -0700 Message-Id: <20230717180454.1097714-5-paulmck@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org RCU Tasks Trace is quite specialized, having been created specifically for sleepable BPF programs. Because it allows general blocking within readers, any new use of RCU Tasks Trace must take current use cases into account. Therefore, update checkpatch.pl to complain about use of any of the RCU Tasks Trace API members outside of BPF and outside of RCU itself. Cc: Andy Whitcroft (maintainer:CHECKPATCH) Cc: Joe Perches (maintainer:CHECKPATCH) Cc: Dwaipayan Ray (reviewer:CHECKPATCH) Cc: Lukas Bulwahn Cc: Alexei Starovoitov Cc: Daniel Borkmann Cc: John Fastabend Cc: Signed-off-by: Paul E. McKenney --- scripts/checkpatch.pl | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 880fde13d9b8..24bab980bc6f 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -7457,6 +7457,24 @@ sub process { } } +# Complain about RCU Tasks Trace used outside of BPF (and of course, RCU). 
+                if ($line =~ /\brcu_read_lock_trace\s*\(/ ||
+                    $line =~ /\brcu_read_lock_trace_held\s*\(/ ||
+                    $line =~ /\brcu_read_unlock_trace\s*\(/ ||
+                    $line =~ /\bcall_rcu_tasks_trace\s*\(/ ||
+                    $line =~ /\bsynchronize_rcu_tasks_trace\s*\(/ ||
+                    $line =~ /\brcu_barrier_tasks_trace\s*\(/ ||
+                    $line =~ /\brcu_request_urgent_qs_task\s*\(/) {
+                        if ($realfile !~ m@^kernel/bpf@ &&
+                            $realfile !~ m@^include/linux/bpf@ &&
+                            $realfile !~ m@^net/bpf@ &&
+                            $realfile !~ m@^kernel/rcu@ &&
+                            $realfile !~ m@^include/linux/rcu@) {
+                                WARN("RCU_TASKS_TRACE",
+                                     "use of RCU tasks trace is incorrect outside BPF or core RCU code\n" . $herecurr);
+                        }
+                }
+
 # check for lockdep_set_novalidate_class
                 if ($line =~ /^.\s*lockdep_set_novalidate_class\s*\(/ ||
                     $line =~ /__lockdep_no_validate__\s*\)/ ) {
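
For example, run against a patch that adds an rcu_read_lock_trace()
call to a file outside the BPF and RCU subtrees (the file name and line
numbers here are invented), checkpatch would emit a warning along these
lines:

        $ ./scripts/checkpatch.pl --show-types 0001-example.patch
        WARNING:RCU_TASKS_TRACE: use of RCU tasks trace is incorrect outside BPF or core RCU code
        #21: FILE: drivers/misc/example.c:42:
        +       rcu_read_lock_trace();

The directory allowlist is matched against $realfile, so moving such
code under kernel/bpf/ or kernel/rcu/ silences the warning.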

From patchwork Mon Jul 31 23:11:49 2023
Date: Mon, 31 Jul 2023 16:11:49 -0700
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org
Subject: [PATCH rcu 6/5] rcu-tasks: Permit use of debug-objects with RCU Tasks flavors
Message-ID: <0bb374e7-694e-4b2d-b2aa-472a5a4e1670@paulmck-laptop>
Reply-To: paulmck@kernel.org

Currently, cblist_init_generic() holds a raw spinlock when invoking
INIT_WORK().  This fails in kernels built with CONFIG_DEBUG_OBJECTS=y
due to memory allocation being forbidden while holding a raw spinlock.
But the only reason for holding the raw spinlock is to synchronize
with early boot calls to call_rcu_tasks(), call_rcu_tasks_rude(),
and, last but not least, call_rcu_tasks_trace().  These calls also
invoke cblist_init_generic() in order to support early boot queueing
of callbacks.  Except that there are no early boot calls to any of
these three functions, and the BPF guys confirm that they have no
plans to add any such calls.

This commit therefore removes the synchronization and adds a
WARN_ON_ONCE() to catch the case of now-prohibited early boot RCU
Tasks callback queueing.  If early boot queueing is needed, an
"initialized" flag may be added to the rcu_tasks structure.  Then
queueing a callback before this flag is set would initialize the
callback list (if needed) and queue the callback.  The decision as
to where to queue the callback given the possibility of non-zero
boot CPUs is left as an exercise for the reader.

Reported-by: Jakub Kicinski
Signed-off-by: Paul E. McKenney

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 291d536b1789..a4cd4ea2402c 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -236,7 +236,7 @@ static const char *tasks_gp_state_getname(struct rcu_tasks *rtp)
 #endif /* #ifndef CONFIG_TINY_RCU */
 
 // Initialize per-CPU callback lists for the specified flavor of
-// Tasks RCU.
+// Tasks RCU.  Do not enqueue callbacks before this function is invoked.
 static void cblist_init_generic(struct rcu_tasks *rtp)
 {
         int cpu;
@@ -244,7 +244,6 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
         int lim;
         int shift;
 
-        raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
         if (rcu_task_enqueue_lim < 0) {
                 rcu_task_enqueue_lim = 1;
                 rcu_task_cb_adjust = true;
@@ -267,17 +266,16 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
                 WARN_ON_ONCE(!rtpcp);
                 if (cpu)
                         raw_spin_lock_init(&ACCESS_PRIVATE(rtpcp, lock));
-                raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
+                local_irq_save(flags);  // serialize initialization
                 if (rcu_segcblist_empty(&rtpcp->cblist))
                         rcu_segcblist_init(&rtpcp->cblist);
+                local_irq_restore(flags);
                 INIT_WORK(&rtpcp->rtp_work, rcu_tasks_invoke_cbs_wq);
                 rtpcp->cpu = cpu;
                 rtpcp->rtpp = rtp;
                 if (!rtpcp->rtp_blkd_tasks.next)
                         INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks);
-                raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
         }
-        raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
 
         pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name,
                 data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim), rcu_task_cb_adjust);
@@ -351,11 +349,9 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
                         READ_ONCE(rtp->percpu_enqueue_lim) != nr_cpu_ids)
                         needadjust = true;  // Defer adjustment to avoid deadlock.
         }
-        if (!rcu_segcblist_is_enabled(&rtpcp->cblist)) {
-                raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
-                cblist_init_generic(rtp);
-                raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
-        }
+        // Queuing callbacks before initialization not yet supported.
+        if (WARN_ON_ONCE(!rcu_segcblist_is_enabled(&rtpcp->cblist)))
+                rcu_segcblist_init(&rtpcp->cblist);
         needwake = (func == wakeme_after_rcu) ||
                    (rcu_segcblist_n_cbs(&rtpcp->cblist) == rcu_task_lazy_lim);
         if (havekthread && !needwake && !timer_pending(&rtpcp->lazy_timer)) {
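
The "initialized" flag suggested above might look like the following
sketch (entirely hypothetical: the field and function names are
invented, locking is elided on the grounds that early boot runs
single-threaded, and all early-boot callbacks are simply placed on
CPU 0's list):

        // Hypothetical early-boot enqueue path guarded by an
        // rtp->initialized flag added to struct rcu_tasks.
        static void call_rcu_tasks_generic_early(struct rcu_head *rhp,
                                                 rcu_callback_t func,
                                                 struct rcu_tasks *rtp)
        {
                // Queue on CPU 0, dodging the question of where to queue
                // when more than one boot CPU is possible.
                struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, 0);

                rhp->next = NULL;
                rhp->func = func;
                if (!rcu_segcblist_is_enabled(&rtpcp->cblist))
                        rcu_segcblist_init(&rtpcp->cblist);
                rcu_segcblist_enqueue(&rtpcp->cblist, rhp);
        }

        // In call_rcu_tasks_generic(), before taking any locks:
        //      if (!smp_load_acquire(&rtp->initialized))
        //              return call_rcu_tasks_generic_early(rhp, func, rtp);
        // ...with cblist_init_generic() doing
        //      smp_store_release(&rtp->initialized, true);
        // as its final act.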