From patchwork Mon Jun 20 22:20:21 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 01/12] rcu: Decrease FQS scan wait time in case of callback overloading
Date: Mon, 20 Jun 2022 15:20:21 -0700
Message-Id: <20220620222032.3839547-1-paulmck@kernel.org>

The force-quiescent-state loop function rcu_gp_fqs_loop() checks for
callback overloading and, if so, does an immediate initial scan for
idle CPUs.  However, subsequent rescans will be carried out at as
leisurely a rate as they always are, as specified by the
rcutree.jiffies_till_next_fqs module parameter.  It might be tempting
to just continue immediately rescanning, but this turns the RCU
grace-period kthread into a CPU hog.  It might also be tempting to
reduce the time between rescans to a single jiffy, but this can be
problematic on larger systems.

This commit therefore divides the normal time between rescans by three,
rounding up.  Thus a small system running at HZ=1000 that is suffering
from callback overload will wait only one jiffy instead of the normal
three between rescans.

Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c25ba442044a6..c19d5926886fb 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1993,6 +1993,11 @@ static noinline_for_stack void rcu_gp_fqs_loop(void)
 			WRITE_ONCE(rcu_state.jiffies_kick_kthreads,
 				   jiffies + (j ? 3 * j : 2));
 		}
+		if (rcu_state.cbovld) {
+			j = (j + 2) / 3;
+			if (j <= 0)
+				j = 1;
+		}
 		trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq,
 				       TPS("fqswait"));
 		WRITE_ONCE(rcu_state.gp_state, RCU_GP_WAIT_FQS);
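[A rough, stand-alone sketch of the arithmetic described above, not part of the patch:
the integer division (j + 2) / 3 rounds up, so the default three-jiffy wait at HZ=1000
drops to a single jiffy under callback overload, while longer waits shrink proportionally.]

#include <stdio.h>

/* Sketch of the rescan-interval scaling added to rcu_gp_fqs_loop();
 * "j" plays the role of the jiffies_till_next_fqs wait.
 */
int main(void)
{
	unsigned long waits[] = { 1, 3, 6, 10 };

	for (int i = 0; i < (int)(sizeof(waits) / sizeof(waits[0])); i++) {
		unsigned long j = (waits[i] + 2) / 3;	/* divide by three, rounding up */

		if (j <= 0)
			j = 1;				/* never wait less than one jiffy */
		printf("normal wait %lu jiffies -> overloaded wait %lu jiffies\n",
		       waits[i], j);
	}
	return 0;
}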
From patchwork Mon Jun 20 22:20:22 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 02/12] rcu: Avoid tracing a few functions executed in stop machine
Date: Mon, 20 Jun 2022 15:20:22 -0700
Message-Id: <20220620222032.3839547-2-paulmck@kernel.org>

From: Patrick Wang

Stop-machine recently started calling additional functions while waiting:

----------------------------------------------------------------
Former stop machine wait loop:
do {
    cpu_relax();                    => macro
    ...
} while (curstate != STOPMACHINE_EXIT);
-----------------------------------------------------------------
Current stop machine wait loop:
do {
    stop_machine_yield(cpumask);    => function (notraced)
    ...
    touch_nmi_watchdog();           => function (notraced, inside calls also notraced)
    ...
    rcu_momentary_dyntick_idle();   => function (notraced, inside calls traced)
} while (curstate != MULTI_STOP_EXIT);
------------------------------------------------------------------

These functions (and the functions that they call) must be marked
notrace to prevent them from being updated while they are executing.
The consequences of failing to mark these functions can be severe:

  rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
  rcu:     1-...!: (0 ticks this GP) idle=14f/1/0x4000000000000000 softirq=3397/3397 fqs=0
  rcu:     3-...!: (0 ticks this GP) idle=ee9/1/0x4000000000000000 softirq=5168/5168 fqs=0
          (detected by 0, t=8137 jiffies, g=5889, q=2 ncpus=4)
  Task dump for CPU 1:
  task:migration/1     state:R  running task
  stack:    0 pid:   19 ppid:     2 flags:0x00000000
  Stopper: multi_cpu_stop+0x0/0x18c <- stop_machine_cpuslocked+0x128/0x174
  Call Trace:
  Task dump for CPU 3:
  task:migration/3     state:R  running task
  stack:    0 pid:   29 ppid:     2 flags:0x00000000
  Stopper: multi_cpu_stop+0x0/0x18c <- stop_machine_cpuslocked+0x128/0x174
  Call Trace:
  rcu: rcu_preempt kthread timer wakeup didn't happen for 8136 jiffies! g5889 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
  rcu:     Possible timer handling issue on cpu=2 timer-softirq=594
  rcu: rcu_preempt kthread starved for 8137 jiffies! g5889 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
  rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
  rcu: RCU grace-period kthread stack dump:
  task:rcu_preempt     state:I stack:    0 pid:   14 ppid:     2 flags:0x00000000
  Call Trace:
    schedule+0x56/0xc2
    schedule_timeout+0x82/0x184
    rcu_gp_fqs_loop+0x19a/0x318
    rcu_gp_kthread+0x11a/0x140
    kthread+0xee/0x118
    ret_from_exception+0x0/0x14
  rcu: Stack dump where RCU GP kthread last ran:
  Task dump for CPU 2:
  task:migration/2     state:R  running task
  stack:    0 pid:   24 ppid:     2 flags:0x00000000
  Stopper: multi_cpu_stop+0x0/0x18c <- stop_machine_cpuslocked+0x128/0x174
  Call Trace:

This commit therefore marks these functions notrace:
 rcu_preempt_deferred_qs()
 rcu_preempt_need_deferred_qs()
 rcu_preempt_deferred_qs_irqrestore()

Signed-off-by: Patrick Wang
Acked-by: Steven Rostedt (Google)
Signed-off-by: Paul E. McKenney
Reviewed-by: Neeraj Upadhyay
---
 kernel/rcu/tree_plugin.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index c8ba0fe17267c..440d9e02a26e0 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -460,7 +460,7 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp)
  * be quite short, for example, in the case of the call from
  * rcu_read_unlock_special().
  */
-static void
+static notrace void
 rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
 {
 	bool empty_exp;
@@ -581,7 +581,7 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
  * is disabled.  This function cannot be expected to understand these
  * nuances, so the caller must handle them.
  */
-static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
+static notrace bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {
 	return (__this_cpu_read(rcu_data.cpu_no_qs.b.exp) ||
 		READ_ONCE(t->rcu_read_unlock_special.s)) &&
@@ -595,7 +595,7 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
  * evaluate safety in terms of interrupt, softirq, and preemption
  * disabling.
  */
-static void rcu_preempt_deferred_qs(struct task_struct *t)
+static notrace void rcu_preempt_deferred_qs(struct task_struct *t)
 {
 	unsigned long flags;
From patchwork Mon Jun 20 22:20:23 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 03/12] rcu: Add rnp->cbovldmask check in rcutree_migrate_callbacks()
Date: Mon, 20 Jun 2022 15:20:23 -0700
Message-Id: <20220620222032.3839547-3-paulmck@kernel.org>

From: Zqiang

Currently, the rcu_node structure's ->cbovldmask field is set in
call_rcu() when a given CPU is suffering from callback overload.
But if that CPU goes offline, the outgoing CPU's callbacks are migrated
to the running CPU, which is likely to overload the running CPU.
However, that CPU's bit in its leaf rcu_node structure's ->cbovldmask
field remains zero.
Initially, this is OK because the outgoing CPU's bit remains set.
However, that bit will be cleared at the next end of a grace period,
at which time it is quite possible that the running CPU will still be
overloaded.  If the running CPU invokes call_rcu(), then overload will
be checked for and the bit will be set.  Except that there is no
guarantee that the running CPU will invoke call_rcu(), in which case
the next grace period will fail to take the running CPU's overload
condition into account.  Plus, because the bit is not set, the end of
the grace period won't check for overload on this CPU.

This commit therefore adds a call to check_cb_ovld_locked() in
rcutree_migrate_callbacks() to set the running CPU's ->cbovldmask bit
appropriately.

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
Reviewed-by: Neeraj Upadhyay
---
 kernel/rcu/tree.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index c19d5926886fb..f4a37f2032664 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4491,6 +4491,7 @@ void rcutree_migrate_callbacks(int cpu)
 	needwake = needwake || rcu_advance_cbs(my_rnp, my_rdp);
 	rcu_segcblist_disable(&rdp->cblist);
 	WARN_ON_ONCE(rcu_segcblist_empty(&my_rdp->cblist) != !rcu_segcblist_n_cbs(&my_rdp->cblist));
+	check_cb_ovld_locked(my_rdp, my_rnp);
 	if (rcu_rdp_is_offloaded(my_rdp)) {
 		raw_spin_unlock_rcu_node(my_rnp); /* irqs remain disabled. */
 		__call_rcu_nocb_wake(my_rdp, true, flags);

From patchwork Mon Jun 20 22:20:24 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 04/12] rcu: Immediately boost preempted readers for strict grace periods
Date: Mon, 20 Jun 2022 15:20:24 -0700
Message-Id: <20220620222032.3839547-4-paulmck@kernel.org>

From: Zqiang

The intent of the CONFIG_RCU_STRICT_GRACE_PERIOD Kconfig option is to
cause normal grace periods to complete quickly in order to better catch
errors resulting from improperly leaking pointers from RCU read-side
critical sections.  However, kernels built with this option enabled
still wait for some hundreds of milliseconds before boosting RCU readers
that have been preempted within their current critical section.  The
value of this delay is set by the CONFIG_RCU_BOOST_DELAY Kconfig option,
which defaults to 500 milliseconds.

This commit therefore causes kernels built with strict grace periods
to ignore CONFIG_RCU_BOOST_DELAY.  This causes rcu_initiate_boost()
to start boosting immediately after all CPUs on a given leaf rcu_node
structure have passed through their quiescent states.

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
Reviewed-by: Neeraj Upadhyay
---
 kernel/rcu/tree_plugin.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 440d9e02a26e0..53d05eb804121 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1140,7 +1140,8 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
 	    (rnp->gp_tasks != NULL &&
 	     rnp->boost_tasks == NULL &&
 	     rnp->qsmask == 0 &&
-	     (!time_after(rnp->boost_time, jiffies) || rcu_state.cbovld))) {
+	     (!time_after(rnp->boost_time, jiffies) || rcu_state.cbovld ||
+	      IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD)))) {
 		if (rnp->exp_tasks == NULL)
 			WRITE_ONCE(rnp->boost_tasks, rnp->gp_tasks);
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
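[A hedged, user-space sketch of why this one-line change removes the boost delay,
not code from the patch: STRICT stands in for IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD),
which collapses to a compile-time 0 or 1, and time_after() is modelled with the kernel's
wraparound-safe idiom.  With STRICT equal to 1, the boost-time test can no longer hold
boosting back.]

#include <stdio.h>

#define STRICT 1	/* models a strict-grace-period kernel */

/* time_after(a, b): true if a is after b. */
static int time_after(unsigned long a, unsigned long b)
{
	return (long)(b - a) < 0;
}

/* Mirrors only the delay-related part of the boost condition. */
static int may_boost(unsigned long boost_time, unsigned long jiffies_now, int cbovld)
{
	return !time_after(boost_time, jiffies_now) || cbovld || STRICT;
}

int main(void)
{
	/* Boost time still 500 ticks in the future and no callback overload,
	 * yet boosting is allowed immediately because STRICT is 1.
	 */
	printf("may boost now? %d\n", may_boost(1500, 1000, 0));
	return 0;
}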
McKenney" X-Patchwork-Id: 12888327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EBEEAC433EF for ; Mon, 20 Jun 2022 22:20:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243870AbiFTWUr (ORCPT ); Mon, 20 Jun 2022 18:20:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36390 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238367AbiFTWUl (ORCPT ); Mon, 20 Jun 2022 18:20:41 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D12331B7A2; Mon, 20 Jun 2022 15:20:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E9658612EE; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C923C341C7; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655763634; bh=n7HI6sinDisynSHP103tNG9aSbp5KKiDzOeA9d+Lwk4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X+EiZMQ2CeLc0OCOiNDByl4SFY431IAMQJzhF/GJ2gNmNv/wfnStYi7UnI+G7tOU6 1tFl6lUMNtgwg68VXBg30JR7ivyYSmyPvoTaF1YLS8p5amUaomjhUNbMNZgYxxl2IL W11PuCcOE3+rE8ULVCXAtX3RQslQr9Zl+CFt/Frfuc6aA2TQ3e4uzS15y4f7MfoU9R zH3ROWCcidwKZDNkEX+3HTpQT71CDWTa9FME4T1/QLlJ8lwjKiljfyMWMnFWvP0DRz tckubD8SOGb3UQSFmOnN1AbOEruX9LLhXJ9gV8Q+onlz93n+2TSxAF7FvUC74OSlAu VzL6Ay5T9J7uw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 05AF35C0ADC; Mon, 20 Jun 2022 15:20:34 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" Subject: [PATCH rcu 05/12] rcu: Forbid RCU_STRICT_GRACE_PERIOD in TINY_RCU kernels Date: Mon, 20 Jun 2022 15:20:25 -0700 Message-Id: <20220620222032.3839547-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The RCU_STRICT_GRACE_PERIOD Kconfig option does nothing in kernels built with CONFIG_TINY_RCU=y, so this commit adjusts the dependencies to disallow this combination. Signed-off-by: Paul E. McKenney Reviewed-by: Neeraj Upadhyay --- kernel/rcu/Kconfig.debug | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug index 9b64e55d4f615..4da05beb13d79 100644 --- a/kernel/rcu/Kconfig.debug +++ b/kernel/rcu/Kconfig.debug @@ -121,7 +121,7 @@ config RCU_EQS_DEBUG config RCU_STRICT_GRACE_PERIOD bool "Provide debug RCU implementation with short grace periods" - depends on DEBUG_KERNEL && RCU_EXPERT && NR_CPUS <= 4 + depends on DEBUG_KERNEL && RCU_EXPERT && NR_CPUS <= 4 && !TINY_RCU default n select PREEMPT_COUNT if PREEMPT=n help From patchwork Mon Jun 20 22:20:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 938A8C433EF for ; Mon, 20 Jun 2022 22:20:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343873AbiFTWUu (ORCPT ); Mon, 20 Jun 2022 18:20:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244759AbiFTWUn (ORCPT ); Mon, 20 Jun 2022 18:20:43 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF7191BE82; Mon, 20 Jun 2022 15:20:37 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2ABB7B8164E; Mon, 20 Jun 2022 22:20:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F8C3C341CF; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655763634; bh=lyNCAIvYhX/+kHwlAad/y9gHMFmiZO/u932Jluyi4aA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AjIFk5YxhEODc2yYfvgjOXXFHf3Ik38rQd+LeQZ+Oek9uUtSZ38cy8J4RZ5+l8XFm VaGTwCO0bXakKzgxFSgEVtK9UY2il7+RWQB5d4UhfLFHZ2CpRVi2ae9cVhnm/gQNbF G5UKOuoFendHYd+tezPrBS6rqLpFqaoU6c3BWoiKv1sdt223xkCpQ2ukK7mY8z8HRq HPvGFZRgHmnCv3Vov0EbAtgGgrnE168NzYsNNwSbZgpVYDQypzNhn5ntHLf1DbaWcW pbOaB5ccgWHX/Mce4mAr+4kwdYKKRkZSymeKTQzMnIij7QUwx+cTX/X/h0vEZAC9Ji ugsXHrG/y59Tg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 078495C0B06; Mon, 20 Jun 2022 15:20:34 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Chen Zhongjin , stable@vger.kernel.org, Chen jingwen , "Paul E . McKenney" Subject: [PATCH rcu 06/12] locking/csd_lock: Change csdlock_debug from early_param to __setup Date: Mon, 20 Jun 2022 15:20:26 -0700 Message-Id: <20220620222032.3839547-6-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Chen Zhongjin The csdlock_debug kernel-boot parameter is parsed by the early_param() function csdlock_debug(). If set, csdlock_debug() invokes static_branch_enable() to enable csd_lock_wait feature, which triggers a panic on arm64 for kernels built with CONFIG_SPARSEMEM=y and CONFIG_SPARSEMEM_VMEMMAP=n. With CONFIG_SPARSEMEM_VMEMMAP=n, __nr_to_section is called in static_key_enable() and returns NULL, resulting in a NULL dereference because mem_section is initialized only later in sparse_init(). This is also a problem for powerpc because early_param() functions are invoked earlier than jump_label_init(), also resulting in static_key_enable() failures. These failures cause the warning "static key 'xxx' used before call to jump_label_init()". Thus, early_param is too early for csd_lock_wait to run static_branch_enable(), so changes it to __setup to fix these. 
Fixes: 8d0968cc6b8f ("locking/csd_lock: Add boot parameter for controlling CSD lock debugging")
Cc: stable@vger.kernel.org
Reported-by: Chen jingwen
Signed-off-by: Chen Zhongjin
Signed-off-by: Paul E. McKenney
---
 kernel/smp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index dd215f4394264..650810a6f29b3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -174,9 +174,9 @@ static int __init csdlock_debug(char *str)
 	if (val)
 		static_branch_enable(&csdlock_debug_enabled);

-	return 0;
+	return 1;
 }
-early_param("csdlock_debug", csdlock_debug);
+__setup("csdlock_debug=", csdlock_debug);

 static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
 static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);
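[For context, the two registration macros also differ in their handlers' return
conventions: an early_param() handler returns 0 on success, while a __setup()
handler returns 1 to mark the option as consumed, which is why the patch also flips
the return value.  Below is a hedged, hypothetical example of the __setup() pattern;
"my_debug" and my_debug_setup() are made-up names, not code from the patch.]

#include <linux/init.h>
#include <linux/kernel.h>

static bool my_debug_enabled;

/* Hypothetical boot parameter "my_debug=<n>", parsed by the normal (late)
 * __setup() path, after facilities such as static keys are initialized.
 */
static int __init my_debug_setup(char *str)
{
	unsigned int val = 0;

	if (!kstrtouint(str, 0, &val) && val)
		my_debug_enabled = true;
	return 1;	/* 1: option consumed, do not pass it on to init */
}
__setup("my_debug=", my_debug_setup);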
From patchwork Mon Jun 20 22:20:27 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 07/12] rcu: tiny: Record kvfree_call_rcu() call stack for KASAN
Date: Mon, 20 Jun 2022 15:20:27 -0700
Message-Id: <20220620222032.3839547-7-paulmck@kernel.org>

From: Johannes Berg

When running KASAN with Tiny RCU (e.g. under ARCH=um, where a working
KASAN patch is now available), we don't get any information on the
original kfree_rcu() (or similar) caller when a problem is reported,
as Tiny RCU doesn't record this.

Add the recording, which required pulling kvfree_call_rcu() out of line
for the KASAN case since the recording function
(kasan_record_aux_stack_noalloc) is neither exported, nor can we include
kasan.h into rcutiny.h.

Without KASAN, the patch has no size impact (ARCH=um kernel):

    text       data        bss        dec        hex  filename
 6151515    4423154   33148520   43723189    29b29b5  linux
 6151515    4423154   33148520   43723189    29b29b5  linux + patch

With KASAN, the impact on my build was minimal:

    text       data        bss        dec        hex  filename
13915539    7388050   33282304   54585893    340ea25  linux
13911266    7392114   33282304   54585684    340e954  linux + patch
   -4273      +4064        +-0       -209

Acked-by: Dmitry Vyukov
Signed-off-by: Johannes Berg
Signed-off-by: Paul E. McKenney
---
 include/linux/rcutiny.h | 11 ++++++++++-
 kernel/rcu/tiny.c       | 14 ++++++++++++++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 5fed476f977f6..d84e13f2c3848 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -38,7 +38,7 @@ static inline void synchronize_rcu_expedited(void)
  */
 extern void kvfree(const void *addr);

-static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+static inline void __kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 	if (head) {
 		call_rcu(head, func);
@@ -51,6 +51,15 @@ static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	kvfree((void *) func);
 }

+#ifdef CONFIG_KASAN_GENERIC
+void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
+#else
+static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+{
+	__kvfree_call_rcu(head, func);
+}
+#endif
+
 void rcu_qs(void);

 static inline void rcu_softirq_qs(void)
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 340b3f8b090d4..58ff3721d975c 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -217,6 +217,20 @@ bool poll_state_synchronize_rcu(unsigned long oldstate)
 }
 EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu);

+#ifdef CONFIG_KASAN_GENERIC
+void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
+{
+	if (head) {
+		void *ptr = (void *) head - (unsigned long) func;
+
+		kasan_record_aux_stack_noalloc(ptr);
+	}
+
+	__kvfree_call_rcu(head, func);
+}
+EXPORT_SYMBOL_GPL(kvfree_call_rcu);
+#endif
+
 void __init rcu_init(void)
 {
 	open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
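[The "(void *) head - (unsigned long) func" step above works because, for kvfree_rcu(),
the callback argument is really the byte offset of the rcu_head within the enclosing
object, so subtracting it recovers the start of the allocation that KASAN should record.
A stand-alone, hedged illustration of that recovery arithmetic follows; the struct names
are demo-only stand-ins, not kernel types.]

#include <stddef.h>
#include <stdio.h>

struct rcu_head_demo {
	void *next;
	void (*func)(void *);
};

struct widget {
	int payload[4];
	struct rcu_head_demo rh;	/* embedded somewhere inside the object */
};

int main(void)
{
	struct widget w;
	unsigned long offset = offsetof(struct widget, rh);	/* the "callback" value */
	void *head = &w.rh;

	/* Same recovery step as in the patch: object = head - offset. */
	void *object = (char *)head - offset;

	printf("recovered object == &w ? %s\n", object == (void *)&w ? "yes" : "no");
	return 0;
}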
McKenney" X-Patchwork-Id: 12888324 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 69A4CC43334 for ; Mon, 20 Jun 2022 22:20:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242159AbiFTWUo (ORCPT ); Mon, 20 Jun 2022 18:20:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230160AbiFTWUm (ORCPT ); Mon, 20 Jun 2022 18:20:42 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 028991B7AA; Mon, 20 Jun 2022 15:20:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7CAE5612FA; Mon, 20 Jun 2022 22:20:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F9FBC341D1; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655763634; bh=i4CawKOFPTigQL0XBLAWhdlDe14tFZDYfSCVAPb/eQE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Hxy2vAPtAq1FfTBIh69lbVsfRHcnWpJEDM616wTrMIKYvbD/Krf/t2vibZC8Xms3x mUnqpdnPx6MS5H7LfSJH2haQKGxXMK2FGZEEep5svDSyI1Fu7Ziu9Y84b1d9DHc0a4 SD243xJCmbm+U8Cf2gUuU59JaWEaUDpDrE3Ai6URyLAoQUZD5ALw9SP8DJ50tAPyde RJxmJvt3mLO8wdm7RP1QbEJSLISO4Mrj/ddUio5MTfpTVvYbQrUKM2Q0+XA2pCf5sf fEqGGMbJ58krhF7e7KmAUED2OltfR3FFig68Yh4mxAtKfnELzne4AJEAOLoI9gHgO5 eQCNP43mfoCAQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 0AF785C0CCE; Mon, 20 Jun 2022 15:20:34 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Zqiang , "Paul E . McKenney" Subject: [PATCH rcu 08/12] rcu: Cleanup RCU urgency state for offline CPU Date: Mon, 20 Jun 2022 15:20:28 -0700 Message-Id: <20220620222032.3839547-8-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Zqiang When a CPU is slow to provide a quiescent state for a given grace period, RCU takes steps to encourage that CPU to get with the quiescent-state program in a more timely fashion. These steps include these flags in the rcu_data structure: 1. ->rcu_urgent_qs, which causes the scheduling-clock interrupt to request an otherwise pointless context switch from the scheduler. 2. ->rcu_need_heavy_qs, which causes both cond_resched() and RCU's context-switch hook to do an immediate momentary quiscent state. 3. ->rcu_need_heavy_qs, which causes the scheduler-clock tick to be enabled even on nohz_full CPUs with only one runnable task. These flags are of course cleared once the corresponding CPU has passed through a quiescent state. Unless that quiescent state is the CPU going offline, which means that when the CPU comes back online, it will needlessly consume additional CPU time and incur additional latency, which constitutes a minor but very real performance bug. 
This commit therefore adds the call to rcu_disable_urgency_upon_qs()
that clears these flags to the CPU-hotplug offlining code path.

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
Reviewed-by: Neeraj Upadhyay
---
 kernel/rcu/tree.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f4a37f2032664..5445b19b48408 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4446,6 +4446,7 @@ void rcu_report_dead(unsigned int cpu)
 	rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
 	if (rnp->qsmask & mask) {	/* RCU waiting on outgoing CPU? */
 		/* Report quiescent state -before- changing ->qsmaskinitnext! */
+		rcu_disable_urgency_upon_qs(rdp);
 		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}

From patchwork Mon Jun 20 22:20:29 2022
From: "Paul E. McKenney"
Subject: [PATCH rcu 09/12] rcu/kvfree: Remove useless monitor_todo flag
Date: Mon, 20 Jun 2022 15:20:29 -0700
Message-Id: <20220620222032.3839547-9-paulmck@kernel.org>

From: "Joel Fernandes (Google)"
McKenney" Subject: [PATCH rcu 09/12] rcu/kvfree: Remove useless monitor_todo flag Date: Mon, 20 Jun 2022 15:20:29 -0700 Message-Id: <20220620222032.3839547-9-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: "Joel Fernandes (Google)" monitor_todo is not needed as the work struct already tracks if work is pending. Just use that to know if work is pending using schedule_delayed_work() helper. Signed-off-by: Joel Fernandes (Google) Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney Reviewed-by: Neeraj Upadhyay --- kernel/rcu/tree.c | 33 ++++++++++++++++----------------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 5445b19b48408..7919d7b48fa6a 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3216,7 +3216,6 @@ struct kfree_rcu_cpu_work { * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period * @lock: Synchronize access to this structure * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES - * @monitor_todo: Tracks whether a @monitor_work delayed work is pending * @initialized: The @rcu_work fields have been initialized * @count: Number of objects for which GP not started * @bkvcache: @@ -3241,7 +3240,6 @@ struct kfree_rcu_cpu { struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES]; raw_spinlock_t lock; struct delayed_work monitor_work; - bool monitor_todo; bool initialized; int count; @@ -3421,6 +3419,18 @@ static void kfree_rcu_work(struct work_struct *work) } } +static bool +need_offload_krc(struct kfree_rcu_cpu *krcp) +{ + int i; + + for (i = 0; i < FREE_N_CHANNELS; i++) + if (krcp->bkvhead[i]) + return true; + + return !!krcp->head; +} + /* * This function is invoked after the KFREE_DRAIN_JIFFIES timeout. */ @@ -3477,9 +3487,7 @@ static void kfree_rcu_monitor(struct work_struct *work) // of the channels that is still busy we should rearm the // work to repeat an attempt. Because previous batches are // still in progress. - if (!krcp->bkvhead[0] && !krcp->bkvhead[1] && !krcp->head) - krcp->monitor_todo = false; - else + if (need_offload_krc(krcp)) schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES); raw_spin_unlock_irqrestore(&krcp->lock, flags); @@ -3667,11 +3675,8 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) WRITE_ONCE(krcp->count, krcp->count + 1); // Set timer to drain after KFREE_DRAIN_JIFFIES. 
@@ -3667,11 +3675,8 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	WRITE_ONCE(krcp->count, krcp->count + 1);

 	// Set timer to drain after KFREE_DRAIN_JIFFIES.
-	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
-	    !krcp->monitor_todo) {
-		krcp->monitor_todo = true;
+	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
 		schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
-	}

 unlock_return:
 	krc_this_cpu_unlock(krcp, flags);
@@ -3746,14 +3751,8 @@ void __init kfree_rcu_scheduler_running(void)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

 		raw_spin_lock_irqsave(&krcp->lock, flags);
-		if ((!krcp->bkvhead[0] && !krcp->bkvhead[1] && !krcp->head) ||
-		    krcp->monitor_todo) {
-			raw_spin_unlock_irqrestore(&krcp->lock, flags);
-			continue;
-		}
-		krcp->monitor_todo = true;
-		schedule_delayed_work_on(cpu, &krcp->monitor_work,
-					 KFREE_DRAIN_JIFFIES);
+		if (need_offload_krc(krcp))
+			schedule_delayed_work_on(cpu, &krcp->monitor_work, KFREE_DRAIN_JIFFIES);
 		raw_spin_unlock_irqrestore(&krcp->lock, flags);
 	}
 }
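[The removed flag duplicated bookkeeping that the workqueue layer already does:
schedule_delayed_work() returns false and queues nothing when the delayed work is
still pending.  A minimal, hedged sketch of that idea follows; kick_monitor() and
the bare monitor_work declaration are illustrative stand-ins, not code from the patch.]

#include <linux/workqueue.h>
#include <linux/printk.h>

static struct delayed_work monitor_work;	/* initialized elsewhere with INIT_DELAYED_WORK() */

/* No hand-rolled "already scheduled" flag is needed: the workqueue API
 * tracks pending state itself.
 */
static void kick_monitor(unsigned long delay)
{
	if (!schedule_delayed_work(&monitor_work, delay))
		pr_debug("monitor already pending, nothing to do\n");
}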
McKenney" , Frederic Weisbecker , Neeraj Upadhyay Subject: [PATCH rcu 10/12] rcu: Initialize first_gp_fqs at declaration in rcu_gp_fqs() Date: Mon, 20 Jun 2022 15:20:30 -0700 Message-Id: <20220620222032.3839547-10-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit saves a line of code by initializing the rcu_gp_fqs() function's first_gp_fqs local variable in its declaration. Reported-by: Frederic Weisbecker Reported-by: Neeraj Upadhyay Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 7919d7b48fa6a..7d8e9cb852786 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -1971,13 +1971,12 @@ static void rcu_gp_fqs(bool first_time) */ static noinline_for_stack void rcu_gp_fqs_loop(void) { - bool first_gp_fqs; + bool first_gp_fqs = true; int gf = 0; unsigned long j; int ret; struct rcu_node *rnp = rcu_get_root(); - first_gp_fqs = true; j = READ_ONCE(jiffies_till_first_fqs); if (rcu_state.cbovld) gf = RCU_GP_FLAG_OVLD; From patchwork Mon Jun 20 22:20:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B07CFCCA481 for ; Mon, 20 Jun 2022 22:20:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343758AbiFTWUt (ORCPT ); Mon, 20 Jun 2022 18:20:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S243528AbiFTWUn (ORCPT ); Mon, 20 Jun 2022 18:20:43 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 56C501BE84; Mon, 20 Jun 2022 15:20:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7FF04612CB; Mon, 20 Jun 2022 22:20:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A3F37C341D2; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655763634; bh=RTEIPBwPDlTk/wfRVgO+sMnshOl2v68/BsPmAdRqgA8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jVTv4oRPty53ozzQCLwdvLNHv0onjmR7Ti37QakgvcZgNLzAKpyV9yGWnXJ9lMHnF KZ0HU2EvWxYqd9G3hIYjlYLYsij4Lfw4/FbqdGiTE5t/juLsbjhdWIoqGqF8odxxuy p4VYuVMXhC4P8HVFyunc33iVUdjAL2dXvSqhO9OGLioMBGKhADFtMSetHsnKT0ISJG bTdgC+kk2W0zRWDRjvDJEwkLiGc2IeH8OU8QEi/PAcgM/JPgz1XnLsR49k5C6FPQPq HDYo0lHibPVxDwQ0fgM0VOZ4LJ0x3h/yvZcMQ4JYMLpNjHzIZiFoVJfOB7bQddd+C0 aQgZDRftBMTVA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 105BE5C0DEB; Mon, 20 Jun 2022 15:20:34 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Neeraj Upadhyay , Joel Fernandes , "Paul E . 
McKenney" Subject: [PATCH rcu 11/12] rcu/tree: Add comment to describe GP-done condition in fqs loop Date: Mon, 20 Jun 2022 15:20:31 -0700 Message-Id: <20220620222032.3839547-11-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Neeraj Upadhyay Add a comment to explain why !rcu_preempt_blocked_readers_cgp() condition is required on root rnp node, for GP completion check in rcu_gp_fqs_loop(). Reviewed-by: Joel Fernandes (Google) Signed-off-by: Neeraj Upadhyay Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 7d8e9cb852786..cdffb3b60ee02 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2005,7 +2005,15 @@ static noinline_for_stack void rcu_gp_fqs_loop(void) rcu_gp_torture_wait(); WRITE_ONCE(rcu_state.gp_state, RCU_GP_DOING_FQS); /* Locking provides needed memory barriers. */ - /* If grace period done, leave loop. */ + /* + * Exit the loop if the root rcu_node structure indicates that the grace period + * has ended, leave the loop. The rcu_preempt_blocked_readers_cgp(rnp) check + * is required only for single-node rcu_node trees because readers blocking + * the current grace period are queued only on leaf rcu_node structures. + * For multi-node trees, checking the root node's ->qsmask suffices, because a + * given root node's ->qsmask bit is cleared only when all CPUs and tasks from + * the corresponding leaf nodes have passed through their quiescent state. + */ if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp)) break; From patchwork Mon Jun 20 22:20:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888330 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A09CEC43334 for ; Mon, 20 Jun 2022 22:20:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343731AbiFTWUs (ORCPT ); Mon, 20 Jun 2022 18:20:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244719AbiFTWUn (ORCPT ); Mon, 20 Jun 2022 18:20:43 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 68D391BE85; Mon, 20 Jun 2022 15:20:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 9BB9761302; Mon, 20 Jun 2022 22:20:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AB7EBC341D0; Mon, 20 Jun 2022 22:20:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655763634; bh=fOc9xTo9noQUtXSDMR5WJIHkNdSsrsFNVIlNlxZLH50=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=a0KJoIJTBJ+Y5jVRE5E9CD1dTRHBWAelLTfseUbGLRi//UzfbXpIj2iU3U8ztQBA5 vH3rIZ2w8GVUpAdInHNPSbUVl3whA41c94ptNMeNk5S/rkeCW8GGm1vr6zd9+wWrqL uerl/BEvUgOEKmTP6rECBCHPyh/UxUQw1pT+EdWUw9r2EP+D1taP4xHobKAfcqlPqJ fL6nhguLD4Swe9JLL3bI43S7ZVOgdXZipgjLqmYSDakQCR0ExaSdP1r38naqvNmDCJ F8KilofCaGy7EVh6o/K5wYDPSTOayGrQy2lWkpbgKiJZxT6ZV/1wP/4bxWwmW1n0H+ qHinlt9FnH/FA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 122A85C0E3F; Mon, 20 Jun 2022 15:20:34 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" , Zhangfei Gao , Shameerali Kolothum Thodi , Paolo Bonzini Subject: [PATCH rcu 12/12] srcu: Block less aggressively for expedited grace periods Date: Mon, 20 Jun 2022 15:20:32 -0700 Message-Id: <20220620222032.3839547-12-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> References: <20220620222022.GA3839466@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Commit 282d8998e997 ("srcu: Prevent expedited GPs and blocking readers from consuming CPU") fixed a problem where a long-running expedited SRCU grace period could block kernel live patching. It did so by giving up on expediting once a given SRCU expedited grace period grew too old. Unfortunately, this added excessive delays to boots of embedded systems running on qemu that use the ARM IORT RMR feature. This commit therefore makes the transition away from expediting less aggressive, increasing the per-grace-period phase number of non-sleeping polls of readers from one to three and increasing the required grace-period age from one jiffy (actually from zero to one jiffies) to two jiffies (actually from one to two jiffies). Fixes: 282d8998e997 ("srcu: Prevent expedited GPs and blocking readers from consuming CPU") Signed-off-by: Paul E. 
Fixes: 282d8998e997 ("srcu: Prevent expedited GPs and blocking readers from consuming CPU")
Signed-off-by: Paul E. McKenney
Reported-by: Zhangfei Gao
Reported-by: chenxiang (M)
Cc: Shameerali Kolothum Thodi
Cc: Paolo Bonzini
Link: https://lore.kernel.org/all/20615615-0013-5adc-584f-2b1d5c03ebfc@linaro.org/
Reviewed-by: Neeraj Upadhyay
---
 kernel/rcu/srcutree.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 50ba70f019dea..0db7873f4e95b 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -513,7 +513,7 @@ static bool srcu_readers_active(struct srcu_struct *ssp)

 #define SRCU_INTERVAL		1	// Base delay if no expedited GPs pending.
 #define SRCU_MAX_INTERVAL	10	// Maximum incremental delay from slow readers.
-#define SRCU_MAX_NODELAY_PHASE	1	// Maximum per-GP-phase consecutive no-delay instances.
+#define SRCU_MAX_NODELAY_PHASE	3	// Maximum per-GP-phase consecutive no-delay instances.
 #define SRCU_MAX_NODELAY	100	// Maximum consecutive no-delay instances.

 /*
@@ -522,16 +522,22 @@ static bool srcu_readers_active(struct srcu_struct *ssp)
  */
 static unsigned long srcu_get_delay(struct srcu_struct *ssp)
 {
+	unsigned long gpstart;
+	unsigned long j;
 	unsigned long jbase = SRCU_INTERVAL;

 	if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_gp_seq), READ_ONCE(ssp->srcu_gp_seq_needed_exp)))
 		jbase = 0;
-	if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)))
-		jbase += jiffies - READ_ONCE(ssp->srcu_gp_start);
-	if (!jbase) {
-		WRITE_ONCE(ssp->srcu_n_exp_nodelay, READ_ONCE(ssp->srcu_n_exp_nodelay) + 1);
-		if (READ_ONCE(ssp->srcu_n_exp_nodelay) > SRCU_MAX_NODELAY_PHASE)
-			jbase = 1;
+	if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq))) {
+		j = jiffies - 1;
+		gpstart = READ_ONCE(ssp->srcu_gp_start);
+		if (time_after(j, gpstart))
+			jbase += j - gpstart;
+		if (!jbase) {
+			WRITE_ONCE(ssp->srcu_n_exp_nodelay, READ_ONCE(ssp->srcu_n_exp_nodelay) + 1);
+			if (READ_ONCE(ssp->srcu_n_exp_nodelay) > SRCU_MAX_NODELAY_PHASE)
+				jbase = 1;
+		}
 	}
 	return jbase > SRCU_MAX_INTERVAL ? SRCU_MAX_INTERVAL : jbase;
 }
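[To make the "from one jiffy to two jiffies" claim concrete, here is a hedged,
user-space rework of the new age computation in srcu_get_delay(): time_after() is
modelled with the kernel's wraparound-safe idiom, the values are arbitrary, and jbase
starts at 0 as it would when an expedited grace period is pending.  A grace period
must now be at least two jiffies old before any delay is added.]

#include <stdio.h>

/* time_after(a, b): true if a is after b. */
static int time_after(unsigned long a, unsigned long b)
{
	return (long)(b - a) < 0;
}

int main(void)
{
	unsigned long gpstart = 1000;	/* arbitrary ->srcu_gp_start value */

	for (unsigned long age = 0; age <= 3; age++) {
		unsigned long now = gpstart + age;	/* current jiffies */
		unsigned long j = now - 1;
		unsigned long jbase = 0;		/* expedited GP pending */

		if (time_after(j, gpstart))
			jbase += j - gpstart;
		printf("GP age %lu jiffies -> jbase %lu (%s)\n", age, jbase,
		       jbase ? "start backing off" : "still fully expedited");
	}
	return 0;
}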