From patchwork Mon Jun 20 23:10:07 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12888510
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    Zqiang, "Paul E. McKenney"
Subject: [PATCH rcu 01/23] rcu: Dump rcuc kthread status for CPUs not reporting quiescent state
Date: Mon, 20 Jun 2022 16:10:07 -0700
Message-Id: <20220620231029.3844583-1-paulmck@kernel.org>
In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1>
References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1>

From: Zqiang

If the rcutree.use_softirq kernel boot parameter is disabled, then it
is possible that an RCU CPU stall is due to the rcuc kthreads being
starved of CPU time.  There is currently no easy way to infer this from
the RCU CPU stall warning output.  This commit therefore adds a string
of the form " rcuc=%ld jiffies(starved)" to a given CPU's output if the
corresponding rcuc kthread has been starved for more than two seconds.

[ paulmck: Eliminate extraneous space characters. ]

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree_stall.h | 49 ++++++++++++++++++-----------------------
 1 file changed, 21 insertions(+), 28 deletions(-)

diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
index 4995c078cff98..3556637768fd5 100644
--- a/kernel/rcu/tree_stall.h
+++ b/kernel/rcu/tree_stall.h
@@ -409,7 +409,19 @@ static bool rcu_is_gp_kthread_starving(unsigned long *jp)
 
 static bool rcu_is_rcuc_kthread_starving(struct rcu_data *rdp, unsigned long *jp)
 {
-	unsigned long j = jiffies - READ_ONCE(rdp->rcuc_activity);
+	int cpu;
+	struct task_struct *rcuc;
+	unsigned long j;
+
+	rcuc = rdp->rcu_cpu_kthread_task;
+	if (!rcuc)
+		return false;
+
+	cpu = task_cpu(rcuc);
+	if (cpu_is_offline(cpu) || idle_cpu(cpu))
+		return false;
+
+	j = jiffies - READ_ONCE(rdp->rcuc_activity);
 
 	if (jp)
 		*jp = j;
@@ -434,6 +446,9 @@ static void print_cpu_stall_info(int cpu)
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	char *ticks_title;
 	unsigned long ticks_value;
+	bool rcuc_starved;
+	unsigned long j;
+	char buf[32];
 
 	/*
 	 * We could be printing a lot while holding a spinlock.  Avoid
@@ -451,7 +466,10 @@ static void print_cpu_stall_info(int cpu)
 	delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
 	falsepositive = rcu_is_gp_kthread_starving(NULL) &&
 			rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp));
-	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld %s\n",
+	rcuc_starved = rcu_is_rcuc_kthread_starving(rdp, &j);
+	if (rcuc_starved)
+		sprintf(buf, " rcuc=%ld jiffies(starved)", j);
+	pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld%s%s\n",
 	       cpu,
 	       "O."[!!cpu_online(cpu)],
 	       "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)],
@@ -464,32 +482,10 @@ static void print_cpu_stall_info(int cpu)
 	       rdp->dynticks_nesting, rdp->dynticks_nmi_nesting,
 	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
 	       data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
+	       rcuc_starved ? buf : "",
 	       falsepositive ? " (false positive?)" : "");
 }
 
-static void rcuc_kthread_dump(struct rcu_data *rdp)
-{
-	int cpu;
-	unsigned long j;
-	struct task_struct *rcuc;
-
-	rcuc = rdp->rcu_cpu_kthread_task;
-	if (!rcuc)
-		return;
-
-	cpu = task_cpu(rcuc);
-	if (cpu_is_offline(cpu) || idle_cpu(cpu))
-		return;
-
-	if (!rcu_is_rcuc_kthread_starving(rdp, &j))
-		return;
-
-	pr_err("%s kthread starved for %ld jiffies\n", rcuc->comm, j);
-	sched_show_task(rcuc);
-	if (!trigger_single_cpu_backtrace(cpu))
-		dump_cpu_task(cpu);
-}
-
 /* Complain about starvation of grace-period kthread. */
 static void rcu_check_gp_kthread_starvation(void)
 {
@@ -662,9 +658,6 @@ static void print_cpu_stall(unsigned long gps)
 	rcu_check_gp_kthread_expired_fqs_timer();
 	rcu_check_gp_kthread_starvation();
 
-	if (!use_softirq)
-		rcuc_kthread_dump(rdp);
-
 	rcu_dump_cpu_stacks();
 
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
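As an illustration (not part of the patch), a per-CPU stall line emitted
by the updated pr_err() format above might look as follows; every
numeric value here is made up:

	3-...!: (20995 ticks this GP) idle=806/1/0x4000000000000002 softirq=99/99 fqs=0 rcuc=21003 jiffies(starved)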
McKenney" X-Patchwork-Id: 12888512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1364CCA47C for ; Mon, 20 Jun 2022 23:12:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347174AbiFTXMN (ORCPT ); Mon, 20 Jun 2022 19:12:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346790AbiFTXLt (ORCPT ); Mon, 20 Jun 2022 19:11:49 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7464626135; Mon, 20 Jun 2022 16:10:34 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E7455B8125A; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 926A6C341C4; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766631; bh=ZVejpdvjiEzycawhaOOnxLRps1g1Vw9+JCtNa1JMx9I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X3roUHt24laqwZmqI1xvHPeT3YwiBpsNq4jG28Ba1+/keE/dLW+96AKw4BsmQHRQe LMXD6KkuxUnw5CE0QcQBG4aEtbsFSnVhAazm0QfTx9AxoZqE4mHvpolY0Ju8ufzIGX tthP8HomN97pe3BNS5xxlY1X8gUEt3R4APt+J27FQumgXdSGnNCnEgS8CX3NcOsQi0 AHwQlzSMDPhiuTLpRUodvYI07Ply3BooyGIw1KLfR5ol9SPaUCTE24j6BCpA3/1TjZ NkceKrA2EvvdDjWlZc+ikMCO2l0CCtYc+qOr18j9kvLODW9BhxCyBUwXN6xgVSNd0n aAHx2jusI3i8Q== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 4CF405C05C8; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" , Jiri Olsa , Alexei Starovoitov , Andrii Nakryiko , Yonghong Song Subject: [PATCH rcu 02/23] rcu: Apply noinstr to rcu_idle_enter() and rcu_idle_exit() Date: Mon, 20 Jun 2022 16:10:08 -0700 Message-Id: <20220620231029.3844583-2-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org This commit applies the "noinstr" tag to the rcu_idle_enter() and rcu_idle_exit() functions, which are invoked from portions of the idle loop that cannot be instrumented. These tags require reworking the rcu_eqs_enter() and rcu_eqs_exit() functions that these two functions invoke in order to cause them to use normal assertions rather than lockdep. In addition, within rcu_idle_exit(), the raw versions of local_irq_save() and local_irq_restore() are used, again to avoid issues with lockdep in uninstrumented code. This patch is based in part on an earlier patch by Jiri Olsa, discussions with Peter Zijlstra and Frederic Weisbecker, earlier changes by Thomas Gleixner, and off-list discussions with Yonghong Song. Link: https://lore.kernel.org/lkml/20220515203653.4039075-1-jolsa@kernel.org/ Reported-by: Jiri Olsa Reported-by: Alexei Starovoitov Reported-by: Andrii Nakryiko Signed-off-by: Paul E. 
McKenney Reviewed-by: Yonghong Song --- kernel/rcu/tree.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index c25ba442044a6..9a5edab5558c9 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -631,8 +631,8 @@ static noinstr void rcu_eqs_enter(bool user) return; } - lockdep_assert_irqs_disabled(); instrumentation_begin(); + lockdep_assert_irqs_disabled(); trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, atomic_read(&rdp->dynticks)); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); rcu_preempt_deferred_qs(current); @@ -659,9 +659,9 @@ static noinstr void rcu_eqs_enter(bool user) * If you add or remove a call to rcu_idle_enter(), be sure to test with * CONFIG_RCU_EQS_DEBUG=y. */ -void rcu_idle_enter(void) +void noinstr rcu_idle_enter(void) { - lockdep_assert_irqs_disabled(); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); rcu_eqs_enter(false); } EXPORT_SYMBOL_GPL(rcu_idle_enter); @@ -861,7 +861,7 @@ static void noinstr rcu_eqs_exit(bool user) struct rcu_data *rdp; long oldval; - lockdep_assert_irqs_disabled(); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); rdp = this_cpu_ptr(&rcu_data); oldval = rdp->dynticks_nesting; WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); @@ -896,13 +896,13 @@ static void noinstr rcu_eqs_exit(bool user) * If you add or remove a call to rcu_idle_exit(), be sure to test with * CONFIG_RCU_EQS_DEBUG=y. */ -void rcu_idle_exit(void) +void noinstr rcu_idle_exit(void) { unsigned long flags; - local_irq_save(flags); + raw_local_irq_save(flags); rcu_eqs_exit(false); - local_irq_restore(flags); + raw_local_irq_restore(flags); } EXPORT_SYMBOL_GPL(rcu_idle_exit); From patchwork Mon Jun 20 23:10:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
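For readers unfamiliar with the convention, the pattern this patch
applies can be sketched as follows.  This is a minimal illustrative
example, not kernel code; my_idle_exit() and my_eqs_exit() are
hypothetical stand-ins:

	/* noinstr: no tracepoints, no lockdep, no compiler instrumentation. */
	void noinstr my_idle_exit(void)
	{
		unsigned long flags;

		/*
		 * The raw_ variants skip the tracing/lockdep hooks, which
		 * must not run while RCU is not yet watching this CPU.
		 */
		raw_local_irq_save(flags);
		my_eqs_exit();		/* hypothetical; may start RCU watching */
		raw_local_irq_restore(flags);
	}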
McKenney" X-Patchwork-Id: 12888509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CBECC43334 for ; Mon, 20 Jun 2022 23:12:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346649AbiFTXMK (ORCPT ); Mon, 20 Jun 2022 19:12:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54796 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346024AbiFTXLt (ORCPT ); Mon, 20 Jun 2022 19:11:49 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C4F62611C; Mon, 20 Jun 2022 16:10:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 3121B614FC; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8EBDBC3411B; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766631; bh=exndd9a3xBM686KDgEhraoh1YVxjbXSDdpSHqgGxhoQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=l7BeZAP4o1DL1v4+o1rtHvllFGM9RNht/gJ/P3dIICurJOIRYqIDxfITdkibDkv3S NuV9qSwE1MI/2L1uUQSxP4LoSbe5N4vcALefAKjYpptgAykvYNnNgBgdwjK8pk/VgR l1L8rGLhrF+9XsqM+xnbAjV87Didsqi3X1MAT3c6PqG1RuRe1voHb3yBF4MmZ1qyPZ 4Z2f/RYxytKwJBKly7kEwpJ9PtLYLj2PKfasRa3fx5gvrj5JpLZEYwu4StS2qenuyV rdNQN4JoM46k6hnz5N2x4w8VchDAztXbuuCG86WSz772ncLIrJXJ+NzkfDsCS+1DgG pgpqQhFvSB6+g== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 4FBF25C0A15; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Nicolas Saenz Julienne , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 03/23] context_tracking: Remove unused context_tracking_in_user() Date: Mon, 20 Jun 2022 16:10:09 -0700 Message-Id: <20220620231029.3844583-3-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker This function is not used and CT_WARN_ON() coupled with ct_state() is the preferred way to assert context tracking state values. Reported-by: Nicolas Saenz Julienne Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. 
McKenney --- include/linux/context_tracking_state.h | 5 ----- 1 file changed, 5 deletions(-) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index ae1e63e269474..edc7b46376a6b 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -41,12 +41,7 @@ static inline bool context_tracking_enabled_this_cpu(void) return context_tracking_enabled() && __this_cpu_read(context_tracking.active); } -static __always_inline bool context_tracking_in_user(void) -{ - return __this_cpu_read(context_tracking.state) == CONTEXT_USER; -} #else -static __always_inline bool context_tracking_in_user(void) { return false; } static __always_inline bool context_tracking_enabled(void) { return false; } static __always_inline bool context_tracking_enabled_cpu(int cpu) { return false; } static __always_inline bool context_tracking_enabled_this_cpu(void) { return false; } From patchwork Mon Jun 20 23:10:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F673CCA47C for ; Mon, 20 Jun 2022 23:12:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346582AbiFTXMJ (ORCPT ); Mon, 20 Jun 2022 19:12:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55156 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245682AbiFTXLt (ORCPT ); Mon, 20 Jun 2022 19:11:49 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C5BE2611E; Mon, 20 Jun 2022 16:10:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4CF40614DF; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A2732C341C5; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766631; bh=jyKzInLCGfGE6kjpqCH9px1pTWGwep4tsb1DYTRrcV4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=K4GMK53BeskJ2T2AJmQqaLY15iKZk3aH+9z0mrrBe+r/JQD9wyF/C1c7bx5oBrAEv nzLTkAn762PP/OTdbXyEs+MMrxCBMil2aZXBWtt6VCGiSXrJWxwa2YmzYuZ1+KldLK g2Q4Qsa+SCCd39aoIC9HS1+BmflQFe/GfzXKCks93EaQG8a8XhJWildU9HiTmJYbFR EE8uZM11Df9W/IE+ZTpW9Yd0TN+NQHBOekQNkabnTHIJuOQarQaWutsobZUvfXKP2Y Fxta4A+PfQ3J9fHMGmKMgHT72UJHucPoKcpH14w4yB7eMzJOnwjprqwHHP3mwNrRwf H3uLBVl94Bp2Q== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 51E8B5C0A33; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . 
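For reference, the preferred assertion style named above looks like
this at a call site (illustrative usage of the existing CT_WARN_ON()
and ct_state() helpers):

	/* Complain if this CPU is not tracked as running in the kernel. */
	CT_WARN_ON(ct_state() != CONTEXT_KERNEL);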
McKenney" Subject: [PATCH rcu 04/23] context_tracking: Add a note about noinstr VS unsafe context tracking functions Date: Mon, 20 Jun 2022 16:10:10 -0700 Message-Id: <20220620231029.3844583-4-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Some context tracking functions enter or exit into/from RCU idle mode while using trace-able and lockdep-aware IRQs (un-)masking. As a result those functions can't get tagged as noinstr. This is unlikely to be fixed since these are obsolete APIs. Drop a note about this matter. [ paulmck: Apply Peter Zijlstra feedback. ] Reported-by: Peter Zijlstra Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- kernel/context_tracking.c | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 36a98c48aedc7..3082332f64765 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -103,6 +103,15 @@ void noinstr __context_tracking_enter(enum ctx_state state) } EXPORT_SYMBOL_GPL(__context_tracking_enter); +/* + * OBSOLETE: + * This function should be noinstr but the below local_irq_restore() is + * unsafe because it involves illegal RCU uses through tracing and lockdep. + * This is unlikely to be fixed as this function is obsolete. The preferred + * way is to call __context_tracking_enter() through user_enter_irqoff() + * or context_tracking_guest_enter(). It should be the arch entry code + * responsibility to call into context tracking with IRQs disabled. + */ void context_tracking_enter(enum ctx_state state) { unsigned long flags; @@ -125,6 +134,14 @@ void context_tracking_enter(enum ctx_state state) NOKPROBE_SYMBOL(context_tracking_enter); EXPORT_SYMBOL_GPL(context_tracking_enter); +/* + * OBSOLETE: + * This function should be noinstr but it unsafely calls local_irq_restore(), + * involving illegal RCU uses through tracing and lockdep. + * This is unlikely to be fixed as this function is obsolete. The preferred + * way is to call user_enter_irqoff(). It should be the arch entry code + * responsibility to call into context tracking with IRQs disabled. + */ void context_tracking_user_enter(void) { user_enter(); @@ -168,6 +185,15 @@ void noinstr __context_tracking_exit(enum ctx_state state) } EXPORT_SYMBOL_GPL(__context_tracking_exit); +/* + * OBSOLETE: + * This function should be noinstr but the below local_irq_save() is + * unsafe because it involves illegal RCU uses through tracing and lockdep. + * This is unlikely to be fixed as this function is obsolete. The preferred + * way is to call __context_tracking_exit() through user_exit_irqoff() + * or context_tracking_guest_exit(). It should be the arch entry code + * responsibility to call into context tracking with IRQs disabled. 
+ */ void context_tracking_exit(enum ctx_state state) { unsigned long flags; @@ -182,6 +208,14 @@ void context_tracking_exit(enum ctx_state state) NOKPROBE_SYMBOL(context_tracking_exit); EXPORT_SYMBOL_GPL(context_tracking_exit); +/* + * OBSOLETE: + * This function should be noinstr but it unsafely calls local_irq_save(), + * involving illegal RCU uses through tracing and lockdep. This is unlikely + * to be fixed as this function is obsolete. The preferred way is to call + * user_exit_irqoff(). It should be the arch entry code responsibility to + * call into context tracking with IRQs disabled. + */ void context_tracking_user_exit(void) { user_exit(); From patchwork Mon Jun 20 23:10:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D131CCA480 for ; Mon, 20 Jun 2022 23:12:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244103AbiFTXMO (ORCPT ); Mon, 20 Jun 2022 19:12:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346811AbiFTXLu (ORCPT ); Mon, 20 Jun 2022 19:11:50 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7218026134; Mon, 20 Jun 2022 16:10:34 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 0D9B1B8164C; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A5C4AC341C8; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766631; bh=sxrAJS/7NlhVkYTsvI/ZDDsFLvW3fu6p2vu9WPNdzew=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LxyDGn7ChqCWdqGwj0pTcK3XAWKGNNujtev3jSsoW0PRsFw3qSGI7lVfJKwe0mLc5 Xq6R1u4tsMwQZPASEL7Ll4xh0PIJJbNJknI1OEYUTmA7NgizbmP5iZfAFi49VtMXmT +tlYkYuex6g4iSuse5lQEdFZhqis2zVR7O1+8DRBI5Lq3Rbb7IyV9O6APptCIFQbKq tZxWp5VTyNDng2F7b8nE3Ylx+iJx/fYh2lWwJz6oFocHF4fvFE8Vk0nQg2YZo2FwLK RZZdiFDdRwmQJQD+Qe6zyY416j0x+jPrj9559Lz5REQK6G5Y81zke5ceVB/YvO7exy htcrh4IBhGOgg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 541685C0ADC; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . 
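A minimal sketch of the calling convention these comments recommend,
using a hypothetical architecture entry-path function:

	/* Hypothetical arch entry path: IRQs are already disabled here. */
	static __always_inline void arch_enter_from_user_mode(void)
	{
		lockdep_assert_irqs_disabled();
		user_exit_irqoff();	/* noinstr-safe, unlike context_tracking_user_exit() */
	}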
McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 05/23] context_tracking: Rename __context_tracking_enter/exit() to __ct_user_enter/exit() Date: Mon, 20 Jun 2022 16:10:11 -0700 Message-Id: <20220620231029.3844583-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The context tracking namespace is going to expand and some new functions will require even longer names. Start shrinking the context_tracking prefix to "ct" as is already the case for some existing macros, this will make the introduction of new functions easier. Acked-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/context_tracking.h | 12 ++++++------ kernel/context_tracking.c | 20 ++++++++++---------- 2 files changed, 16 insertions(+), 16 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 7a14807c9d1a6..773035124badb 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -14,8 +14,8 @@ extern void context_tracking_cpu_set(int cpu); /* Called with interrupts disabled. */ -extern void __context_tracking_enter(enum ctx_state state); -extern void __context_tracking_exit(enum ctx_state state); +extern void __ct_user_enter(enum ctx_state state); +extern void __ct_user_exit(enum ctx_state state); extern void context_tracking_enter(enum ctx_state state); extern void context_tracking_exit(enum ctx_state state); @@ -38,13 +38,13 @@ static inline void user_exit(void) static __always_inline void user_enter_irqoff(void) { if (context_tracking_enabled()) - __context_tracking_enter(CONTEXT_USER); + __ct_user_enter(CONTEXT_USER); } static __always_inline void user_exit_irqoff(void) { if (context_tracking_enabled()) - __context_tracking_exit(CONTEXT_USER); + __ct_user_exit(CONTEXT_USER); } static inline enum ctx_state exception_enter(void) @@ -74,7 +74,7 @@ static inline void exception_exit(enum ctx_state prev_ctx) static __always_inline bool context_tracking_guest_enter(void) { if (context_tracking_enabled()) - __context_tracking_enter(CONTEXT_GUEST); + __ct_user_enter(CONTEXT_GUEST); return context_tracking_enabled_this_cpu(); } @@ -82,7 +82,7 @@ static __always_inline bool context_tracking_guest_enter(void) static __always_inline void context_tracking_guest_exit(void) { if (context_tracking_enabled()) - __context_tracking_exit(CONTEXT_GUEST); + __ct_user_exit(CONTEXT_GUEST); } /** diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 3082332f64765..e36395598095e 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -51,15 +51,15 @@ static __always_inline void context_tracking_recursion_exit(void) } /** - * context_tracking_enter - Inform the context tracking that the CPU is going - * enter user or guest space mode. 
+ * __ct_user_enter - Inform the context tracking that the CPU is going + * to enter user or guest space mode. * * This function must be called right before we switch from the kernel * to user or guest space, when it's guaranteed the remaining kernel * instructions to execute won't use any RCU read side critical section * because this function sets RCU in extended quiescent state. */ -void noinstr __context_tracking_enter(enum ctx_state state) +void noinstr __ct_user_enter(enum ctx_state state) { /* Kernel threads aren't supposed to go to userspace */ WARN_ON_ONCE(!current->mm); @@ -101,7 +101,7 @@ void noinstr __context_tracking_enter(enum ctx_state state) } context_tracking_recursion_exit(); } -EXPORT_SYMBOL_GPL(__context_tracking_enter); +EXPORT_SYMBOL_GPL(__ct_user_enter); /* * OBSOLETE: @@ -128,7 +128,7 @@ void context_tracking_enter(enum ctx_state state) return; local_irq_save(flags); - __context_tracking_enter(state); + __ct_user_enter(state); local_irq_restore(flags); } NOKPROBE_SYMBOL(context_tracking_enter); @@ -149,8 +149,8 @@ void context_tracking_user_enter(void) NOKPROBE_SYMBOL(context_tracking_user_enter); /** - * context_tracking_exit - Inform the context tracking that the CPU is - * exiting user or guest mode and entering the kernel. + * __ct_user_exit - Inform the context tracking that the CPU is + * exiting user or guest mode and entering the kernel. * * This function must be called after we entered the kernel from user or * guest space before any use of RCU read side critical section. This @@ -160,7 +160,7 @@ NOKPROBE_SYMBOL(context_tracking_user_enter); * This call supports re-entrancy. This way it can be called from any exception * handler without needing to know if we came from userspace or not. */ -void noinstr __context_tracking_exit(enum ctx_state state) +void noinstr __ct_user_exit(enum ctx_state state) { if (!context_tracking_recursion_enter()) return; @@ -183,7 +183,7 @@ void noinstr __context_tracking_exit(enum ctx_state state) } context_tracking_recursion_exit(); } -EXPORT_SYMBOL_GPL(__context_tracking_exit); +EXPORT_SYMBOL_GPL(__ct_user_exit); /* * OBSOLETE: @@ -202,7 +202,7 @@ void context_tracking_exit(enum ctx_state state) return; local_irq_save(flags); - __context_tracking_exit(state); + __ct_user_exit(state); local_irq_restore(flags); } NOKPROBE_SYMBOL(context_tracking_exit); From patchwork Mon Jun 20 23:10:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
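After this rename, the guest-transition helpers shown in the hunk above
funnel into the new names.  An illustrative (non-authoritative) sketch
of a virtualization loop, where run_the_guest() is hypothetical:

	if (context_tracking_guest_enter())	/* -> __ct_user_enter(CONTEXT_GUEST) */
		/* user/guest tracking is active on this CPU */;
	run_the_guest();			/* hypothetical */
	context_tracking_guest_exit();		/* -> __ct_user_exit(CONTEXT_GUEST) */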
McKenney" X-Patchwork-Id: 12888511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EDF7C43334 for ; Mon, 20 Jun 2022 23:12:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244004AbiFTXML (ORCPT ); Mon, 20 Jun 2022 19:12:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346758AbiFTXLt (ORCPT ); Mon, 20 Jun 2022 19:11:49 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E32B26124; Mon, 20 Jun 2022 16:10:33 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id BDB7661500; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E9E6EC341CE; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=KtHjEpf1/cXjxzwYm5DHi+7YCN/waSSuMPLxua1t+1A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZQbA+gwBjoSE4v4P3VausRMgyZCW++aiavo3foW0jVxOFPGMZl68MBf2hMB8kLfro Qi3yMukrzKYhkHUooecXgd2z7/gh+JYCupzRSAELwyZD3AE4AIOS+nygThJ6hRq8Ml Pv/YplGFxanA9acxji++eyyK6YrH41LTbiwWDDDje76cYzGwtL8PTbjYbuy7+ccFE1 kt+GUTt+/DF2dgWB7TD7eJVWgxoUmRZV9sipA2hVX/ncLpUewmvZI7/23+4fU8Kdtj YNIaSOLnRdyC1SJMOLYwVV1GdJ7qhaEW6UP30QRI5iMqW2ZAiBZPEyyfba7Zz/PRQI vp9iIsvzSxOYA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 563015C0B06; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 06/23] context_tracking: Rename context_tracking_user_enter/exit() to user_enter/exit_callable() Date: Mon, 20 Jun 2022 16:10:12 -0700 Message-Id: <20220620231029.3844583-6-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker context_tracking_user_enter() and context_tracking_user_exit() are ASM callable versions of user_enter() and user_exit() for architectures that didn't manage to check the context tracking static key from ASM. Change those function names to better reflect their purpose. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- arch/arm/kernel/entry-header.S | 8 ++++---- arch/csky/kernel/entry.S | 4 ++-- arch/riscv/kernel/entry.S | 6 +++--- include/linux/context_tracking.h | 4 ++-- kernel/context_tracking.c | 28 +++++++++++++++++----------- 5 files changed, 28 insertions(+), 22 deletions(-) diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index 5865621bf6912..95def2b38d1ca 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -369,10 +369,10 @@ ALT_UP_B(.L1_\@) #ifdef CONFIG_CONTEXT_TRACKING .if \save stmdb sp!, {r0-r3, ip, lr} - bl context_tracking_user_exit + bl user_exit_callable ldmia sp!, {r0-r3, ip, lr} .else - bl context_tracking_user_exit + bl user_exit_callable .endif #endif .endm @@ -381,10 +381,10 @@ ALT_UP_B(.L1_\@) #ifdef CONFIG_CONTEXT_TRACKING .if \save stmdb sp!, {r0-r3, ip, lr} - bl context_tracking_user_enter + bl user_enter_callable ldmia sp!, {r0-r3, ip, lr} .else - bl context_tracking_user_enter + bl user_enter_callable .endif #endif .endm diff --git a/arch/csky/kernel/entry.S b/arch/csky/kernel/entry.S index a4ababf25e243..bc734d17c16f4 100644 --- a/arch/csky/kernel/entry.S +++ b/arch/csky/kernel/entry.S @@ -23,7 +23,7 @@ mfcr a0, epsr btsti a0, 31 bt 1f - jbsr context_tracking_user_exit + jbsr user_exit_callable ldw a0, (sp, LSAVE_A0) ldw a1, (sp, LSAVE_A1) ldw a2, (sp, LSAVE_A2) @@ -160,7 +160,7 @@ ret_from_exception: cmpnei r10, 0 bt exit_work #ifdef CONFIG_CONTEXT_TRACKING - jbsr context_tracking_user_enter + jbsr user_enter_callable #endif 1: #ifdef CONFIG_PREEMPTION diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S index 2e5b88ca11ce1..12f6bba57e335 100644 --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -112,11 +112,11 @@ _save_context: #endif #ifdef CONFIG_CONTEXT_TRACKING - /* If previous state is in user mode, call context_tracking_user_exit. */ + /* If previous state is in user mode, call user_exit_callable(). */ li a0, SR_PP and a0, s1, a0 bnez a0, skip_context_tracking - call context_tracking_user_exit + call user_exit_callable skip_context_tracking: #endif @@ -270,7 +270,7 @@ resume_userspace: bnez s1, work_pending #ifdef CONFIG_CONTEXT_TRACKING - call context_tracking_user_enter + call user_enter_callable #endif /* Save unwound kernel stack pointer in thread_info */ diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 773035124badb..69532cd18f72c 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -19,8 +19,8 @@ extern void __ct_user_exit(enum ctx_state state); extern void context_tracking_enter(enum ctx_state state); extern void context_tracking_exit(enum ctx_state state); -extern void context_tracking_user_enter(void); -extern void context_tracking_user_exit(void); +extern void user_enter_callable(void); +extern void user_exit_callable(void); static inline void user_enter(void) { diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index e36395598095e..9d4a872ca92c4 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -134,19 +134,22 @@ void context_tracking_enter(enum ctx_state state) NOKPROBE_SYMBOL(context_tracking_enter); EXPORT_SYMBOL_GPL(context_tracking_enter); -/* - * OBSOLETE: - * This function should be noinstr but it unsafely calls local_irq_restore(), - * involving illegal RCU uses through tracing and lockdep. 
+/** + * user_enter_callable() - Unfortunate ASM callable version of user_enter() for + * archs that didn't manage to check the context tracking + * static key from low level code. + * + * This OBSOLETE function should be noinstr but it unsafely calls + * local_irq_restore(), involving illegal RCU uses through tracing and lockdep. * This is unlikely to be fixed as this function is obsolete. The preferred * way is to call user_enter_irqoff(). It should be the arch entry code * responsibility to call into context tracking with IRQs disabled. */ -void context_tracking_user_enter(void) +void user_enter_callable(void) { user_enter(); } -NOKPROBE_SYMBOL(context_tracking_user_enter); +NOKPROBE_SYMBOL(user_enter_callable); /** * __ct_user_exit - Inform the context tracking that the CPU is @@ -208,19 +211,22 @@ void context_tracking_exit(enum ctx_state state) NOKPROBE_SYMBOL(context_tracking_exit); EXPORT_SYMBOL_GPL(context_tracking_exit); -/* - * OBSOLETE: - * This function should be noinstr but it unsafely calls local_irq_save(), +/** + * user_exit_callable() - Unfortunate ASM callable version of user_exit() for + * archs that didn't manage to check the context tracking + * static key from low level code. + * + * This OBSOLETE function should be noinstr but it unsafely calls local_irq_save(), * involving illegal RCU uses through tracing and lockdep. This is unlikely * to be fixed as this function is obsolete. The preferred way is to call * user_exit_irqoff(). It should be the arch entry code responsibility to * call into context tracking with IRQs disabled. */ -void context_tracking_user_exit(void) +void user_exit_callable(void) { user_exit(); } -NOKPROBE_SYMBOL(context_tracking_user_exit); +NOKPROBE_SYMBOL(user_exit_callable); void __init context_tracking_cpu_set(int cpu) { From patchwork Mon Jun 20 23:10:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
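The distinction captured by the new names can be summarized as follows;
the C line shows the preferred inline helper, and the ARM assembly line
is the style quoted from the hunk above:

	/* C entry code: can test the static key inline, zero cost when off. */
	user_enter_irqoff();

	@ ASM entry code: cannot test the static key, so it makes an
	@ unconditional out-of-line call:
	bl	user_enter_callable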
McKenney" X-Patchwork-Id: 12888514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6494FC433EF for ; Mon, 20 Jun 2022 23:12:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343666AbiFTXMQ (ORCPT ); Mon, 20 Jun 2022 19:12:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346822AbiFTXLu (ORCPT ); Mon, 20 Jun 2022 19:11:50 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 38EDF2613E; Mon, 20 Jun 2022 16:10:36 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 98B0EB81213; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC8A6C341D4; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=23nB7ajTFW78bIbwshi6o8yum8ua3pcV5XlbBQQNSC4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qaUlGOjzgIO5d1ozOafOsuWFdNy+cb+mdKnd72PXWcAI2nc9Mwldtf+O2SaqTzk2R MkyhS4AbCe1bISJgR0DUvPIjdPyl0fpS731dd9XTddxJqUadpQ+R+n1CLa6bPmnj66 iiZ8K4duTD150J1ywxxEEBNSTt1w25GQe76Wi/L/AfFSPhyRyi0SxYBtjSJsPL6qSS PBNiZI0QOc95vJCXGrIUfzj+3nIt+HuqE0Q4s1wmHaiBmr7bwc0TplomoBQ3z8E6PC ySw57rLlr06kvpX8RwynNSJ4btunZTRDLisM573R41K4IquWtsS756R+lpoed/gakr EAPldSEl6JFCg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 588FD5C0BCC; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 07/23] context_tracking: Rename context_tracking_enter/exit() to ct_user_enter/exit() Date: Mon, 20 Jun 2022 16:10:13 -0700 Message-Id: <20220620231029.3844583-7-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker context_tracking_enter() and context_tracking_exit() have confusing names that don't explain the fact they are referring to user/guest state. Use more self-explanatory names and shrink to the new context tracking prefix instead. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- include/linux/context_tracking.h | 13 +++++++------ kernel/context_tracking.c | 12 ++++++------ 2 files changed, 13 insertions(+), 12 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 69532cd18f72c..7a5f04ae1758f 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -17,21 +17,22 @@ extern void context_tracking_cpu_set(int cpu); extern void __ct_user_enter(enum ctx_state state); extern void __ct_user_exit(enum ctx_state state); -extern void context_tracking_enter(enum ctx_state state); -extern void context_tracking_exit(enum ctx_state state); +extern void ct_user_enter(enum ctx_state state); +extern void ct_user_exit(enum ctx_state state); + extern void user_enter_callable(void); extern void user_exit_callable(void); static inline void user_enter(void) { if (context_tracking_enabled()) - context_tracking_enter(CONTEXT_USER); + ct_user_enter(CONTEXT_USER); } static inline void user_exit(void) { if (context_tracking_enabled()) - context_tracking_exit(CONTEXT_USER); + ct_user_exit(CONTEXT_USER); } /* Called with interrupts disabled. */ @@ -57,7 +58,7 @@ static inline enum ctx_state exception_enter(void) prev_ctx = this_cpu_read(context_tracking.state); if (prev_ctx != CONTEXT_KERNEL) - context_tracking_exit(prev_ctx); + ct_user_exit(prev_ctx); return prev_ctx; } @@ -67,7 +68,7 @@ static inline void exception_exit(enum ctx_state prev_ctx) if (!IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK) && context_tracking_enabled()) { if (prev_ctx != CONTEXT_KERNEL) - context_tracking_enter(prev_ctx); + ct_user_enter(prev_ctx); } } diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 9d4a872ca92c4..87454e3515546 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -112,7 +112,7 @@ EXPORT_SYMBOL_GPL(__ct_user_enter); * or context_tracking_guest_enter(). It should be the arch entry code * responsibility to call into context tracking with IRQs disabled. */ -void context_tracking_enter(enum ctx_state state) +void ct_user_enter(enum ctx_state state) { unsigned long flags; @@ -131,8 +131,8 @@ void context_tracking_enter(enum ctx_state state) __ct_user_enter(state); local_irq_restore(flags); } -NOKPROBE_SYMBOL(context_tracking_enter); -EXPORT_SYMBOL_GPL(context_tracking_enter); +NOKPROBE_SYMBOL(ct_user_enter); +EXPORT_SYMBOL_GPL(ct_user_enter); /** * user_enter_callable() - Unfortunate ASM callable version of user_enter() for @@ -197,7 +197,7 @@ EXPORT_SYMBOL_GPL(__ct_user_exit); * or context_tracking_guest_exit(). It should be the arch entry code * responsibility to call into context tracking with IRQs disabled. */ -void context_tracking_exit(enum ctx_state state) +void ct_user_exit(enum ctx_state state) { unsigned long flags; @@ -208,8 +208,8 @@ void context_tracking_exit(enum ctx_state state) __ct_user_exit(state); local_irq_restore(flags); } -NOKPROBE_SYMBOL(context_tracking_exit); -EXPORT_SYMBOL_GPL(context_tracking_exit); +NOKPROBE_SYMBOL(ct_user_exit); +EXPORT_SYMBOL_GPL(ct_user_exit); /** * user_exit_callable() - Unfortunate ASM callable version of user_exit() for From patchwork Mon Jun 20 23:10:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E16F7C433EF for ; Mon, 20 Jun 2022 23:12:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346313AbiFTXMI (ORCPT ); Mon, 20 Jun 2022 19:12:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346326AbiFTXLs (ORCPT ); Mon, 20 Jun 2022 19:11:48 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E00B26123; Mon, 20 Jun 2022 16:10:33 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C5E0E61501; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC696C341D2; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=CBtjLr8hEiHJztrqXuMM/t8/2akXKqG1GWOYWGjDjto=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FpYnt8NGPzUjDTeXDjdORCcJSvLTA57hO76FKnvZZMPaIGDKQI0kYx/9PHNRAy3la QOfpfXJcF70V2K1x2qqTlJr8i2V/GS+nodXeCTJMNDr4qRtei6vQhTysuIjQfgv4BW C2clgWrMt9ei9sazSGh+GL9HjYQhAKlkD6eGZQoevoXJXvt2i8dIQErS4wO4fGE9h5 eClv//+5c6dgFYJbLUeyYExD+Ek0kePmdpwctThmw308s6n/2kYzrDtWcrc475emof OCFQQlepKfsIsBkk74pgdo9yf44MTS2JgJmKyvzRHcNrtflijMU47u29t6JmNr1xQe k1uPN+czdcD+g== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5A7E45C0CCE; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 08/23] context_tracking: Rename context_tracking_cpu_set() to ct_cpu_track_user() Date: Mon, 20 Jun 2022 16:10:14 -0700 Message-Id: <20220620231029.3844583-8-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker context_tracking_cpu_set() is called in order to tell a CPU to track user/kernel transitions. Since context tracking is going to expand in to also track transitions from/to idle/IRQ/NMIs, the scope of this function name becomes too broad and needs to be made more specific. Also shorten the prefix to align with the new namespace. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- include/linux/context_tracking.h | 2 +- kernel/context_tracking.c | 4 ++-- kernel/time/tick-sched.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 7a5f04ae1758f..63259fece7c76 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -11,7 +11,7 @@ #ifdef CONFIG_CONTEXT_TRACKING -extern void context_tracking_cpu_set(int cpu); +extern void ct_cpu_track_user(int cpu); /* Called with interrupts disabled. */ extern void __ct_user_enter(enum ctx_state state); diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 87454e3515546..7f457a1a1b551 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -228,7 +228,7 @@ void user_exit_callable(void) } NOKPROBE_SYMBOL(user_exit_callable); -void __init context_tracking_cpu_set(int cpu) +void __init ct_cpu_track_user(int cpu) { static __initdata bool initialized = false; @@ -258,6 +258,6 @@ void __init context_tracking_init(void) int cpu; for_each_possible_cpu(cpu) - context_tracking_cpu_set(cpu); + ct_cpu_track_user(cpu); } #endif diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c index 58a11f859ac79..de192dcff8282 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -571,7 +571,7 @@ void __init tick_nohz_init(void) } for_each_cpu(cpu, tick_nohz_full_mask) - context_tracking_cpu_set(cpu); + ct_cpu_track_user(cpu); ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "kernel/nohz:predown", NULL, From patchwork Mon Jun 20 23:10:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7865C433EF for ; Mon, 20 Jun 2022 23:12:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346790AbiFTXMs (ORCPT ); Mon, 20 Jun 2022 19:12:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53676 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347048AbiFTXME (ORCPT ); Mon, 20 Jun 2022 19:12:04 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 241331D329; Mon, 20 Jun 2022 16:10:44 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id EAF37614FF; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0781C341D5; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=XyRgtSbvWle6eQPE5MOjMSTVqyJ2jrzKSkJQdZ5O6o8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JVqiBnRG+HmODhNEbcsRCQTKSU4+BRqs6o4v8vUYylnmqjXy6tJJYvTueMlIlnnnJ /ClUFh/bZdtS9/d/gPh841Xd1AJ5M5KDMPB3W00mBzdx+dSPHMUqk3vuuj7uE7LAL+ vuZFfOEB1Bh6Jh8/ZAu9acExFt6i3FO1ilPCnoE6SM6wNbsYVL+fdDL1Zrr47qeb49 mDkdabgELG75HDhaP9ZIu6KIWvn8RYbz7gG9LlZ5xnF2RNOZz1nk11Em8t5kuuwYbK HFYcv/Sr3QXnaXY5T6KhZchxRnQRMr8bVr4vMuRczyadsXN+RsdXvrNSDLgQSFNurM LYH/k8Abx4MHw== Received: by paulmck-ThinkPad-P17-Gen-1.home 
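To make the renamed function's role concrete: on a kernel booted with
the (real) nohz_full= parameter, the tick_nohz_init() loop shown above
invokes the renamed helper once for each listed CPU, so CPUs 1-3 in
this illustrative command line would each get user context tracking
enabled:

	linux ... nohz_full=1-3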
From patchwork Mon Jun 20 23:10:15 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12888517
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    Frederic Weisbecker, Peter Zijlstra, Thomas Gleixner, Neeraj Upadhyay,
    Uladzislau Rezki, Joel Fernandes, Boqun Feng, Nicolas Saenz Julienne,
    Marcelo Tosatti, Xiongfeng Wang, Yu Liao, Phil Auld, Paul Gortmaker,
    Alex Belits, "Paul E. McKenney"
Subject: [PATCH rcu 09/23] context_tracking: Split user tracking Kconfig
Date: Mon, 20 Jun 2022 16:10:15 -0700
Message-Id: <20220620231029.3844583-9-paulmck@kernel.org>
In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1>
References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1>

From: Frederic Weisbecker

Context tracking is going to be used not only to track user transitions
but also idle/IRQs/NMIs.  The user tracking part will then become a
separate feature.  Prepare Kconfig for that.

Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Neeraj Upadhyay
Cc: Uladzislau Rezki
Cc: Joel Fernandes
Cc: Boqun Feng
Cc: Nicolas Saenz Julienne
Cc: Marcelo Tosatti
Cc: Xiongfeng Wang
Cc: Yu Liao
Cc: Phil Auld
Cc: Paul Gortmaker
Cc: Alex Belits
Signed-off-by: Paul E. McKenney
---
 .../time/context-tracking/arch-support.txt  |  6 ++--
 arch/Kconfig                                |  4 +--
 arch/arm/Kconfig                            |  2 +-
 arch/arm/kernel/entry-common.S              |  4 +--
 arch/arm/kernel/entry-header.S              |  4 +--
 arch/arm64/Kconfig                          |  2 +-
 arch/csky/Kconfig                           |  2 +-
 arch/csky/kernel/entry.S                    |  4 +--
 arch/mips/Kconfig                           |  2 +-
 arch/powerpc/Kconfig                        |  2 +-
 arch/powerpc/include/asm/context_tracking.h |  2 +-
 arch/riscv/Kconfig                          |  2 +-
 arch/riscv/kernel/entry.S                   |  6 ++--
 arch/sparc/Kconfig                          |  2 +-
 arch/sparc/kernel/rtrap_64.S                |  2 +-
 arch/x86/Kconfig                            |  4 +--
 include/linux/context_tracking.h            | 12 +++----
 include/linux/context_tracking_state.h      |  4 +--
 init/Kconfig                                |  4 +--
 kernel/context_tracking.c                   |  6 +++-
 kernel/sched/core.c                         |  2 +-
 kernel/time/Kconfig                         | 31 ++++++++++++-------
 22 files changed, 61 insertions(+), 48 deletions(-)

diff --git a/Documentation/features/time/context-tracking/arch-support.txt b/Documentation/features/time/context-tracking/arch-support.txt
index c9e0a16290e68..e59071a490901 100644
--- a/Documentation/features/time/context-tracking/arch-support.txt
+++ b/Documentation/features/time/context-tracking/arch-support.txt
@@ -1,7 +1,7 @@
 #
-# Feature name:          context-tracking
-#         Kconfig:       HAVE_CONTEXT_TRACKING
-#         description:   arch supports context tracking for NO_HZ_FULL
+# Feature name:          user-context-tracking
+#         Kconfig:       HAVE_CONTEXT_TRACKING_USER
+#         description:   arch supports user context tracking for NO_HZ_FULL
 #
     -----------------------
     |         arch |status|
diff --git a/arch/Kconfig b/arch/Kconfig
index fcf9a41a4ef5b..154b7b78da093 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -774,7 +774,7 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
 	  and similar) by implementing an inline arch_within_stack_frames(),
 	  which is used by CONFIG_HARDENED_USERCOPY.
 
-config HAVE_CONTEXT_TRACKING
+config HAVE_CONTEXT_TRACKING_USER
 	bool
 	help
 	  Provide kernel/user boundaries probes necessary for subsystems
@@ -785,7 +785,7 @@ config HAVE_CONTEXT_TRACKING
 	  protected inside rcu_irq_enter/rcu_irq_exit() but preemption
 	  or signal handling on irq exit still need to be protected.
 
-config HAVE_CONTEXT_TRACKING_OFFSTACK
+config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
 	bool
 	help
 	  Architecture neither relies on exception_enter()/exception_exit()
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7630ba9cb6ccc..9acc6aac59126 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -84,7 +84,7 @@ config ARM
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE
 	select HAVE_ARM_SMCCC if CPU_V7
 	select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_BUILDTIME_MCOUNT_SORT
 	select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL
diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 7aa3ded4af929..37a0125fc9265 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -28,7 +28,7 @@
 #include "entry-header.S"
 
 saved_psr	.req	r8
-#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
+#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING_USER)
 saved_pc	.req	r9
 #define TRACE(x...) x
 #else
@@ -38,7 +38,7 @@ saved_pc	.req	lr
 
 	.section .entry.text,"ax",%progbits
 	.align	5
-#if !(IS_ENABLED(CONFIG_TRACE_IRQFLAGS) || IS_ENABLED(CONFIG_CONTEXT_TRACKING) || \
+#if !(IS_ENABLED(CONFIG_TRACE_IRQFLAGS) || IS_ENABLED(CONFIG_CONTEXT_TRACKING_USER) || \
 	IS_ENABLED(CONFIG_DEBUG_RSEQ))
 /*
  * This is the fast syscall return path.  We do as little as possible here,
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index 95def2b38d1ca..99411fa913501 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -366,7 +366,7 @@ ALT_UP_B(.L1_\@)
  * between user and kernel mode.
  */
 	.macro ct_user_exit, save = 1
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	.if	\save
 	stmdb   sp!, {r0-r3, ip, lr}
 	bl	user_exit_callable
@@ -378,7 +378,7 @@ ALT_UP_B(.L1_\@)
 	.endm
 
 	.macro ct_user_enter, save = 1
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	.if	\save
 	stmdb   sp!, {r0-r3, ip, lr}
 	bl	user_enter_callable
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebee..7c5dd2af9ca95 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -174,7 +174,7 @@ config ARM64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_CMPXCHG_LOCAL
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 21d72b078eefc..f55ba1745f7b9 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -42,7 +42,7 @@ config CSKY
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_SECCOMP_FILTER
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
diff --git a/arch/csky/kernel/entry.S b/arch/csky/kernel/entry.S
index bc734d17c16f4..547b4cd1b24b4 100644
--- a/arch/csky/kernel/entry.S
+++ b/arch/csky/kernel/entry.S
@@ -19,7 +19,7 @@
 .endm
 
 .macro	context_tracking
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	mfcr	a0, epsr
 	btsti	a0, 31
 	bt	1f
@@ -159,7 +159,7 @@ ret_from_exception:
 	and	r10, r9
 	cmpnei	r10, 0
 	bt	exit_work
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	jbsr	user_enter_callable
 #endif
 1:
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index db09d45d59ec7..9457894db2375 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -56,7 +56,7 @@ config MIPS
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_TIF_NOHZ
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c2ce2e60c8f0f..874c8d81284ad 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -202,7 +202,7 @@ config PPC
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_CONTEXT_TRACKING if PPC64
+	select HAVE_CONTEXT_TRACKING_USER if PPC64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
diff --git a/arch/powerpc/include/asm/context_tracking.h b/arch/powerpc/include/asm/context_tracking.h
index f2682b28b0502..4b63931c49e0e 100644
--- a/arch/powerpc/include/asm/context_tracking.h
+++ b/arch/powerpc/include/asm/context_tracking.h
@@ -2,7 +2,7 @@
 #ifndef _ASM_POWERPC_CONTEXT_TRACKING_H
 #define _ASM_POWERPC_CONTEXT_TRACKING_H
 
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 #define SCHEDULE_USER bl	schedule_user
 #else
 #define SCHEDULE_USER bl	schedule
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b4..29b46f2173457 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -86,7 +86,7 @@ config RISCV
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 	select HAVE_ARCH_VMAP_STACK if MMU && 64BIT
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_EBPF_JIT if MMU
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 12f6bba57e335..b9eda3fcbd6d7 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -111,7 +111,7 @@ _save_context:
 	call	__trace_hardirqs_off
 #endif
 
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	/* If previous state is in user mode, call user_exit_callable(). */
 	li	a0, SR_PP
 	and	a0, s1, a0
@@ -176,7 +176,7 @@ handle_syscall:
 	 */
 	csrs	CSR_STATUS, SR_IE
 #endif
-#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING)
+#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING_USER)
 	/* Recover a0 - a7 for system calls */
 	REG_L	a0, PT_A0(sp)
 	REG_L	a1, PT_A1(sp)
@@ -269,7 +269,7 @@ resume_userspace:
 	andi	s1, s0, _TIF_WORK_MASK
 	bnez	s1, work_pending
 
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 	call	user_enter_callable
 #endif
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index ba449c47effd8..9232411a8821a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -71,7 +71,7 @@ config SPARC64
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_SYSCALL_TRACEPOINTS
-	select HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_TIF_NOHZ
 	select HAVE_DEBUG_KMEMLEAK
 	select IOMMU_HELPER
diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
index c5fd4b450d9b6..eef102765a7e2 100644
--- a/arch/sparc/kernel/rtrap_64.S
+++ b/arch/sparc/kernel/rtrap_64.S
@@ -15,7 +15,7 @@
 #include
 #include
 
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 # define SCHEDULE_USER schedule_user
 #else
 # define SCHEDULE_USER schedule
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index be0b95e51df66..b0a6dbbb760bc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -186,8 +186,8 @@ config X86
 	select HAVE_ASM_MODVERSIONS
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_CMPXCHG_LOCAL
-	select HAVE_CONTEXT_TRACKING if X86_64
-	select HAVE_CONTEXT_TRACKING_OFFSTACK if HAVE_CONTEXT_TRACKING
+	select HAVE_CONTEXT_TRACKING_USER if X86_64
+	select HAVE_CONTEXT_TRACKING_USER_OFFSTACK if HAVE_CONTEXT_TRACKING_USER
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_OBJTOOL_MCOUNT if HAVE_OBJTOOL
 	select HAVE_BUILDTIME_MCOUNT_SORT
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index 63259fece7c76..e35ae66b4794e 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -10,7 +10,7 @@
 
 #include
 
-#ifdef CONFIG_CONTEXT_TRACKING
+#ifdef CONFIG_CONTEXT_TRACKING_USER
 extern void ct_cpu_track_user(int cpu);
 
 /* Called with interrupts disabled.
*/ @@ -52,7 +52,7 @@ static inline enum ctx_state exception_enter(void) { enum ctx_state prev_ctx; - if (IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK) || + if (IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK) || !context_tracking_enabled()) return 0; @@ -65,7 +65,7 @@ static inline enum ctx_state exception_enter(void) static inline void exception_exit(enum ctx_state prev_ctx) { - if (!IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK) && + if (!IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK) && context_tracking_enabled()) { if (prev_ctx != CONTEXT_KERNEL) ct_user_enter(prev_ctx); @@ -109,14 +109,14 @@ static inline enum ctx_state ct_state(void) { return CONTEXT_DISABLED; } static __always_inline bool context_tracking_guest_enter(void) { return false; } static inline void context_tracking_guest_exit(void) { } -#endif /* !CONFIG_CONTEXT_TRACKING */ +#endif /* !CONFIG_CONTEXT_TRACKING_USER */ #define CT_WARN_ON(cond) WARN_ON(context_tracking_enabled() && (cond)) -#ifdef CONFIG_CONTEXT_TRACKING_FORCE +#ifdef CONFIG_CONTEXT_TRACKING_USER_FORCE extern void context_tracking_init(void); #else static inline void context_tracking_init(void) { } -#endif /* CONFIG_CONTEXT_TRACKING_FORCE */ +#endif /* CONFIG_CONTEXT_TRACKING_USER_FORCE */ #endif diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index edc7b46376a6b..2b46afe105a96 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -22,7 +22,7 @@ struct context_tracking { } state; }; -#ifdef CONFIG_CONTEXT_TRACKING +#ifdef CONFIG_CONTEXT_TRACKING_USER extern struct static_key_false context_tracking_key; DECLARE_PER_CPU(struct context_tracking, context_tracking); @@ -45,6 +45,6 @@ static inline bool context_tracking_enabled_this_cpu(void) static __always_inline bool context_tracking_enabled(void) { return false; } static __always_inline bool context_tracking_enabled_cpu(int cpu) { return false; } static __always_inline bool context_tracking_enabled_this_cpu(void) { return false; } -#endif /* CONFIG_CONTEXT_TRACKING */ +#endif /* CONFIG_CONTEXT_TRACKING_USER */ #endif diff --git a/init/Kconfig b/init/Kconfig index c7900e8975f18..06454d19e2f0a 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -494,11 +494,11 @@ config VIRT_CPU_ACCOUNTING_NATIVE config VIRT_CPU_ACCOUNTING_GEN bool "Full dynticks CPU time accounting" - depends on HAVE_CONTEXT_TRACKING + depends on HAVE_CONTEXT_TRACKING_USER depends on HAVE_VIRT_CPU_ACCOUNTING_GEN depends on GENERIC_CLOCKEVENTS select VIRT_CPU_ACCOUNTING - select CONTEXT_TRACKING + select CONTEXT_TRACKING_USER help Select this option to enable task and CPU time accounting on full dynticks systems. 
This accounting is implemented by watching every diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 7f457a1a1b551..a6997daf27d10 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -22,6 +22,8 @@ #include #include +#ifdef CONFIG_CONTEXT_TRACKING_USER + #define CREATE_TRACE_POINTS #include @@ -252,7 +254,7 @@ void __init ct_cpu_track_user(int cpu) initialized = true; } -#ifdef CONFIG_CONTEXT_TRACKING_FORCE +#ifdef CONFIG_CONTEXT_TRACKING_USER_FORCE void __init context_tracking_init(void) { int cpu; @@ -261,3 +263,5 @@ void __init context_tracking_init(void) ct_cpu_track_user(cpu); } #endif + +#endif /* #ifdef CONFIG_CONTEXT_TRACKING_USER */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index da0bf6fe9ecdc..883167a57bf9d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6559,7 +6559,7 @@ void __sched schedule_idle(void) } while (need_resched()); } -#if defined(CONFIG_CONTEXT_TRACKING) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK) +#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK) asmlinkage __visible void __sched schedule_user(void) { /* diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig index 27b7868b5c30d..41f99bcfe9e66 100644 --- a/kernel/time/Kconfig +++ b/kernel/time/Kconfig @@ -73,6 +73,9 @@ config TIME_KUNIT_TEST If unsure, say N. +config CONTEXT_TRACKING + bool + if GENERIC_CLOCKEVENTS menu "Timers subsystem" @@ -111,7 +114,7 @@ config NO_HZ_FULL # NO_HZ_COMMON dependency # We need at least one periodic CPU for timekeeping depends on SMP - depends on HAVE_CONTEXT_TRACKING + depends on HAVE_CONTEXT_TRACKING_USER # VIRT_CPU_ACCOUNTING_GEN dependency depends on HAVE_VIRT_CPU_ACCOUNTING_GEN select NO_HZ_COMMON @@ -137,31 +140,37 @@ config NO_HZ_FULL endchoice -config CONTEXT_TRACKING - bool +config CONTEXT_TRACKING_USER + bool + depends on HAVE_CONTEXT_TRACKING_USER + select CONTEXT_TRACKING + help + Track transitions between kernel and user on behalf of RCU and + tickless cputime accounting. The former case relies on context + tracking to enter/exit RCU extended quiescent states. -config CONTEXT_TRACKING_FORCE - bool "Force context tracking" - depends on CONTEXT_TRACKING +config CONTEXT_TRACKING_USER_FORCE + bool "Force user context tracking" + depends on CONTEXT_TRACKING_USER default y if !NO_HZ_FULL help The major pre-requirement for full dynticks to work is to - support the context tracking subsystem. But there are also + support the user context tracking subsystem. But there are also other dependencies to provide in order to make the full dynticks working. This option stands for testing when an arch implements the - context tracking backend but doesn't yet fulfill all the + user context tracking backend but doesn't yet fulfill all the requirements to make the full dynticks feature working. Without the full dynticks, there is no way to test the support - for context tracking and the subsystems that rely on it: RCU + for user context tracking and the subsystems that rely on it: RCU userspace extended quiescent state and tickless cputime accounting. This option copes with the absence of the full - dynticks subsystem by forcing the context tracking on all + dynticks subsystem by forcing the user context tracking on all CPUs in the system. Say Y only if you're working on the development of an - architecture backend for the context tracking. + architecture backend for the user context tracking. 
Say N otherwise, this option brings an overhead that you don't want in production. From patchwork Mon Jun 20 23:10:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7D9BC43334 for ; Mon, 20 Jun 2022 23:12:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243240AbiFTXMR (ORCPT ); Mon, 20 Jun 2022 19:12:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53344 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346829AbiFTXLu (ORCPT ); Mon, 20 Jun 2022 19:11:50 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5FA0D26544; Mon, 20 Jun 2022 16:10:36 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 9A693B8164D; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id ECB79C36AE7; Mon, 20 Jun 2022 23:10:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=7Os/de3av0FDcoi4MWN14jETAiFGF7qvFPt5+ViREeE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sRwCov+YCQ5eWkPcV5pqEqjDGRdPIZYJC96fQGVsQV3HWrWD1SAO1N0L4Sz8JmOq9 qo6D56y6m4T7r+tbUo0zNRvkWmHNYHGSW1/y7ro9VCRzO1OxqT2iJ8e98+G3Q3fDsm OxTMtxY4XBnAjl02CfYNkfmx3cDF1Gambq4V82NiGSTeqi7d0BktOnqiszKKoOWwTr /bwTI9I8e8AAWMu000VeuqYv3XXS+l9J4/UZMBLVQcfK3mMzVfYNQ4IYVeimeIFNtm dtP45MZfUH6Ch2pv+60gziWo6Pr+nbGAbQEHvWL6wYrzftCW0qQw3nTXbkk4uJwuNn T/oxauglr4leQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 5EEFA5C0DAC; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 10/23] context_tracking: Take idle eqs entrypoints over RCU Date: Mon, 20 Jun 2022 16:10:16 -0700 Message-Id: <20220620231029.3844583-10-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The RCU dynticks counter is going to be merged into the context tracking subsystem. Start with moving the idle extended quiescent states entrypoints to context tracking. For now those are dumb redirections to existing RCU calls. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- Documentation/RCU/stallwarn.rst | 4 ++-- arch/arm/mach-imx/cpuidle-imx6q.c | 5 +++-- drivers/acpi/processor_idle.c | 5 +++-- drivers/cpuidle/cpuidle.c | 9 +++++---- include/linux/context_tracking.h | 8 ++++++++ include/linux/rcupdate.h | 2 +- kernel/context_tracking.c | 15 +++++++++++++++ kernel/locking/lockdep.c | 2 +- kernel/rcu/Kconfig | 2 ++ kernel/rcu/tree.c | 2 -- kernel/rcu/update.c | 2 +- kernel/sched/idle.c | 10 +++++----- kernel/sched/sched.h | 1 + kernel/time/Kconfig | 6 ++++++ 14 files changed, 53 insertions(+), 20 deletions(-) diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst index 794837eb519b9..b95bda7755fa9 100644 --- a/Documentation/RCU/stallwarn.rst +++ b/Documentation/RCU/stallwarn.rst @@ -97,8 +97,8 @@ warnings: which will include additional debugging information. - A low-level kernel issue that either fails to invoke one of the - variants of rcu_user_enter(), rcu_user_exit(), rcu_idle_enter(), - rcu_idle_exit(), rcu_irq_enter(), or rcu_irq_exit() on the one + variants of rcu_user_enter(), rcu_user_exit(), ct_idle_enter(), + ct_idle_exit(), rcu_irq_enter(), or rcu_irq_exit() on the one hand, or that invokes one of them too many times on the other. Historically, the most frequent issue has been an omission of either irq_enter() or irq_exit(), which in turn invoke diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c index 094337dc1bc7e..d086cbae09c37 100644 --- a/arch/arm/mach-imx/cpuidle-imx6q.c +++ b/arch/arm/mach-imx/cpuidle-imx6q.c @@ -3,6 +3,7 @@ * Copyright (C) 2012 Freescale Semiconductor, Inc. */ +#include #include #include #include @@ -24,9 +25,9 @@ static int imx6q_enter_wait(struct cpuidle_device *dev, imx6_set_lpm(WAIT_UNCLOCKED); raw_spin_unlock(&cpuidle_lock); - rcu_idle_enter(); + ct_idle_enter(); cpu_do_idle(); - rcu_idle_exit(); + ct_idle_exit(); raw_spin_lock(&cpuidle_lock); if (num_idle_cpus-- == num_online_cpus()) diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index 6a5572a1a80cc..1401d193a2dfc 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -23,6 +23,7 @@ #include #include #include +#include /* * Include the apic definitions for x86 to have the APIC timer related defines @@ -647,11 +648,11 @@ static int acpi_idle_enter_bm(struct cpuidle_driver *drv, raw_spin_unlock(&c3_lock); } - rcu_idle_enter(); + ct_idle_enter(); acpi_idle_do_entry(cx); - rcu_idle_exit(); + ct_idle_exit(); /* Re-enable bus master arbitration */ if (dis_bm) { diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c index ef2ea1b12cd84..62dd956025f33 100644 --- a/drivers/cpuidle/cpuidle.c +++ b/drivers/cpuidle/cpuidle.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include "cpuidle.h" @@ -150,12 +151,12 @@ static void enter_s2idle_proper(struct cpuidle_driver *drv, */ stop_critical_timings(); if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE)) - rcu_idle_enter(); + ct_idle_enter(); target_state->enter_s2idle(dev, drv, index); if (WARN_ON_ONCE(!irqs_disabled())) local_irq_disable(); if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE)) - rcu_idle_exit(); + ct_idle_exit(); tick_unfreeze(); start_critical_timings(); @@ -233,10 +234,10 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv, stop_critical_timings(); if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE)) - rcu_idle_enter(); + ct_idle_enter(); entered_state = target_state->enter(dev, drv, index); if (!(target_state->flags & 
CPUIDLE_FLAG_RCU_IDLE)) - rcu_idle_exit(); + ct_idle_exit(); start_critical_timings(); sched_clock_idle_wakeup_event(); diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index e35ae66b4794e..01abadb2f9930 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -119,4 +119,12 @@ extern void context_tracking_init(void); static inline void context_tracking_init(void) { } #endif /* CONFIG_CONTEXT_TRACKING_USER_FORCE */ +#ifdef CONFIG_CONTEXT_TRACKING_IDLE +extern void ct_idle_enter(void); +extern void ct_idle_exit(void); +#else +static inline void ct_idle_enter(void) { } +static inline void ct_idle_exit(void) { } +#endif /* !CONFIG_CONTEXT_TRACKING_IDLE */ + #endif diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 1a32036c918cd..6ebe754501c38 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -128,7 +128,7 @@ static inline void rcu_nocb_flush_deferred_wakeup(void) { } * @a: Code that RCU needs to pay attention to. * * RCU read-side critical sections are forbidden in the inner idle loop, - * that is, between the rcu_idle_enter() and the rcu_idle_exit() -- RCU + * that is, between the ct_idle_enter() and the ct_idle_exit() -- RCU * will happily ignore any such read-side critical sections. However, * things like powertop need tracepoints in the inner idle loop. * diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index a6997daf27d10..e9904f935f7f4 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -22,6 +22,21 @@ #include #include + +#ifdef CONFIG_CONTEXT_TRACKING_IDLE +noinstr void ct_idle_enter(void) +{ + rcu_idle_enter(); +} +EXPORT_SYMBOL_GPL(ct_idle_enter); + +void ct_idle_exit(void) +{ + rcu_idle_exit(); +} +EXPORT_SYMBOL_GPL(ct_idle_exit); +#endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ + #ifdef CONFIG_CONTEXT_TRACKING_USER #define CREATE_TRACE_POINTS diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index f06b91ca6482d..5ea690cb4f7af 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -6570,7 +6570,7 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s) /* * If a CPU is in the RCU-free window in idle (ie: in the section - * between rcu_idle_enter() and rcu_idle_exit(), then RCU + * between ct_idle_enter() and ct_idle_exit(), then RCU * considers that CPU to be in an "extended quiescent state", * which means that RCU will be completely ignoring that CPU. 
* Therefore, rcu_read_lock() and friends have absolutely no diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig index 1c630e573548d..3fa24e63d6f9b 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -8,6 +8,8 @@ menu "RCU Subsystem" config TREE_RCU bool default y if SMP + # Dynticks-idle tracking + select CONTEXT_TRACKING_IDLE help This option selects the RCU implementation that is designed for very large SMP system with hundreds or diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 9a5edab5558c9..051fed0844b67 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -664,7 +664,6 @@ void noinstr rcu_idle_enter(void) WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); rcu_eqs_enter(false); } -EXPORT_SYMBOL_GPL(rcu_idle_enter); #ifdef CONFIG_NO_HZ_FULL @@ -904,7 +903,6 @@ void noinstr rcu_idle_exit(void) rcu_eqs_exit(false); raw_local_irq_restore(flags); } -EXPORT_SYMBOL_GPL(rcu_idle_exit); #ifdef CONFIG_NO_HZ_FULL /** diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c index fc7fef5756064..147214b2cd68e 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -85,7 +85,7 @@ module_param(rcu_normal_after_boot, int, 0444); * and while lockdep is disabled. * * Note that if the CPU is in the idle loop from an RCU point of view (ie: - * that we are in the section between rcu_idle_enter() and rcu_idle_exit()) + * that we are in the section between ct_idle_enter() and ct_idle_exit()) * then rcu_read_lock_held() sets ``*ret`` to false even if the CPU did an * rcu_read_lock(). The reason for this is that RCU ignores CPUs that are * in such a section, considering these as in extended quiescent state, diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c index 328cccbee4441..f26ab2675f7d7 100644 --- a/kernel/sched/idle.c +++ b/kernel/sched/idle.c @@ -53,14 +53,14 @@ static noinline int __cpuidle cpu_idle_poll(void) { trace_cpu_idle(0, smp_processor_id()); stop_critical_timings(); - rcu_idle_enter(); + ct_idle_enter(); local_irq_enable(); while (!tif_need_resched() && (cpu_idle_force_poll || tick_check_broadcast_expired())) cpu_relax(); - rcu_idle_exit(); + ct_idle_exit(); start_critical_timings(); trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id()); @@ -98,12 +98,12 @@ void __cpuidle default_idle_call(void) * * Trace IRQs enable here, then switch off RCU, and have * arch_cpu_idle() use raw_local_irq_enable(). Note that - * rcu_idle_enter() relies on lockdep IRQ state, so switch that + * ct_idle_enter() relies on lockdep IRQ state, so switch that * last -- this is very similar to the entry code. */ trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); - rcu_idle_enter(); + ct_idle_enter(); lockdep_hardirqs_on(_THIS_IP_); arch_cpu_idle(); @@ -116,7 +116,7 @@ void __cpuidle default_idle_call(void) */ raw_local_irq_disable(); lockdep_hardirqs_off(_THIS_IP_); - rcu_idle_exit(); + ct_idle_exit(); lockdep_hardirqs_on(_THIS_IP_); raw_local_irq_enable(); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 47b89a0fc6e55..0cfe2d0af2947 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig index 41f99bcfe9e66..a41753be1a2bf 100644 --- a/kernel/time/Kconfig +++ b/kernel/time/Kconfig @@ -76,6 +76,12 @@ config TIME_KUNIT_TEST config CONTEXT_TRACKING bool +config CONTEXT_TRACKING_IDLE + bool + select CONTEXT_TRACKING + help + Tracks idle state on behalf of RCU. 
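As a usage illustration, here is a sketch of a cpuidle-style callback built on the new wrappers, modeled on the imx6q and ACPI conversions above; it is not code from this series, and my_platform_do_idle() is a hypothetical low-level suspend hook:

#include <linux/context_tracking.h>
#include <linux/cpuidle.h>

static int my_idle_enter(struct cpuidle_device *dev,
			 struct cpuidle_driver *drv, int index)
{
	ct_idle_enter();	/* RCU stops watching this CPU */
	my_platform_do_idle();	/* hypothetical; no RCU read-side use here */
	ct_idle_exit();		/* RCU is watching again */
	return index;
}

The constraint is unchanged from rcu_idle_enter(): the window between the two calls is an RCU extended quiescent state, so any rcu_read_lock() section in it would be invisible to the grace-period machinery.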
+ if GENERIC_CLOCKEVENTS menu "Timers subsystem" From patchwork Mon Jun 20 23:10:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888554 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E122AC43334 for ; Mon, 20 Jun 2022 23:14:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345964AbiFTXOn (ORCPT ); Mon, 20 Jun 2022 19:14:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346469AbiFTXMJ (ORCPT ); Mon, 20 Jun 2022 19:12:09 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3FED522523; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 04A48B8164A; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 082BBC341D3; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=/oD+awAadMfLy4fTgbaXRqXnjDJ/R+Dmtn7onpuDHWc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=W0csfxRPBN40licupTZEon7Zx5cTr6Okoaqth4Lo35wtlZLd2GCeOGh6g1w8HAq9j eyZ5SvnO1fUOgpDU5ZEN/O97vL2Zzjp/aU3yMC9yNoI6Fqo+r8uZV47jblxRgiHaHb L55XbkyyqRPYKEdGerIjbH2SsV0XbRdd26rWNuchG1SIVzb+Sk8AjIIqSejEsPICv5 tS185RVewtab2lmQNpAKoVSDfo/J7OJ7e7mXfWxFbUWatAZL/QGVamNOd3eSBeob6Y 7UX3lovdIJiRJu/cTlmz4lx2EviYhAive13Xl1LnKFb8dLKbrra6IQuwNW7Fijxpfd Y0RJEi2Gf0PNw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 612CD5C0DEB; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 11/23] context_tracking: Take IRQ eqs entrypoints over RCU Date: Mon, 20 Jun 2022 16:10:17 -0700 Message-Id: <20220620231029.3844583-11-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The RCU dynticks counter is going to be merged into the context tracking subsystem. Prepare with moving the IRQ extended quiescent states entrypoints to context tracking. For now those are dumb redirection to existing RCU calls. [ paulmck: Apply Stephen Rothwell feedback from -next. ] [ paulmck: Apply Nathan Chancellor feedback. ] Acked-by: Paul E. 
McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- .../RCU/Design/Requirements/Requirements.rst | 10 ++++---- Documentation/RCU/stallwarn.rst | 4 ++-- arch/Kconfig | 2 +- arch/arm64/kernel/entry-common.c | 6 ++--- arch/x86/mm/fault.c | 2 +- drivers/cpuidle/cpuidle-psci.c | 8 +++---- drivers/cpuidle/cpuidle-riscv-sbi.c | 8 +++---- include/linux/context_tracking_irq.h | 17 +++++++++++++ include/linux/context_tracking_state.h | 1 + include/linux/entry-common.h | 10 ++++---- include/linux/rcupdate.h | 5 ++-- include/linux/tracepoint.h | 4 ++-- kernel/cfi.c | 4 ++-- kernel/context_tracking.c | 24 +++++++++++++++++-- kernel/cpu_pm.c | 8 +++---- kernel/entry/common.c | 12 +++++----- kernel/softirq.c | 4 ++-- kernel/trace/trace.c | 6 ++--- 18 files changed, 87 insertions(+), 48 deletions(-) create mode 100644 include/linux/context_tracking_irq.h diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index 04ed8bf27a0ea..074810c739367 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -1844,10 +1844,10 @@ that meets this requirement. Furthermore, NMI handlers can be interrupted by what appear to RCU to be normal interrupts. One way that this can happen is for code that -directly invokes rcu_irq_enter() and rcu_irq_exit() to be called +directly invokes ct_irq_enter() and ct_irq_exit() to be called from an NMI handler. This astonishing fact of life prompted the current -code structure, which has rcu_irq_enter() invoking -rcu_nmi_enter() and rcu_irq_exit() invoking rcu_nmi_exit(). +code structure, which has ct_irq_enter() invoking +rcu_nmi_enter() and ct_irq_exit() invoking rcu_nmi_exit(). And yes, I also learned of this requirement the hard way. Loadable Modules @@ -2195,7 +2195,7 @@ scheduling-clock interrupt be enabled when RCU needs it to be: sections, and RCU believes this CPU to be idle, no problem. This sort of thing is used by some architectures for light-weight exception handlers, which can then avoid the overhead of - rcu_irq_enter() and rcu_irq_exit() at exception entry and + ct_irq_enter() and ct_irq_exit() at exception entry and exit, respectively. Some go further and avoid the entireties of irq_enter() and irq_exit(). Just make very sure you are running some of your tests with @@ -2226,7 +2226,7 @@ scheduling-clock interrupt be enabled when RCU needs it to be: +-----------------------------------------------------------------------+ | **Answer**: | +-----------------------------------------------------------------------+ -| One approach is to do ``rcu_irq_exit();rcu_irq_enter();`` every so | +| One approach is to do ``ct_irq_exit();ct_irq_enter();`` every so | | often. But given that long-running interrupt handlers can cause other | | problems, not least for response time, shouldn't you work to keep | | your interrupt handler's runtime within reasonable bounds? 
| diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst index b95bda7755fa9..ce1f58a9d954b 100644 --- a/Documentation/RCU/stallwarn.rst +++ b/Documentation/RCU/stallwarn.rst @@ -98,11 +98,11 @@ warnings: - A low-level kernel issue that either fails to invoke one of the variants of rcu_user_enter(), rcu_user_exit(), ct_idle_enter(), - ct_idle_exit(), rcu_irq_enter(), or rcu_irq_exit() on the one + ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one hand, or that invokes one of them too many times on the other. Historically, the most frequent issue has been an omission of either irq_enter() or irq_exit(), which in turn invoke - rcu_irq_enter() or rcu_irq_exit(), respectively. Building your + ct_irq_enter() or ct_irq_exit(), respectively. Building your kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types of issues, which sometimes arise in architecture-specific code. diff --git a/arch/Kconfig b/arch/Kconfig index 154b7b78da093..342642be105fc 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -782,7 +782,7 @@ config HAVE_CONTEXT_TRACKING_USER Syscalls need to be wrapped inside user_exit()-user_enter(), either optimized behind static key or through the slow path using TIF_NOHZ flag. Exceptions handlers must be wrapped as well. Irqs are already - protected inside rcu_irq_enter/rcu_irq_exit() but preemption or signal + protected inside ct_irq_enter/ct_irq_exit() but preemption or signal handling on irq exit still need to be protected. config HAVE_CONTEXT_TRACKING_USER_OFFSTACK diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index 56cefd33eb8e9..8dabe9ec10f16 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -41,7 +41,7 @@ static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs) if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) { lockdep_hardirqs_off(CALLER_ADDR0); - rcu_irq_enter(); + ct_irq_enter(); trace_hardirqs_off_finish(); regs->exit_rcu = true; @@ -76,7 +76,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs) if (regs->exit_rcu) { trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); - rcu_irq_exit(); + ct_irq_exit(); lockdep_hardirqs_on(CALLER_ADDR0); return; } @@ -84,7 +84,7 @@ static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs) trace_hardirqs_on(); } else { if (regs->exit_rcu) - rcu_irq_exit(); + ct_irq_exit(); } } diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index fad8faa29d042..971977c438fc1 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1526,7 +1526,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault) /* * Entry handling for valid #PF from kernel mode is slightly - * different: RCU is already watching and rcu_irq_enter() must not + * different: RCU is already watching and ct_irq_enter() must not * be invoked because a kernel fault on a user space address might * sleep. * diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c index 540105ca0781f..57bc3e3ae3912 100644 --- a/drivers/cpuidle/cpuidle-psci.c +++ b/drivers/cpuidle/cpuidle-psci.c @@ -69,12 +69,12 @@ static int __psci_enter_domain_idle_state(struct cpuidle_device *dev, return -1; /* Do runtime PM to manage a hierarchical CPU toplogy. 
*/ - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); if (s2idle) dev_pm_genpd_suspend(pd_dev); else pm_runtime_put_sync_suspend(pd_dev); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); state = psci_get_domain_state(); if (!state) @@ -82,12 +82,12 @@ static int __psci_enter_domain_idle_state(struct cpuidle_device *dev, ret = psci_cpu_suspend_enter(state) ? -1 : idx; - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); if (s2idle) dev_pm_genpd_resume(pd_dev); else pm_runtime_get_sync(pd_dev); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); cpu_pm_exit(); diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c index 1151e5e2ba824..862a2876f1c9d 100644 --- a/drivers/cpuidle/cpuidle-riscv-sbi.c +++ b/drivers/cpuidle/cpuidle-riscv-sbi.c @@ -116,12 +116,12 @@ static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev, return -1; /* Do runtime PM to manage a hierarchical CPU toplogy. */ - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); if (s2idle) dev_pm_genpd_suspend(pd_dev); else pm_runtime_put_sync_suspend(pd_dev); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); if (sbi_is_domain_state_available()) state = sbi_get_domain_state(); @@ -130,12 +130,12 @@ static int __sbi_enter_domain_idle_state(struct cpuidle_device *dev, ret = sbi_suspend(state) ? -1 : idx; - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); if (s2idle) dev_pm_genpd_resume(pd_dev); else pm_runtime_get_sync(pd_dev); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); cpu_pm_exit(); diff --git a/include/linux/context_tracking_irq.h b/include/linux/context_tracking_irq.h new file mode 100644 index 0000000000000..62f62bbd1a50d --- /dev/null +++ b/include/linux/context_tracking_irq.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_CONTEXT_TRACKING_IRQ_H +#define _LINUX_CONTEXT_TRACKING_IRQ_H + +#ifdef CONFIG_CONTEXT_TRACKING_IDLE +void ct_irq_enter(void); +void ct_irq_exit(void); +void ct_irq_enter_irqson(void); +void ct_irq_exit_irqson(void); +#else +static inline void ct_irq_enter(void) { } +static inline void ct_irq_exit(void) { } +static inline void ct_irq_enter_irqson(void) { } +static inline void ct_irq_exit_irqson(void) { } +#endif + +#endif diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index 2b46afe105a96..9c16a8b2c1947 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -4,6 +4,7 @@ #include #include +#include struct context_tracking { /* diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h index c92ac75d6556d..84a466b176cf4 100644 --- a/include/linux/entry-common.h +++ b/include/linux/entry-common.h @@ -357,7 +357,7 @@ void irqentry_exit_to_user_mode(struct pt_regs *regs); /** * struct irqentry_state - Opaque object for exception state storage * @exit_rcu: Used exclusively in the irqentry_*() calls; signals whether the - * exit path has to invoke rcu_irq_exit(). + * exit path has to invoke ct_irq_exit(). * @lockdep: Used exclusively in the irqentry_nmi_*() calls; ensures that * lockdep state is restored correctly on exit from nmi. * @@ -395,12 +395,12 @@ typedef struct irqentry_state { * * For kernel mode entries RCU handling is done conditional. If RCU is * watching then the only RCU requirement is to check whether the tick has - * to be restarted. If RCU is not watching then rcu_irq_enter() has to be - * invoked on entry and rcu_irq_exit() on exit. + * to be restarted. 
If RCU is not watching then ct_irq_enter() has to be + * invoked on entry and ct_irq_exit() on exit. * - * Avoiding the rcu_irq_enter/exit() calls is an optimization but also + * Avoiding the ct_irq_enter/exit() calls is an optimization but also * solves the problem of kernel mode pagefaults which can schedule, which - * is not possible after invoking rcu_irq_enter() without undoing it. + * is not possible after invoking ct_irq_enter() without undoing it. * * For user mode entries irqentry_enter_from_user_mode() is invoked to * establish the proper context for NOHZ_FULL. Otherwise scheduling on exit diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 6ebe754501c38..f1562d91c67d2 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -29,6 +29,7 @@ #include #include #include +#include #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b)) #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b)) @@ -143,9 +144,9 @@ static inline void rcu_nocb_flush_deferred_wakeup(void) { } */ #define RCU_NONIDLE(a) \ do { \ - rcu_irq_enter_irqson(); \ + ct_irq_enter_irqson(); \ do { a; } while (0); \ - rcu_irq_exit_irqson(); \ + ct_irq_exit_irqson(); \ } while (0) /* diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h index 28031b15f8783..55717a2eda08a 100644 --- a/include/linux/tracepoint.h +++ b/include/linux/tracepoint.h @@ -200,13 +200,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p) */ \ if (rcuidle) { \ __idx = srcu_read_lock_notrace(&tracepoint_srcu);\ - rcu_irq_enter_irqson(); \ + ct_irq_enter_irqson(); \ } \ \ __DO_TRACE_CALL(name, TP_ARGS(args)); \ \ if (rcuidle) { \ - rcu_irq_exit_irqson(); \ + ct_irq_exit_irqson(); \ srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\ } \ \ diff --git a/kernel/cfi.c b/kernel/cfi.c index 08102d19ec15a..2046276ee2348 100644 --- a/kernel/cfi.c +++ b/kernel/cfi.c @@ -295,7 +295,7 @@ static inline cfi_check_fn find_check_fn(unsigned long ptr) rcu_idle = !rcu_is_watching(); if (rcu_idle) { local_irq_save(flags); - rcu_irq_enter(); + ct_irq_enter(); } if (IS_ENABLED(CONFIG_CFI_CLANG_SHADOW)) @@ -304,7 +304,7 @@ static inline cfi_check_fn find_check_fn(unsigned long ptr) fn = find_module_check_fn(ptr); if (rcu_idle) { - rcu_irq_exit(); + ct_irq_exit(); local_irq_restore(flags); } diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index e9904f935f7f4..891464f7aa5a4 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -35,6 +35,26 @@ void ct_idle_exit(void) rcu_idle_exit(); } EXPORT_SYMBOL_GPL(ct_idle_exit); + +noinstr void ct_irq_enter(void) +{ + rcu_irq_enter(); +} + +noinstr void ct_irq_exit(void) +{ + rcu_irq_exit(); +} + +void ct_irq_enter_irqson(void) +{ + rcu_irq_enter_irqson(); +} + +void ct_irq_exit_irqson(void) +{ + rcu_irq_exit_irqson(); +} #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ #ifdef CONFIG_CONTEXT_TRACKING_USER @@ -90,7 +110,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * At this stage, only low level arch entry code remains and * then we'll run in userspace. We can assume there won't be * any RCU read-side critical section until the next call to - * user_exit() or rcu_irq_enter(). Let's remove RCU's dependency + * user_exit() or ct_irq_enter(). Let's remove RCU's dependency * on the tick. 
*/ if (state == CONTEXT_USER) { @@ -136,7 +156,7 @@ void ct_user_enter(enum ctx_state state) /* * Some contexts may involve an exception occuring in an irq, * leading to that nesting: - * rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit() + * ct_irq_enter() rcu_user_exit() rcu_user_exit() ct_irq_exit() * This would mess up the dyntick_nesting count though. And rcu_irq_*() * helpers are enough to protect RCU uses inside the exception. So * just return immediately if we detect we are in an IRQ. diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c index 246efc74e3f34..ba4ba71facf97 100644 --- a/kernel/cpu_pm.c +++ b/kernel/cpu_pm.c @@ -35,11 +35,11 @@ static int cpu_pm_notify(enum cpu_pm_event event) * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know * this. */ - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); rcu_read_lock(); ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL); rcu_read_unlock(); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); return notifier_to_errno(ret); } @@ -49,11 +49,11 @@ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event ev unsigned long flags; int ret; - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags); ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL); raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); return notifier_to_errno(ret); } diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 032f164abe7ce..667ba5d581ff7 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -321,7 +321,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) } /* - * If this entry hit the idle task invoke rcu_irq_enter() whether + * If this entry hit the idle task invoke ct_irq_enter() whether * RCU is watching or not. * * Interrupts can nest when the first interrupt invokes softirq @@ -332,12 +332,12 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) * not nested into another interrupt. * * Checking for rcu_is_watching() here would prevent the nesting - * interrupt to invoke rcu_irq_enter(). If that nested interrupt is + * interrupt to invoke ct_irq_enter(). If that nested interrupt is * the tick then rcu_flavor_sched_clock_irq() would wrongfully * assume that it is the first interrupt and eventually claim * quiescent state and end grace periods prematurely. * - * Unconditionally invoke rcu_irq_enter() so RCU state stays + * Unconditionally invoke ct_irq_enter() so RCU state stays * consistent. * * TINY_RCU does not support EQS, so let the compiler eliminate @@ -350,7 +350,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) * as in irqentry_enter_from_user_mode(). */ lockdep_hardirqs_off(CALLER_ADDR0); - rcu_irq_enter(); + ct_irq_enter(); instrumentation_begin(); trace_hardirqs_off_finish(); instrumentation_end(); @@ -418,7 +418,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(); instrumentation_end(); - rcu_irq_exit(); + ct_irq_exit(); lockdep_hardirqs_on(CALLER_ADDR0); return; } @@ -436,7 +436,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) * was not watching on entry. 
*/ if (state.exit_rcu) - rcu_irq_exit(); + ct_irq_exit(); } } diff --git a/kernel/softirq.c b/kernel/softirq.c index 9f0aef8aa9ff8..c8a6913c067d9 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -620,7 +620,7 @@ void irq_enter_rcu(void) */ void irq_enter(void) { - rcu_irq_enter(); + ct_irq_enter(); irq_enter_rcu(); } @@ -672,7 +672,7 @@ void irq_exit_rcu(void) void irq_exit(void) { __irq_exit_rcu(); - rcu_irq_exit(); + ct_irq_exit(); /* must be last! */ lockdep_hardirq_exit(); } diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 2c95992e2c710..fe78a68181263 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3107,15 +3107,15 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx, /* * When an NMI triggers, RCU is enabled via rcu_nmi_enter(), * but if the above rcu_is_watching() failed, then the NMI - * triggered someplace critical, and rcu_irq_enter() should + * triggered someplace critical, and ct_irq_enter() should * not be called from NMI. */ if (unlikely(in_nmi())) return; - rcu_irq_enter_irqson(); + ct_irq_enter_irqson(); __ftrace_trace_stack(buffer, trace_ctx, skip, NULL); - rcu_irq_exit_irqson(); + ct_irq_exit_irqson(); } /** From patchwork Mon Jun 20 23:10:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888552 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 286E3CCA47C for ; Mon, 20 Jun 2022 23:14:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1344199AbiFTXOk (ORCPT ); Mon, 20 Jun 2022 19:14:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346414AbiFTXMI (ORCPT ); Mon, 20 Jun 2022 19:12:08 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 405D82252D; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2C59AB81655; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 121C1C341D6; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=pF2fVyXIL84Hfc1fiUZJvF5wgV1c6NIcI7di45T13uA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UtTZIZ55lRC1tDgWCcx66XwKTwdvYfLf8pCr518bS0ARR9ERvd/uF6eUSy7srf8D5 qA0SVT16EYEh4xJUns9AYQKYx3OB7XZxZS60HUHo+jCZmPvwAsDczyCZauwCEP/eX2 PcsRca3iXARF+CktIn4ex0xjGOTLLOS6SNnF3ZpONY2szr2T+LbOzf8eBL0BUTOE8p /vLddAdQw74DZbEG6wq+hMutEpgpUSvhDoP6TTiFC6t1dA4nNYtOx1ntNBw4BPYTEd tSKvGBdkU/klA/o3jllnOifP8d/J8dzpNNaYmCU11dFjFZJZXhWp6qNQKDyJmAQugd ga36Wi8YcHMDA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6321A5C0E3F; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . 
McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 12/23] context_tracking: Take NMI eqs entrypoints over RCU Date: Mon, 20 Jun 2022 16:10:18 -0700 Message-Id: <20220620231029.3844583-12-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The RCU dynticks counter is going to be merged into the context tracking subsystem. Prepare with moving the NMI extended quiescent states entrypoints to context tracking. For now those are dumb redirection to existing RCU calls. Acked-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- Documentation/RCU/Design/Requirements/Requirements.rst | 2 +- arch/Kconfig | 2 +- arch/arm64/kernel/entry-common.c | 8 ++++---- include/linux/context_tracking_irq.h | 4 ++++ include/linux/hardirq.h | 4 ++-- kernel/context_tracking.c | 10 ++++++++++ kernel/entry/common.c | 4 ++-- kernel/extable.c | 4 ++-- kernel/trace/trace.c | 2 +- 9 files changed, 27 insertions(+), 13 deletions(-) diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index 074810c739367..a0f8164c85135 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -1847,7 +1847,7 @@ normal interrupts. One way that this can happen is for code that directly invokes ct_irq_enter() and ct_irq_exit() to be called from an NMI handler. This astonishing fact of life prompted the current code structure, which has ct_irq_enter() invoking -rcu_nmi_enter() and ct_irq_exit() invoking rcu_nmi_exit(). +ct_nmi_enter() and ct_irq_exit() invoking ct_nmi_exit(). And yes, I also learned of this requirement the hard way. Loadable Modules diff --git a/arch/Kconfig b/arch/Kconfig index 342642be105fc..f56f7c0e924d8 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -797,7 +797,7 @@ config HAVE_CONTEXT_TRACKING_USER_OFFSTACK - Critical entry code isn't preemptible (or better yet: not interruptible). - - No use of RCU read side critical sections, unless rcu_nmi_enter() + - No use of RCU read side critical sections, unless ct_nmi_enter() got called. - No use of instrumentation, unless instrumentation_begin() got called. 
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index 8dabe9ec10f16..c75ca36b4a491 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -161,7 +161,7 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs) __nmi_enter(); lockdep_hardirqs_off(CALLER_ADDR0); lockdep_hardirq_enter(); - rcu_nmi_enter(); + ct_nmi_enter(); trace_hardirqs_off_finish(); ftrace_nmi_enter(); @@ -182,7 +182,7 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs) lockdep_hardirqs_on_prepare(); } - rcu_nmi_exit(); + ct_nmi_exit(); lockdep_hardirq_exit(); if (restore) lockdep_hardirqs_on(CALLER_ADDR0); @@ -199,7 +199,7 @@ static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs) regs->lockdep_hardirqs = lockdep_hardirqs_enabled(); lockdep_hardirqs_off(CALLER_ADDR0); - rcu_nmi_enter(); + ct_nmi_enter(); trace_hardirqs_off_finish(); } @@ -218,7 +218,7 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs) lockdep_hardirqs_on_prepare(); } - rcu_nmi_exit(); + ct_nmi_exit(); if (restore) lockdep_hardirqs_on(CALLER_ADDR0); } diff --git a/include/linux/context_tracking_irq.h b/include/linux/context_tracking_irq.h index 62f62bbd1a50d..c50b5670c4a52 100644 --- a/include/linux/context_tracking_irq.h +++ b/include/linux/context_tracking_irq.h @@ -7,11 +7,15 @@ void ct_irq_enter(void); void ct_irq_exit(void); void ct_irq_enter_irqson(void); void ct_irq_exit_irqson(void); +void ct_nmi_enter(void); +void ct_nmi_exit(void); #else static inline void ct_irq_enter(void) { } static inline void ct_irq_exit(void) { } static inline void ct_irq_enter_irqson(void) { } static inline void ct_irq_exit_irqson(void) { } +static inline void ct_nmi_enter(void) { } +static inline void ct_nmi_exit(void) { } #endif #endif diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h index 76878b357ffa9..345cdbe9c1b70 100644 --- a/include/linux/hardirq.h +++ b/include/linux/hardirq.h @@ -124,7 +124,7 @@ extern void rcu_nmi_exit(void); do { \ __nmi_enter(); \ lockdep_hardirq_enter(); \ - rcu_nmi_enter(); \ + ct_nmi_enter(); \ instrumentation_begin(); \ ftrace_nmi_enter(); \ instrumentation_end(); \ @@ -143,7 +143,7 @@ extern void rcu_nmi_exit(void); instrumentation_begin(); \ ftrace_nmi_exit(); \ instrumentation_end(); \ - rcu_nmi_exit(); \ + ct_nmi_exit(); \ lockdep_hardirq_exit(); \ __nmi_exit(); \ } while (0) diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 891464f7aa5a4..afb4451c26a25 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -55,6 +55,16 @@ void ct_irq_exit_irqson(void) { rcu_irq_exit_irqson(); } + +noinstr void ct_nmi_enter(void) +{ + rcu_nmi_enter(); +} + +noinstr void ct_nmi_exit(void) +{ + rcu_nmi_exit(); +} #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ #ifdef CONFIG_CONTEXT_TRACKING_USER diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 667ba5d581ff7..063068a9ea9b3 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -449,7 +449,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) __nmi_enter(); lockdep_hardirqs_off(CALLER_ADDR0); lockdep_hardirq_enter(); - rcu_nmi_enter(); + ct_nmi_enter(); instrumentation_begin(); trace_hardirqs_off_finish(); @@ -469,7 +469,7 @@ void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state) } instrumentation_end(); - rcu_nmi_exit(); + ct_nmi_exit(); lockdep_hardirq_exit(); if (irq_state.lockdep) lockdep_hardirqs_on(CALLER_ADDR0); diff --git a/kernel/extable.c b/kernel/extable.c index 
bda5e97615418..71f482581cab4 100644 --- a/kernel/extable.c +++ b/kernel/extable.c @@ -114,7 +114,7 @@ int kernel_text_address(unsigned long addr) /* Treat this like an NMI as it can happen anywhere */ if (no_rcu) - rcu_nmi_enter(); + ct_nmi_enter(); if (is_module_text_address(addr)) goto out; @@ -127,7 +127,7 @@ int kernel_text_address(unsigned long addr) ret = 0; out: if (no_rcu) - rcu_nmi_exit(); + ct_nmi_exit(); return ret; } diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index fe78a68181263..5fc7f17f5ec7b 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3105,7 +3105,7 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx, } /* - * When an NMI triggers, RCU is enabled via rcu_nmi_enter(), + * When an NMI triggers, RCU is enabled via ct_nmi_enter(), * but if the above rcu_is_watching() failed, then the NMI * triggered someplace critical, and ct_irq_enter() should * not be called from NMI. From patchwork Mon Jun 20 23:10:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888516 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8545C43334 for ; Mon, 20 Jun 2022 23:12:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347002AbiFTXMk (ORCPT ); Mon, 20 Jun 2022 19:12:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347046AbiFTXME (ORCPT ); Mon, 20 Jun 2022 19:12:04 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 23FFE1ADA2; Mon, 20 Jun 2022 16:10:44 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id F2DF761503; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 155FFC36AEA; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=xsKKjIRCTs+MYTIZQulQGrFw/Klyh+WfPxCeTxp4NpA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OCEWw2VQTwNZzAlOV5mtXQ9uJL086ovj3jehUe8+Y/yVJ+bGcnjBpCSl7mNY0SYe/ BwL2PO01qHOxJwvoYwFMdETev9aXAOO+T/ypDUWiYc/ysvgF/VmiHgbxw7fzF7JA41 H8jtZOZcwSTTaZUwQ9LJbgT8t3+b8SLAithhwWLlx/NQPXqrGJmFUd9fwKjsNBEPEA ZWbYEAi2gBX0loyUqMv5hdIrAXzc51fshU6LzdeNB2GgNHn/SSaPuN3N1wWOLEGuv8 P4toCMvx0GEb47ynw6IJhH49Flb8pnA1dDeeV51jYGYhQYmIXlDlNYOLNvp6DUFcUN KYHJk+HPS6sLA== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 650A15C0E42; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . 
McKenney" Subject: [PATCH rcu 13/23] rcu/context-tracking: Remove rcu_irq_enter/exit() Date: Mon, 20 Jun 2022 16:10:19 -0700 Message-Id: <20220620231029.3844583-13-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Now rcu_irq_enter/exit() is an unecessary middle call between ct_irq_enter/exit() and nmi_irq_enter/exit(). Take this opportunity to remove the former functions and move the comments above them to the new entrypoints. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/rcutiny.h | 4 -- include/linux/rcutree.h | 4 -- kernel/context_tracking.c | 71 +++++++++++++++++++++++++++++++-- kernel/rcu/tree.c | 83 --------------------------------------- 4 files changed, 67 insertions(+), 95 deletions(-) diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 5fed476f977f6..591119413cf1d 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -78,10 +78,6 @@ static inline void rcu_cpu_stall_reset(void) { } static inline int rcu_jiffies_till_stall_check(void) { return 21 * HZ; } static inline void rcu_idle_enter(void) { } static inline void rcu_idle_exit(void) { } -static inline void rcu_irq_enter(void) { } -static inline void rcu_irq_exit_irqson(void) { } -static inline void rcu_irq_enter_irqson(void) { } -static inline void rcu_irq_exit(void) { } static inline void rcu_irq_exit_check_preempt(void) { } #define rcu_is_idle_cpu(cpu) \ (is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq()) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 9c6cfb742504f..4522b6a7cc42f 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -47,10 +47,6 @@ void cond_synchronize_rcu(unsigned long oldstate); void rcu_idle_enter(void); void rcu_idle_exit(void); -void rcu_irq_enter(void); -void rcu_irq_exit(void); -void rcu_irq_enter_irqson(void); -void rcu_irq_exit_irqson(void); bool rcu_is_idle_cpu(int cpu); #ifdef CONFIG_PROVE_RCU diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index afb4451c26a25..96406daf5f54b 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -36,24 +36,87 @@ void ct_idle_exit(void) } EXPORT_SYMBOL_GPL(ct_idle_exit); +/** + * ct_irq_enter - inform RCU that current CPU is entering irq away from idle + * + * Enter an interrupt handler, which might possibly result in exiting + * idle mode, in other words, entering the mode in which read-side critical + * sections can occur. The caller must have disabled interrupts. + * + * Note that the Linux kernel is fully capable of entering an interrupt + * handler that it never exits, for example when doing upcalls to user mode! + * This code assumes that the idle loop never does upcalls to user mode. + * If your architecture's idle loop does do upcalls to user mode (or does + * anything else that results in unbalanced calls to the irq_enter() and + * irq_exit() functions), RCU will give you what you deserve, good and hard. + * But very infrequently and irreproducibly. 
+ * + * Use things like work queues to work around this limitation. + * + * You have been warned. + * + * If you add or remove a call to ct_irq_enter(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. + */ noinstr void ct_irq_enter(void) { - rcu_irq_enter(); + lockdep_assert_irqs_disabled(); + ct_nmi_enter(); } +/** + * ct_irq_exit - inform RCU that current CPU is exiting irq towards idle + * + * Exit from an interrupt handler, which might possibly result in entering + * idle mode, in other words, leaving the mode in which read-side critical + * sections can occur. The caller must have disabled interrupts. + * + * This code assumes that the idle loop never does anything that might + * result in unbalanced calls to irq_enter() and irq_exit(). If your + * architecture's idle loop violates this assumption, RCU will give you what + * you deserve, good and hard. But very infrequently and irreproducibly. + * + * Use things like work queues to work around this limitation. + * + * You have been warned. + * + * If you add or remove a call to ct_irq_exit(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. + */ noinstr void ct_irq_exit(void) { - rcu_irq_exit(); + lockdep_assert_irqs_disabled(); + ct_nmi_exit(); } +/* + * Wrapper for ct_irq_enter() where interrupts are enabled. + * + * If you add or remove a call to ct_irq_enter_irqson(), be sure to test + * with CONFIG_RCU_EQS_DEBUG=y. + */ void ct_irq_enter_irqson(void) { - rcu_irq_enter_irqson(); + unsigned long flags; + + local_irq_save(flags); + ct_irq_enter(); + local_irq_restore(flags); } +/* + * Wrapper for ct_irq_exit() where interrupts are enabled. + * + * If you add or remove a call to ct_irq_exit_irqson(), be sure to test + * with CONFIG_RCU_EQS_DEBUG=y. + */ void ct_irq_exit_irqson(void) { - rcu_irq_exit_irqson(); + unsigned long flags; + + local_irq_save(flags); + ct_irq_exit(); + local_irq_restore(flags); } noinstr void ct_nmi_enter(void) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 051fed0844b67..75b433dba4276 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -789,31 +789,6 @@ noinstr void rcu_nmi_exit(void) rcu_dynticks_task_enter(); } -/** - * rcu_irq_exit - inform RCU that current CPU is exiting irq towards idle - * - * Exit from an interrupt handler, which might possibly result in entering - * idle mode, in other words, leaving the mode in which read-side critical - * sections can occur. The caller must have disabled interrupts. - * - * This code assumes that the idle loop never does anything that might - * result in unbalanced calls to irq_enter() and irq_exit(). If your - * architecture's idle loop violates this assumption, RCU will give you what - * you deserve, good and hard. But very infrequently and irreproducibly. - * - * Use things like work queues to work around this limitation. - * - * You have been warned. - * - * If you add or remove a call to rcu_irq_exit(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -void noinstr rcu_irq_exit(void) -{ - lockdep_assert_irqs_disabled(); - rcu_nmi_exit(); -} - #ifdef CONFIG_PROVE_RCU /** * rcu_irq_exit_check_preempt - Validate that scheduling is possible @@ -832,21 +807,6 @@ void rcu_irq_exit_check_preempt(void) } #endif /* #ifdef CONFIG_PROVE_RCU */ -/* - * Wrapper for rcu_irq_exit() where interrupts are enabled. - * - * If you add or remove a call to rcu_irq_exit_irqson(), be sure to test - * with CONFIG_RCU_EQS_DEBUG=y. 
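For reference, the *_irqson() wrappers introduced above follow the standard save/disable/restore idiom for exposing an irqs-disabled-only entry point to callers that still have interrupts enabled. Here is a minimal user-space sketch of that idiom; the irqs_enabled flag and the save/restore helpers are hypothetical stand-ins for the CPU interrupt state, local_irq_save()/local_irq_restore(), and lockdep_assert_irqs_disabled():

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;		/* models the CPU interrupt flag */

static unsigned long save_irqs(void)		/* local_irq_save() stand-in */
{
	unsigned long flags = irqs_enabled;
	irqs_enabled = false;
	return flags;
}

static void restore_irqs(unsigned long flags)	/* local_irq_restore() stand-in */
{
	irqs_enabled = flags;
}

/* Core entry point: legal only with interrupts disabled. */
static void irq_enter_core(void)
{
	assert(!irqs_enabled);			/* lockdep_assert_irqs_disabled() */
	/* ... the ct_nmi_enter() work would go here ... */
}

/* Wrapper for callers that run with interrupts enabled. */
static void irq_enter_irqson(void)
{
	unsigned long flags = save_irqs();
	irq_enter_core();
	restore_irqs(flags);
}

int main(void)
{
	irq_enter_irqson();
	printf("irqs enabled again: %d\n", irqs_enabled);
	return 0;
}

Keeping the core entry point irqs-disabled-only lets the common, already-disabled path skip the flag save/restore entirely, which is why both variants exist.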
- */ -void rcu_irq_exit_irqson(void) -{ - unsigned long flags; - - local_irq_save(flags); - rcu_irq_exit(); - local_irq_restore(flags); -} - /* * Exit an RCU extended quiescent state, which can be either the * idle loop or adaptive-tickless usermode execution. @@ -1041,49 +1001,6 @@ noinstr void rcu_nmi_enter(void) barrier(); } -/** - * rcu_irq_enter - inform RCU that current CPU is entering irq away from idle - * - * Enter an interrupt handler, which might possibly result in exiting - * idle mode, in other words, entering the mode in which read-side critical - * sections can occur. The caller must have disabled interrupts. - * - * Note that the Linux kernel is fully capable of entering an interrupt - * handler that it never exits, for example when doing upcalls to user mode! - * This code assumes that the idle loop never does upcalls to user mode. - * If your architecture's idle loop does do upcalls to user mode (or does - * anything else that results in unbalanced calls to the irq_enter() and - * irq_exit() functions), RCU will give you what you deserve, good and hard. - * But very infrequently and irreproducibly. - * - * Use things like work queues to work around this limitation. - * - * You have been warned. - * - * If you add or remove a call to rcu_irq_enter(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -noinstr void rcu_irq_enter(void) -{ - lockdep_assert_irqs_disabled(); - rcu_nmi_enter(); -} - -/* - * Wrapper for rcu_irq_enter() where interrupts are enabled. - * - * If you add or remove a call to rcu_irq_enter_irqson(), be sure to test - * with CONFIG_RCU_EQS_DEBUG=y. - */ -void rcu_irq_enter_irqson(void) -{ - unsigned long flags; - - local_irq_save(flags); - rcu_irq_enter(); - local_irq_restore(flags); -} - /* * Check to see if any future non-offloaded RCU-related work will need * to be done by the current CPU, even if none need be done immediately, From patchwork Mon Jun 20 23:10:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A42ECCA480 for ; Mon, 20 Jun 2022 23:14:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346242AbiFTXOq (ORCPT ); Mon, 20 Jun 2022 19:14:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346299AbiFTXMI (ORCPT ); Mon, 20 Jun 2022 19:12:08 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A987C222AA; Mon, 20 Jun 2022 16:10:45 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2B73861504; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 14E41C341D7; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=M4o4K0D5D4UpDBxuMma4e0nsCYtSp7K/AUgpBlW9SPc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Tif+b9dLWLt1H7vOglNv0yl1pUAPB/EIsLdh5HrBcZZWUMfN+HpvrfGF+DNKQNrOk bxzB5Iv+B8R89tOp8c2EVa5E6QUXUXZfNsFA9a/y/2u03ZTfwEofgPfXtnPDs6VDZ4 VaLE/HaQ3BtEqfYbQSa2s9xTwFxnNuQj9nKRfJrds6h4oGuMn11KRWXO927IcT5D3H eAQK5U77e1xTgW98egnVWzIJn9Wb3mxY0c+hwJcoa+bo91gqlVNQv2gVfddb/C8goH KlDfX1wbkHqmMLOMXkuBZ0beGP7Mgu76KOnKhXurWSC40T/FOROOtvVDMvPWRTtq6b +/pzUWmnjihcQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6721B5C0E52; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 14/23] rcu/context_tracking: Move dynticks counter to context tracking Date: Mon, 20 Jun 2022 16:10:20 -0700 Message-Id: <20220620231029.3844583-14-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker In order to prepare for merging RCU dynticks counter into the context tracking state, move the rcu_data's dynticks field to the context tracking structure. It will later be mixed within the context tracking state itself. [ paulmck: Move enum ctx_state into global scope. ] Acked-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- include/linux/context_tracking_state.h | 43 ++++++++++++++++---- kernel/context_tracking.c | 10 +++-- kernel/rcu/tree.c | 56 +++++++++++++------------- kernel/rcu/tree.h | 1 - kernel/rcu/tree_exp.h | 2 +- kernel/rcu/tree_stall.h | 4 +- 6 files changed, 73 insertions(+), 43 deletions(-) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index 9c16a8b2c1947..6d50d08b4933a 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -6,7 +6,15 @@ #include #include +enum ctx_state { + CONTEXT_DISABLED = -1, /* returned by ct_state() if unknown */ + CONTEXT_KERNEL = 0, + CONTEXT_USER, + CONTEXT_GUEST, +}; + struct context_tracking { +#ifdef CONFIG_CONTEXT_TRACKING_USER /* * When active is false, probes are unset in order * to minimize overhead: TIF flags are cleared @@ -15,17 +23,38 @@ struct context_tracking { */ bool active; int recursion; - enum ctx_state { - CONTEXT_DISABLED = -1, /* returned by ct_state() if unknown */ - CONTEXT_KERNEL = 0, - CONTEXT_USER, - CONTEXT_GUEST, - } state; + enum ctx_state state; +#endif +#ifdef CONFIG_CONTEXT_TRACKING_IDLE + atomic_t dynticks; /* Even value for idle, else odd. */ +#endif }; +#ifdef CONFIG_CONTEXT_TRACKING +DECLARE_PER_CPU(struct context_tracking, context_tracking); +#endif + +#ifdef CONFIG_CONTEXT_TRACKING_IDLE +static __always_inline int ct_dynticks(void) +{ + return atomic_read(this_cpu_ptr(&context_tracking.dynticks)); +} + +static __always_inline int ct_dynticks_cpu(int cpu) +{ + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); + return atomic_read(&ct->dynticks); +} + +static __always_inline int ct_dynticks_cpu_acquire(int cpu) +{ + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); + return atomic_read_acquire(&ct->dynticks); +} +#endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ + #ifdef CONFIG_CONTEXT_TRACKING_USER extern struct static_key_false context_tracking_key; -DECLARE_PER_CPU(struct context_tracking, context_tracking); static __always_inline bool context_tracking_enabled(void) { diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 96406daf5f54b..22b4ee56a7c97 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -23,6 +23,13 @@ #include +DEFINE_PER_CPU(struct context_tracking, context_tracking) = { +#ifdef CONFIG_CONTEXT_TRACKING_IDLE + .dynticks = ATOMIC_INIT(1), +#endif +}; +EXPORT_SYMBOL_GPL(context_tracking); + #ifdef CONFIG_CONTEXT_TRACKING_IDLE noinstr void ct_idle_enter(void) { @@ -138,9 +145,6 @@ noinstr void ct_nmi_exit(void) DEFINE_STATIC_KEY_FALSE(context_tracking_key); EXPORT_SYMBOL_GPL(context_tracking_key); -DEFINE_PER_CPU(struct context_tracking, context_tracking); -EXPORT_SYMBOL_GPL(context_tracking); - static noinstr bool context_tracking_recursion_enter(void) { int recursion; diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 75b433dba4276..a471edc3d8938 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -77,7 +77,6 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = { .dynticks_nesting = 1, .dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE, - .dynticks = ATOMIC_INIT(1), #ifdef CONFIG_RCU_NOCB_CPU .cblist.flags = SEGCBLIST_RCU_CORE, #endif @@ -268,7 +267,7 @@ void rcu_softirq_qs(void) */ static noinline noinstr unsigned long rcu_dynticks_inc(int incby) { - return arch_atomic_add_return(incby, this_cpu_ptr(&rcu_data.dynticks)); + return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks)); } /* @@ -324,9 
+323,7 @@ static noinstr void rcu_dynticks_eqs_exit(void) */ static void rcu_dynticks_eqs_online(void) { - struct rcu_data *rdp = this_cpu_ptr(&rcu_data); - - if (atomic_read(&rdp->dynticks) & 0x1) + if (ct_dynticks() & 0x1) return; rcu_dynticks_inc(1); } @@ -338,17 +335,17 @@ static void rcu_dynticks_eqs_online(void) */ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) { - return !(arch_atomic_read(this_cpu_ptr(&rcu_data.dynticks)) & 0x1); + return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1); } /* * Snapshot the ->dynticks counter with full ordering so as to allow * stable comparison of this counter with past and future snapshots. */ -static int rcu_dynticks_snap(struct rcu_data *rdp) +static int rcu_dynticks_snap(int cpu) { smp_mb(); // Fundamental RCU ordering guarantee. - return atomic_read_acquire(&rdp->dynticks); + return ct_dynticks_cpu_acquire(cpu); } /* @@ -363,9 +360,7 @@ static bool rcu_dynticks_in_eqs(int snap) /* Return true if the specified CPU is currently idle from an RCU viewpoint. */ bool rcu_is_idle_cpu(int cpu) { - struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); - - return rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp)); + return rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu)); } /* @@ -375,7 +370,7 @@ bool rcu_is_idle_cpu(int cpu) */ static bool rcu_dynticks_in_eqs_since(struct rcu_data *rdp, int snap) { - return snap != rcu_dynticks_snap(rdp); + return snap != rcu_dynticks_snap(rdp->cpu); } /* @@ -384,11 +379,10 @@ static bool rcu_dynticks_in_eqs_since(struct rcu_data *rdp, int snap) */ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp) { - struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); int snap; // If not quiescent, force back to earlier extended quiescent state. - snap = atomic_read(&rdp->dynticks) & ~0x1; + snap = ct_dynticks_cpu(cpu) & ~0x1; smp_rmb(); // Order ->dynticks and *vp reads. if (READ_ONCE(*vp)) @@ -396,7 +390,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp) smp_rmb(); // Order *vp read and ->dynticks re-read. // If still in the same extended quiescent state, we are good! - return snap == atomic_read(&rdp->dynticks); + return snap == ct_dynticks_cpu(cpu); } /* @@ -620,6 +614,7 @@ EXPORT_SYMBOL_GPL(rcutorture_get_gp_data); static noinstr void rcu_eqs_enter(bool user) { struct rcu_data *rdp = this_cpu_ptr(&rcu_data); + struct context_tracking *ct = this_cpu_ptr(&context_tracking); WARN_ON_ONCE(rdp->dynticks_nmi_nesting != DYNTICK_IRQ_NONIDLE); WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); @@ -633,12 +628,12 @@ static noinstr void rcu_eqs_enter(bool user) instrumentation_begin(); lockdep_assert_irqs_disabled(); - trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, atomic_read(&rdp->dynticks)); + trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); rcu_preempt_deferred_qs(current); // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&rdp->dynticks, sizeof(rdp->dynticks)); + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); instrumentation_end(); WRITE_ONCE(rdp->dynticks_nesting, 0); /* Avoid irq-access tearing. 
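The snapshot/compare helpers above are the heart of remote quiescent-state detection: an even ->dynticks value means the CPU is idle right now, and a counter that has moved since a snapshot means the CPU entered and/or left an extended quiescent state in the meantime. A self-contained C11 model of the pattern, with plain atomics standing in for the kernel's arch_atomic_*() operations and memory-barrier discipline:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int dynticks = 1;		/* odd: RCU is watching this CPU */

static int dynticks_snap(void)		/* rcu_dynticks_snap() analogue */
{
	return atomic_load_explicit(&dynticks, memory_order_acquire);
}

static bool in_eqs(int snap)		/* even snapshot => idle at that time */
{
	return !(snap & 0x1);
}

static bool in_eqs_since(int snap)	/* counter moved => EQS crossed since */
{
	return snap != dynticks_snap();
}

int main(void)
{
	int snap = dynticks_snap();			/* taken while "busy" */

	atomic_fetch_add(&dynticks, 1);			/* enter EQS: 1 -> 2 */
	atomic_fetch_add(&dynticks, 1);			/* exit EQS:  2 -> 3 */

	printf("idle at snapshot: %d, EQS since snapshot: %d\n",
	       in_eqs(snap), in_eqs_since(snap));	/* prints 0, 1 */
	return 0;
}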
*/ @@ -740,7 +735,7 @@ noinstr void rcu_user_enter(void) * rcu_nmi_exit - inform RCU of exit from NMI context * * If we are returning from the outermost NMI handler that interrupted an - * RCU-idle period, update rdp->dynticks and rdp->dynticks_nmi_nesting + * RCU-idle period, update ct->dynticks and rdp->dynticks_nmi_nesting * to let the RCU grace-period handling know that the CPU is back to * being RCU-idle. * @@ -749,6 +744,7 @@ noinstr void rcu_user_enter(void) */ noinstr void rcu_nmi_exit(void) { + struct context_tracking *ct = this_cpu_ptr(&context_tracking); struct rcu_data *rdp = this_cpu_ptr(&rcu_data); instrumentation_begin(); @@ -766,7 +762,7 @@ noinstr void rcu_nmi_exit(void) */ if (rdp->dynticks_nmi_nesting != 1) { trace_rcu_dyntick(TPS("--="), rdp->dynticks_nmi_nesting, rdp->dynticks_nmi_nesting - 2, - atomic_read(&rdp->dynticks)); + ct_dynticks()); WRITE_ONCE(rdp->dynticks_nmi_nesting, /* No store tearing. */ rdp->dynticks_nmi_nesting - 2); instrumentation_end(); @@ -774,11 +770,11 @@ noinstr void rcu_nmi_exit(void) } /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ - trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, atomic_read(&rdp->dynticks)); + trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, ct_dynticks()); WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */ // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&rdp->dynticks, sizeof(rdp->dynticks)); + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); instrumentation_end(); // RCU is watching here ... @@ -817,6 +813,7 @@ void rcu_irq_exit_check_preempt(void) */ static void noinstr rcu_eqs_exit(bool user) { + struct context_tracking *ct = this_cpu_ptr(&context_tracking); struct rcu_data *rdp; long oldval; @@ -836,9 +833,9 @@ static void noinstr rcu_eqs_exit(bool user) instrumentation_begin(); // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&rdp->dynticks, sizeof(rdp->dynticks)); + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, atomic_read(&rdp->dynticks)); + trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); WRITE_ONCE(rdp->dynticks_nesting, 1); WARN_ON_ONCE(rdp->dynticks_nmi_nesting); @@ -944,7 +941,7 @@ void __rcu_irq_enter_check_tick(void) /** * rcu_nmi_enter - inform RCU of entry to NMI context * - * If the CPU was idle from RCU's viewpoint, update rdp->dynticks and + * If the CPU was idle from RCU's viewpoint, update ct->dynticks and * rdp->dynticks_nmi_nesting to let the RCU grace-period handling know * that the CPU is active. This implementation permits nested NMIs, as * long as the nesting level does not overflow an int. (You will probably @@ -957,6 +954,7 @@ noinstr void rcu_nmi_enter(void) { long incby = 2; struct rcu_data *rdp = this_cpu_ptr(&rcu_data); + struct context_tracking *ct = this_cpu_ptr(&context_tracking); /* Complain about underflow. 
*/ WARN_ON_ONCE(rdp->dynticks_nmi_nesting < 0); @@ -980,9 +978,9 @@ noinstr void rcu_nmi_enter(void) instrumentation_begin(); // instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs() - instrument_atomic_read(&rdp->dynticks, sizeof(rdp->dynticks)); + instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks)); // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&rdp->dynticks, sizeof(rdp->dynticks)); + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); incby = 1; } else if (!in_nmi()) { @@ -994,7 +992,7 @@ noinstr void rcu_nmi_enter(void) trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="), rdp->dynticks_nmi_nesting, - rdp->dynticks_nmi_nesting + incby, atomic_read(&rdp->dynticks)); + rdp->dynticks_nmi_nesting + incby, ct_dynticks()); instrumentation_end(); WRITE_ONCE(rdp->dynticks_nmi_nesting, /* Prevent store tearing. */ rdp->dynticks_nmi_nesting + incby); @@ -1138,7 +1136,7 @@ static void rcu_gpnum_ovf(struct rcu_node *rnp, struct rcu_data *rdp) */ static int dyntick_save_progress_counter(struct rcu_data *rdp) { - rdp->dynticks_snap = rcu_dynticks_snap(rdp); + rdp->dynticks_snap = rcu_dynticks_snap(rdp->cpu); if (rcu_dynticks_in_eqs(rdp->dynticks_snap)) { trace_rcu_fqs(rcu_state.name, rdp->gp_seq, rdp->cpu, TPS("dti")); rcu_gpnum_ovf(rdp->mynode, rdp); @@ -4142,7 +4140,7 @@ rcu_boot_init_percpu_data(int cpu) rdp->grpmask = leaf_node_cpu_bit(rdp->mynode, cpu); INIT_WORK(&rdp->strict_work, strict_work_handler); WARN_ON_ONCE(rdp->dynticks_nesting != 1); - WARN_ON_ONCE(rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp))); + WARN_ON_ONCE(rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu))); rdp->barrier_seq_snap = rcu_state.barrier_sequence; rdp->rcu_ofl_gp_seq = rcu_state.gp_seq; rdp->rcu_ofl_gp_flags = RCU_GP_CLEANED; diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 2ccf5845957df..ebb973f5b1900 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -189,7 +189,6 @@ struct rcu_data { int dynticks_snap; /* Per-GP tracking for dynticks. */ long dynticks_nesting; /* Track process nesting level. */ long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */ - atomic_t dynticks; /* Even value for idle, else odd. */ bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */ bool rcu_urgent_qs; /* GP old need light quiescent state. */ bool rcu_forced_tick; /* Forced tick to provide QS. */ diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h index 0f70f62039a90..75c22d1034c1e 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h @@ -356,7 +356,7 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp) !(rnp->qsmaskinitnext & mask)) { mask_ofl_test |= mask; } else { - snap = rcu_dynticks_snap(rdp); + snap = rcu_dynticks_snap(cpu); if (rcu_dynticks_in_eqs(snap)) mask_ofl_test |= mask; else diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h index 3556637768fd5..250fbf2e8522f 100644 --- a/kernel/rcu/tree_stall.h +++ b/kernel/rcu/tree_stall.h @@ -465,7 +465,7 @@ static void print_cpu_stall_info(int cpu) } delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq); falsepositive = rcu_is_gp_kthread_starving(NULL) && - rcu_dynticks_in_eqs(rcu_dynticks_snap(rdp)); + rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu)); rcuc_starved = rcu_is_rcuc_kthread_starving(rdp, &j); if (rcuc_starved) sprintf(buf, " rcuc=%ld jiffies(starved)", j); @@ -478,7 +478,7 @@ static void print_cpu_stall_info(int cpu) rdp->rcu_iw_pending ? 
(int)min(delta, 9UL) + '0' : "!."[!delta], ticks_value, ticks_title, - rcu_dynticks_snap(rdp) & 0xfff, + rcu_dynticks_snap(cpu) & 0xfff, rdp->dynticks_nesting, rdp->dynticks_nmi_nesting, rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart, From patchwork Mon Jun 20 23:10:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6FA3CCA485 for ; Mon, 20 Jun 2022 23:14:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346800AbiFTXOv (ORCPT ); Mon, 20 Jun 2022 19:14:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55238 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346625AbiFTXMJ (ORCPT ); Mon, 20 Jun 2022 19:12:09 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 513DB22536; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 1A8B4B81652; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 16C7AC341D9; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=xB8I41pMV8+Y5veLgFPsHktn0vjZmgf2p1c6zqutOb8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=e8tOLoZ4UMAoWQb6JxNR3CWIUz7ouO6mdB8EpPAmZGsf7limNQn2bDYM+eBGu5wRU FiZC8N9lt+vU8reI9PL8ShAnkHh7bxzBZjmELiXqG1jqLFFLKCXMTch6WVSL/w07aY ZkGBBbnuVXj21bYYYbF2qk4W+3gZbHm2VG0s/S+hMB2zuo7W5XK2ajYJfHCspS8TlC ieHCDEs26OYIWmdGldNhxbW7MdjDJ0x4eB6p0aJVJU/G/h1juPk2WLwJtOvsW7r2vU gEWFMvnmyUm+HKnYfRU8N3KSji5/Y+opOBb/kdRWBSYkG2dC1uRVIPKgHbdgM4AyJH KHVs2+AZwMe7A== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6974F5C0E69; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 15/23] rcu/context_tracking: Move dynticks_nesting to context tracking Date: Mon, 20 Jun 2022 16:10:21 -0700 Message-Id: <20220620231029.3844583-15-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The RCU eqs tracking is going to be performed by the context tracking subsystem. The related nesting counters thus need to be moved to the context tracking structure. Acked-by: Paul E. 
McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/context_tracking_state.h | 12 ++++++++++ kernel/context_tracking.c | 1 + kernel/rcu/tree.c | 31 +++++++++++++------------- kernel/rcu/tree.h | 1 - kernel/rcu/tree_stall.h | 2 +- 5 files changed, 30 insertions(+), 17 deletions(-) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index 6d50d08b4933a..c866701c7edb5 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -27,6 +27,7 @@ struct context_tracking { #endif #ifdef CONFIG_CONTEXT_TRACKING_IDLE atomic_t dynticks; /* Even value for idle, else odd. */ + long dynticks_nesting; /* Track process nesting level. */ #endif }; @@ -51,6 +52,17 @@ static __always_inline int ct_dynticks_cpu_acquire(int cpu) struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); return atomic_read_acquire(&ct->dynticks); } + +static __always_inline long ct_dynticks_nesting(void) +{ + return __this_cpu_read(context_tracking.dynticks_nesting); +} + +static __always_inline long ct_dynticks_nesting_cpu(int cpu) +{ + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); + return ct->dynticks_nesting; +} #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ #ifdef CONFIG_CONTEXT_TRACKING_USER diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 22b4ee56a7c97..ed6c062aff9fa 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -25,6 +25,7 @@ DEFINE_PER_CPU(struct context_tracking, context_tracking) = { #ifdef CONFIG_CONTEXT_TRACKING_IDLE + .dynticks_nesting = 1, .dynticks = ATOMIC_INIT(1), #endif }; diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index a471edc3d8938..f6bf328bb9cfd 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -75,7 +75,6 @@ /* Data structures. */ static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = { - .dynticks_nesting = 1, .dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE, #ifdef CONFIG_RCU_NOCB_CPU .cblist.flags = SEGCBLIST_RCU_CORE, @@ -436,7 +435,7 @@ static int rcu_is_cpu_rrupt_from_idle(void) lockdep_assert_irqs_disabled(); /* Check for counter underflows */ - RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0, + RCU_LOCKDEP_WARN(ct_dynticks_nesting() < 0, "RCU dynticks_nesting counter underflow!"); RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0, "RCU dynticks_nmi_nesting counter underflow/zero!"); @@ -452,7 +451,7 @@ static int rcu_is_cpu_rrupt_from_idle(void) WARN_ON_ONCE(!nesting && !is_idle_task(current)); /* Does CPU appear to be idle from an RCU standpoint? */ - return __this_cpu_read(rcu_data.dynticks_nesting) == 0; + return ct_dynticks_nesting() == 0; } #define DEFAULT_RCU_BLIMIT (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) ? 1000 : 10) @@ -619,16 +618,16 @@ static noinstr void rcu_eqs_enter(bool user) WARN_ON_ONCE(rdp->dynticks_nmi_nesting != DYNTICK_IRQ_NONIDLE); WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && - rdp->dynticks_nesting == 0); - if (rdp->dynticks_nesting != 1) { + ct_dynticks_nesting() == 0); + if (ct_dynticks_nesting() != 1) { // RCU will still be watching, so just do accounting and leave. 
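That accounting-only fast path deserves a closer look: with a process-level nesting counter, only the outermost idle entry (nesting dropping from 1 to 0) actually stops RCU from watching, and only the matching outermost exit (0 back to 1) restarts it; all inner levels just adjust the counter. A minimal sketch of the protocol, with an ordinary variable standing in for the per-CPU dynticks_nesting field:

#include <stdio.h>

static long nesting = 1;	/* ct->dynticks_nesting analogue; 1 = watching */

static void eqs_enter(void)
{
	if (nesting != 1) {	/* nested: just do accounting and leave */
		nesting--;
		return;
	}
	nesting = 0;		/* outermost entry: really stop watching */
	printf("RCU stops watching\n");
}

static void eqs_exit(void)
{
	if (nesting) {		/* already watching: accounting only */
		nesting++;
		return;
	}
	printf("RCU resumes watching\n");
	nesting = 1;		/* outermost exit */
}

int main(void)
{
	eqs_enter();		/* task enters idle/user mode */
	eqs_exit();		/* and comes back */
	return 0;
}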
- rdp->dynticks_nesting--; + ct->dynticks_nesting--; return; } instrumentation_begin(); lockdep_assert_irqs_disabled(); - trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, ct_dynticks()); + trace_rcu_dyntick(TPS("Start"), ct_dynticks_nesting(), 0, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); rcu_preempt_deferred_qs(current); @@ -636,7 +635,7 @@ static noinstr void rcu_eqs_enter(bool user) instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); instrumentation_end(); - WRITE_ONCE(rdp->dynticks_nesting, 0); /* Avoid irq-access tearing. */ + WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */ // RCU is watching here ... rcu_dynticks_eqs_enter(); // ... but is no longer watching here. @@ -793,7 +792,7 @@ void rcu_irq_exit_check_preempt(void) { lockdep_assert_irqs_disabled(); - RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) <= 0, + RCU_LOCKDEP_WARN(ct_dynticks_nesting() <= 0, "RCU dynticks_nesting counter underflow/zero!"); RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) != DYNTICK_IRQ_NONIDLE, @@ -819,11 +818,11 @@ static void noinstr rcu_eqs_exit(bool user) WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); rdp = this_cpu_ptr(&rcu_data); - oldval = rdp->dynticks_nesting; + oldval = ct_dynticks_nesting(); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); if (oldval) { // RCU was already watching, so just do accounting and leave. - rdp->dynticks_nesting++; + ct->dynticks_nesting++; return; } rcu_dynticks_task_exit(); @@ -835,9 +834,9 @@ static void noinstr rcu_eqs_exit(bool user) // instrumentation for the noinstr rcu_dynticks_eqs_exit() instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - trace_rcu_dyntick(TPS("End"), rdp->dynticks_nesting, 1, ct_dynticks()); + trace_rcu_dyntick(TPS("End"), ct_dynticks_nesting(), 1, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); - WRITE_ONCE(rdp->dynticks_nesting, 1); + WRITE_ONCE(ct->dynticks_nesting, 1); WARN_ON_ONCE(rdp->dynticks_nmi_nesting); WRITE_ONCE(rdp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE); instrumentation_end(); @@ -4134,12 +4133,13 @@ static void rcu_init_new_rnp(struct rcu_node *rnp_leaf) static void __init rcu_boot_init_percpu_data(int cpu) { + struct context_tracking *ct = this_cpu_ptr(&context_tracking); struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); /* Set up local state, ensuring consistent view of global state. */ rdp->grpmask = leaf_node_cpu_bit(rdp->mynode, cpu); INIT_WORK(&rdp->strict_work, strict_work_handler); - WARN_ON_ONCE(rdp->dynticks_nesting != 1); + WARN_ON_ONCE(ct->dynticks_nesting != 1); WARN_ON_ONCE(rcu_dynticks_in_eqs(rcu_dynticks_snap(cpu))); rdp->barrier_seq_snap = rcu_state.barrier_sequence; rdp->rcu_ofl_gp_seq = rcu_state.gp_seq; @@ -4164,6 +4164,7 @@ rcu_boot_init_percpu_data(int cpu) int rcutree_prepare_cpu(unsigned int cpu) { unsigned long flags; + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); struct rcu_node *rnp = rcu_get_root(); @@ -4172,7 +4173,7 @@ int rcutree_prepare_cpu(unsigned int cpu) rdp->qlen_last_fqs_check = 0; rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs); rdp->blimit = blimit; - rdp->dynticks_nesting = 1; /* CPU not up, no tearing. */ + ct->dynticks_nesting = 1; /* CPU not up, no tearing. */ raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. 
*/ /* diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index ebb973f5b1900..650ff3cf01219 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -187,7 +187,6 @@ struct rcu_data { /* 3) dynticks interface. */ int dynticks_snap; /* Per-GP tracking for dynticks. */ - long dynticks_nesting; /* Track process nesting level. */ long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */ bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */ bool rcu_urgent_qs; /* GP old need light quiescent state. */ diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h index 250fbf2e8522f..a9c82254b6c65 100644 --- a/kernel/rcu/tree_stall.h +++ b/kernel/rcu/tree_stall.h @@ -479,7 +479,7 @@ static void print_cpu_stall_info(int cpu) "!."[!delta], ticks_value, ticks_title, rcu_dynticks_snap(cpu) & 0xfff, - rdp->dynticks_nesting, rdp->dynticks_nmi_nesting, + ct_dynticks_nesting_cpu(cpu), rdp->dynticks_nmi_nesting, rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart, rcuc_starved ? buf : "", From patchwork Mon Jun 20 23:10:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21548C43334 for ; Mon, 20 Jun 2022 23:14:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243274AbiFTXOk (ORCPT ); Mon, 20 Jun 2022 19:14:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55232 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346334AbiFTXMI (ORCPT ); Mon, 20 Jun 2022 19:12:08 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4FC3C22533; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 62F21B8165E; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5A17AC341ED; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=R3hcoW5ExwknpCngoAMjefhT0YNygpom+qlLYf4jL/0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Eyfj8AXL1XhBDxx/mWyeGeicje0GqMau+mbaNztRvMH54ixQ73lnp3BzUf/LL92Ya /1p734I3r2arJ7BTXSHE9J5GkD3NayjmSTj/592Ey7PT1ufLj7RFtY2seBjTVynTjB vsT6VFIFPk1mfme6tTVri0oiwG7TCgTacEEYSb0t6mvf624Qk2iSVAo/r+yKue67xa G63NvLlQitvw2OR2nwQs3quekcXuFAKUPm3Kugt+U1C9AJoWw36LPet9BbU8yHWA4y 8593QvpBZC6qNKB5/sDetUsyXnx/TBiJerNP2L+jEdPv6yAuAyR7a3QKQNUmbOkICN 8Y98LepdRgWhQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6B7555C0FCA; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . 
McKenney" Subject: [PATCH rcu 16/23] rcu/context_tracking: Move dynticks_nmi_nesting to context tracking Date: Mon, 20 Jun 2022 16:10:22 -0700 Message-Id: <20220620231029.3844583-16-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker The RCU eqs tracking is going to be performed by the context tracking subsystem. The related nesting counters thus need to be moved to the context tracking structure. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/context_tracking_state.h | 15 ++++++++ kernel/context_tracking.c | 1 + kernel/rcu/rcu.h | 4 --- kernel/rcu/tree.c | 48 +++++++++++--------------- kernel/rcu/tree.h | 1 - kernel/rcu/tree_stall.h | 2 +- 6 files changed, 38 insertions(+), 33 deletions(-) diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index c866701c7edb5..2f957b48e24f9 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -13,6 +13,9 @@ enum ctx_state { CONTEXT_GUEST, }; +/* Offset to allow distinguishing irq vs. task-based idle entry/exit. */ +#define DYNTICK_IRQ_NONIDLE ((LONG_MAX / 2) + 1) + struct context_tracking { #ifdef CONFIG_CONTEXT_TRACKING_USER /* @@ -28,6 +31,7 @@ struct context_tracking { #ifdef CONFIG_CONTEXT_TRACKING_IDLE atomic_t dynticks; /* Even value for idle, else odd. */ long dynticks_nesting; /* Track process nesting level. */ + long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */ #endif }; @@ -63,6 +67,17 @@ static __always_inline long ct_dynticks_nesting_cpu(int cpu) struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); return ct->dynticks_nesting; } + +static __always_inline long ct_dynticks_nmi_nesting(void) +{ + return __this_cpu_read(context_tracking.dynticks_nmi_nesting); +} + +static __always_inline long ct_dynticks_nmi_nesting_cpu(int cpu) +{ + struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); + return ct->dynticks_nmi_nesting; +} #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ #ifdef CONFIG_CONTEXT_TRACKING_USER diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index ed6c062aff9fa..bf5f498b21d39 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -26,6 +26,7 @@ DEFINE_PER_CPU(struct context_tracking, context_tracking) = { #ifdef CONFIG_CONTEXT_TRACKING_IDLE .dynticks_nesting = 1, + .dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE, .dynticks = ATOMIC_INIT(1), #endif }; diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h index 4916077119f3f..7b4a88deff9ad 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h @@ -12,10 +12,6 @@ #include -/* Offset to allow distinguishing irq vs. task-based idle entry/exit. */ -#define DYNTICK_IRQ_NONIDLE ((LONG_MAX / 2) + 1) - - /* * Grace-period counter management. */ diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index f6bf328bb9cfd..006939b29e823 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -75,7 +75,6 @@ /* Data structures. 
*/ static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = { - .dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE, #ifdef CONFIG_RCU_NOCB_CPU .cblist.flags = SEGCBLIST_RCU_CORE, #endif @@ -437,11 +436,11 @@ static int rcu_is_cpu_rrupt_from_idle(void) /* Check for counter underflows */ RCU_LOCKDEP_WARN(ct_dynticks_nesting() < 0, "RCU dynticks_nesting counter underflow!"); - RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) <= 0, + RCU_LOCKDEP_WARN(ct_dynticks_nmi_nesting() <= 0, "RCU dynticks_nmi_nesting counter underflow/zero!"); /* Are we at first interrupt nesting level? */ - nesting = __this_cpu_read(rcu_data.dynticks_nmi_nesting); + nesting = ct_dynticks_nmi_nesting(); if (nesting > 1) return false; @@ -612,11 +611,10 @@ EXPORT_SYMBOL_GPL(rcutorture_get_gp_data); */ static noinstr void rcu_eqs_enter(bool user) { - struct rcu_data *rdp = this_cpu_ptr(&rcu_data); struct context_tracking *ct = this_cpu_ptr(&context_tracking); - WARN_ON_ONCE(rdp->dynticks_nmi_nesting != DYNTICK_IRQ_NONIDLE); - WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); + WARN_ON_ONCE(ct_dynticks_nmi_nesting() != DYNTICK_IRQ_NONIDLE); + WRITE_ONCE(ct->dynticks_nmi_nesting, 0); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && ct_dynticks_nesting() == 0); if (ct_dynticks_nesting() != 1) { @@ -734,7 +732,7 @@ noinstr void rcu_user_enter(void) * rcu_nmi_exit - inform RCU of exit from NMI context * * If we are returning from the outermost NMI handler that interrupted an - * RCU-idle period, update ct->dynticks and rdp->dynticks_nmi_nesting + * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting * to let the RCU grace-period handling know that the CPU is back to * being RCU-idle. * @@ -744,7 +742,6 @@ noinstr void rcu_user_enter(void) noinstr void rcu_nmi_exit(void) { struct context_tracking *ct = this_cpu_ptr(&context_tracking); - struct rcu_data *rdp = this_cpu_ptr(&rcu_data); instrumentation_begin(); /* @@ -752,25 +749,25 @@ noinstr void rcu_nmi_exit(void) * (We are exiting an NMI handler, so RCU better be paying attention * to us!) */ - WARN_ON_ONCE(rdp->dynticks_nmi_nesting <= 0); + WARN_ON_ONCE(ct_dynticks_nmi_nesting() <= 0); WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs()); /* * If the nesting level is not 1, the CPU wasn't RCU-idle, so * leave it in non-RCU-idle state. */ - if (rdp->dynticks_nmi_nesting != 1) { - trace_rcu_dyntick(TPS("--="), rdp->dynticks_nmi_nesting, rdp->dynticks_nmi_nesting - 2, + if (ct_dynticks_nmi_nesting() != 1) { + trace_rcu_dyntick(TPS("--="), ct_dynticks_nmi_nesting(), ct_dynticks_nmi_nesting() - 2, ct_dynticks()); - WRITE_ONCE(rdp->dynticks_nmi_nesting, /* No store tearing. */ - rdp->dynticks_nmi_nesting - 2); + WRITE_ONCE(ct->dynticks_nmi_nesting, /* No store tearing. */ + ct_dynticks_nmi_nesting() - 2); instrumentation_end(); return; } /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ - trace_rcu_dyntick(TPS("Startirq"), rdp->dynticks_nmi_nesting, 0, ct_dynticks()); - WRITE_ONCE(rdp->dynticks_nmi_nesting, 0); /* Avoid store tearing. */ + trace_rcu_dyntick(TPS("Startirq"), ct_dynticks_nmi_nesting(), 0, ct_dynticks()); + WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. 
*/ // instrumentation for the noinstr rcu_dynticks_eqs_enter() instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); @@ -794,7 +791,7 @@ void rcu_irq_exit_check_preempt(void) RCU_LOCKDEP_WARN(ct_dynticks_nesting() <= 0, "RCU dynticks_nesting counter underflow/zero!"); - RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) != + RCU_LOCKDEP_WARN(ct_dynticks_nmi_nesting() != DYNTICK_IRQ_NONIDLE, "Bad RCU dynticks_nmi_nesting counter\n"); RCU_LOCKDEP_WARN(rcu_dynticks_curr_cpu_in_eqs(), @@ -813,11 +810,9 @@ void rcu_irq_exit_check_preempt(void) static void noinstr rcu_eqs_exit(bool user) { struct context_tracking *ct = this_cpu_ptr(&context_tracking); - struct rcu_data *rdp; long oldval; WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); - rdp = this_cpu_ptr(&rcu_data); oldval = ct_dynticks_nesting(); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); if (oldval) { @@ -837,8 +832,8 @@ static void noinstr rcu_eqs_exit(bool user) trace_rcu_dyntick(TPS("End"), ct_dynticks_nesting(), 1, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); WRITE_ONCE(ct->dynticks_nesting, 1); - WARN_ON_ONCE(rdp->dynticks_nmi_nesting); - WRITE_ONCE(rdp->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE); + WARN_ON_ONCE(ct_dynticks_nmi_nesting()); + WRITE_ONCE(ct->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE); instrumentation_end(); } @@ -941,7 +936,7 @@ void __rcu_irq_enter_check_tick(void) * rcu_nmi_enter - inform RCU of entry to NMI context * * If the CPU was idle from RCU's viewpoint, update ct->dynticks and - * rdp->dynticks_nmi_nesting to let the RCU grace-period handling know + * ct->dynticks_nmi_nesting to let the RCU grace-period handling know * that the CPU is active. This implementation permits nested NMIs, as * long as the nesting level does not overflow an int. (You will probably * run out of stack space first.) @@ -952,11 +947,10 @@ void __rcu_irq_enter_check_tick(void) noinstr void rcu_nmi_enter(void) { long incby = 2; - struct rcu_data *rdp = this_cpu_ptr(&rcu_data); struct context_tracking *ct = this_cpu_ptr(&context_tracking); /* Complain about underflow. */ - WARN_ON_ONCE(rdp->dynticks_nmi_nesting < 0); + WARN_ON_ONCE(ct_dynticks_nmi_nesting() < 0); /* * If idle from RCU viewpoint, atomically increment ->dynticks @@ -990,11 +984,11 @@ noinstr void rcu_nmi_enter(void) } trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="), - rdp->dynticks_nmi_nesting, - rdp->dynticks_nmi_nesting + incby, ct_dynticks()); + ct_dynticks_nmi_nesting(), + ct_dynticks_nmi_nesting() + incby, ct_dynticks()); instrumentation_end(); - WRITE_ONCE(rdp->dynticks_nmi_nesting, /* Prevent store tearing. */ - rdp->dynticks_nmi_nesting + incby); + WRITE_ONCE(ct->dynticks_nmi_nesting, /* Prevent store tearing. */ + ct_dynticks_nmi_nesting() + incby); barrier(); } diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 650ff3cf01219..72dbf8512ce78 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -187,7 +187,6 @@ struct rcu_data { /* 3) dynticks interface. */ int dynticks_snap; /* Per-GP tracking for dynticks. */ - long dynticks_nmi_nesting; /* Track irq/NMI nesting level. */ bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */ bool rcu_urgent_qs; /* GP old need light quiescent state. */ bool rcu_forced_tick; /* Forced tick to provide QS. 
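The nmi_nesting bookkeeping seen here follows a small protocol: a nested entry adds 2, but an NMI or irq that finds the CPU idle from RCU's viewpoint first exits the extended quiescent state and adds only 1, so the matching outermost exit sees the value 1 and knows it must restore RCU-idleness (task-level exits instead crowbar the counter to DYNTICK_IRQ_NONIDLE, leaving ample headroom for balanced +2/-2 irq traffic). A compact model, with plain variables standing in for the per-CPU state and the atomics elided:

#include <stdio.h>

static long nmi_nesting;	/* ct->dynticks_nmi_nesting analogue */
static int watching;		/* models an odd ->dynticks value */

static void nmi_enter(void)
{
	long incby = 2;

	if (!watching) {	/* CPU was idle from RCU's viewpoint */
		watching = 1;	/* really exit the EQS ... */
		incby = 1;	/* ... and mark this as the outermost entry */
	}
	nmi_nesting += incby;
}

static void nmi_exit(void)
{
	if (nmi_nesting != 1) {	/* nested, or the CPU was never idle */
		nmi_nesting -= 2;
		return;
	}
	nmi_nesting = 0;	/* outermost exit: restore RCU-idleness */
	watching = 0;
}

int main(void)
{
	nmi_enter();		/* NMI hits an idle CPU: nesting = 1 */
	nmi_enter();		/* nested NMI:           nesting = 3 */
	nmi_exit();		/*                       nesting = 1 */
	nmi_exit();		/* idle again:           nesting = 0 */
	printf("nesting=%ld watching=%d\n", nmi_nesting, watching);
	return 0;
}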
*/ diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h index a9c82254b6c65..2683ce0a7c724 100644 --- a/kernel/rcu/tree_stall.h +++ b/kernel/rcu/tree_stall.h @@ -479,7 +479,7 @@ static void print_cpu_stall_info(int cpu) "!."[!delta], ticks_value, ticks_title, rcu_dynticks_snap(cpu) & 0xfff, - ct_dynticks_nesting_cpu(cpu), rdp->dynticks_nmi_nesting, + ct_dynticks_nesting_cpu(cpu), ct_dynticks_nmi_nesting_cpu(cpu), rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart, rcuc_starved ? buf : "", From patchwork Mon Jun 20 23:10:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E88FAC43334 for ; Mon, 20 Jun 2022 23:14:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345623AbiFTXOl (ORCPT ); Mon, 20 Jun 2022 19:14:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346466AbiFTXMI (ORCPT ); Mon, 20 Jun 2022 19:12:08 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4219B22530; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2E7C4B81657; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 537A2C341EC; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=1Hw+Z13zCUSvb97yNzRE+bRuAmwtfwyZkgGq4kgP9Sk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fwLipl4HvqrT9GqLyYFdB0MV4eACdbKDFaBBAKGwn95u9IMhUnKTLkyYZw6Kpp6FE BQE7uuVyg4Bi8XXq3+IRfnocivg80zTtIjdDZMuLEDKhTvuXtaG0IG12BI75gZLC+C i+mR5jXDgze1N5IlBlcXUlDT/ZCGKhgtKGbDpc4+VB8nL7zsFfOjR/VZ0YkKZ+FCL/ F4GOAYgpuW9/FQdIegCuL3PM7m30iJsNzbfmmtKTbCh7WSXyOI1fadyLD2iwjWrw4n 5ISWY5J4HlDBZLtw//dzePR3H1fMtXSG2VAzH26De9j0Tha3+x7kQPpKqGpezd13JU QD7hrayiv3dDw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6D8CE5C118A; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . 
McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 17/23] rcu/context-tracking: Move deferred nocb resched to context tracking Date: Mon, 20 Jun 2022 16:10:23 -0700 Message-Id: <20220620231029.3844583-17-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker To prepare for migrating the RCU eqs accounting code to context tracking, split the last-resort deferred nocb resched from rcu_user_enter() and move it into a separate call from context tracking. Acked-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/rcupdate.h | 6 ++++++ kernel/context_tracking.c | 8 ++++++++ kernel/rcu/tree.c | 15 ++------------- 3 files changed, 16 insertions(+), 13 deletions(-) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index f1562d91c67d2..3717cad983a67 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -112,6 +112,12 @@ static inline void rcu_user_enter(void) { } static inline void rcu_user_exit(void) { } #endif /* CONFIG_NO_HZ_FULL */ +#if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)) +void rcu_irq_work_resched(void); +#else +static inline void rcu_irq_work_resched(void) { } +#endif + #ifdef CONFIG_RCU_NOCB_CPU void rcu_init_nohz(void); int rcu_nocb_cpu_offload(int cpu); diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index bf5f498b21d39..8affa0092fab5 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -177,6 +177,8 @@ static __always_inline void context_tracking_recursion_exit(void) */ void noinstr __ct_user_enter(enum ctx_state state) { + lockdep_assert_irqs_disabled(); + /* Kernel threads aren't supposed to go to userspace */ WARN_ON_ONCE(!current->mm); @@ -198,6 +200,12 @@ void noinstr __ct_user_enter(enum ctx_state state) vtime_user_enter(current); instrumentation_end(); } + /* + * Other than generic entry implementation, we may be past the last + * rescheduling opportunity in the entry code. Trigger a self IPI + * that will fire and reschedule once we resume in user/guest mode. + */ + rcu_irq_work_resched(); rcu_user_enter(); } /* diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 006939b29e823..8c0c3490532e3 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -681,7 +681,7 @@ static DEFINE_PER_CPU(struct irq_work, late_wakeup_work) = * last resort is to fire a local irq_work that will trigger a reschedule once IRQs * get re-enabled again. 
*/ -noinstr static void rcu_irq_work_resched(void) +noinstr void rcu_irq_work_resched(void) { struct rcu_data *rdp = this_cpu_ptr(&rcu_data); @@ -697,10 +697,7 @@ noinstr static void rcu_irq_work_resched(void) } instrumentation_end(); } - -#else -static inline void rcu_irq_work_resched(void) { } -#endif +#endif /* #if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK) */ /** * rcu_user_enter - inform RCU that we are resuming userspace. @@ -715,14 +712,6 @@ static inline void rcu_irq_work_resched(void) { } */ noinstr void rcu_user_enter(void) { - lockdep_assert_irqs_disabled(); - - /* - * Other than generic entry implementation, we may be past the last - * rescheduling opportunity in the entry code. Trigger a self IPI - * that will fire and reschedule once we resume in user/guest mode. - */ - rcu_irq_work_resched(); rcu_eqs_enter(true); } From patchwork Mon Jun 20 23:10:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17640CCA482 for ; Mon, 20 Jun 2022 23:14:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244977AbiFTXOs (ORCPT ); Mon, 20 Jun 2022 19:14:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53254 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245644AbiFTXMJ (ORCPT ); Mon, 20 Jun 2022 19:12:09 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 500E622534; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 374DDB8165A; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 56421C341F0; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=YeTYcSn/q+/bSqE1a+j6Cm44dt+QJ9lusb/hk0TUc6A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X/aX2nMMnHZ2uPJpGGS4p+BYEWFZ55dlyZbfjThNtSXYRvDdZqru5zZ9wtHQC6GKu ErXo3W3rPbGIxgPgpQuQ85jYxtSecAdILBQmCnmgDLS2jKvqUab7RcHq+ux3IbVojO pz5HG8Lya+YGUZyzfTlXTvwKz5mUlQ8aOat//dUGc/ogjFDoDqOsO89Kkcs8UmJc8J 8bp8AenW6mV7qclzlb7HGsZb9kKoiL36gcckDu/1WmkgaS9es/Mzqlf1SCSpI5/Fqp WT4bp+Y666Dsw7kjGEiFxPlbEF/UEXoCBBL37sH8beJwdVGyWlNJuRO+flRkx0Aom0 tGnWr25IpT59g== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 6FEF95C11D7; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , "Paul E . 
McKenney" , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits Subject: [PATCH rcu 18/23] rcu/context-tracking: Move RCU-dynticks internal functions to context_tracking Date: Mon, 20 Jun 2022 16:10:24 -0700 Message-Id: <20220620231029.3844583-18-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Move the core RCU eqs/dynticks functions to context tracking so that we can later merge all that code within context tracking. Acked-by: Paul E. McKenney Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/context_tracking.h | 20 ++ include/linux/rcutree.h | 3 + kernel/context_tracking.c | 336 +++++++++++++++++++++++++++++++ kernel/rcu/tree.c | 324 +---------------------------- kernel/rcu/tree.h | 5 - kernel/rcu/tree_plugin.h | 38 +--- 6 files changed, 364 insertions(+), 362 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 01abadb2f9930..1f568676bc1d2 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -122,6 +122,26 @@ static inline void context_tracking_init(void) { } #ifdef CONFIG_CONTEXT_TRACKING_IDLE extern void ct_idle_enter(void); extern void ct_idle_exit(void); + +/* + * Is the current CPU in an extended quiescent state? + * + * No ordering, as we are sampling CPU-local information. + */ +static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) +{ + return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1); +} + +/* + * Increment the current CPU's context_tracking structure's ->dynticks field + * with ordering. Return the new value. + */ +static __always_inline unsigned long rcu_dynticks_inc(int incby) +{ + return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks)); +} + #else static inline void ct_idle_enter(void) { } static inline void ct_idle_exit(void) { } diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 4522b6a7cc42f..24db1e41695c8 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -55,6 +55,9 @@ void rcu_irq_exit_check_preempt(void); static inline void rcu_irq_exit_check_preempt(void) { } #endif +struct task_struct; +void rcu_preempt_deferred_qs(struct task_struct *t); + void exit_rcu(void); void rcu_scheduler_starting(void); diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 8affa0092fab5..f3e92705e0a89 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -21,6 +21,7 @@ #include #include #include +#include DEFINE_PER_CPU(struct context_tracking, context_tracking) = { @@ -33,6 +34,309 @@ DEFINE_PER_CPU(struct context_tracking, context_tracking) = { EXPORT_SYMBOL_GPL(context_tracking); #ifdef CONFIG_CONTEXT_TRACKING_IDLE +#define TPS(x) tracepoint_string(x) + +/* Record the current task on dyntick-idle entry. 
*/ +static __always_inline void rcu_dynticks_task_enter(void) +{ +#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) + WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id()); +#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */ +} + +/* Record no current task on dyntick-idle exit. */ +static __always_inline void rcu_dynticks_task_exit(void) +{ +#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) + WRITE_ONCE(current->rcu_tasks_idle_cpu, -1); +#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */ +} + +/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */ +static __always_inline void rcu_dynticks_task_trace_enter(void) +{ +#ifdef CONFIG_TASKS_TRACE_RCU + if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) + current->trc_reader_special.b.need_mb = true; +#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ +} + +/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */ +static __always_inline void rcu_dynticks_task_trace_exit(void) +{ +#ifdef CONFIG_TASKS_TRACE_RCU + if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) + current->trc_reader_special.b.need_mb = false; +#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ +} + +/* + * Record entry into an extended quiescent state. This is only to be + * called when not already in an extended quiescent state, that is, + * RCU is watching prior to the call to this function and is no longer + * watching upon return. + */ +static noinstr void rcu_dynticks_eqs_enter(void) +{ + int seq; + + /* + * CPUs seeing atomic_add_return() must see prior RCU read-side + * critical sections, and we also must force ordering with the + * next idle sojourn. + */ + rcu_dynticks_task_trace_enter(); // Before ->dynticks update! + seq = rcu_dynticks_inc(1); + // RCU is no longer watching. Better be in extended quiescent state! + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1)); +} + +/* + * Record exit from an extended quiescent state. This is only to be + * called from an extended quiescent state, that is, RCU is not watching + * prior to the call to this function and is watching upon return. + */ +static noinstr void rcu_dynticks_eqs_exit(void) +{ + int seq; + + /* + * CPUs seeing atomic_add_return() must see prior idle sojourns, + * and we also must force ordering with the next RCU read-side + * critical section. + */ + seq = rcu_dynticks_inc(1); + // RCU is now watching. Better not be in an extended quiescent state! + rcu_dynticks_task_trace_exit(); // After ->dynticks update! + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1)); +} + +/* + * Enter an RCU extended quiescent state, which can be either the + * idle loop or adaptive-tickless usermode execution. + * + * We crowbar the ->dynticks_nmi_nesting field to zero to allow for + * the possibility of usermode upcalls having messed up our count + * of interrupt nesting level during the prior busy period. + */ +static void noinstr rcu_eqs_enter(bool user) +{ + struct context_tracking *ct = this_cpu_ptr(&context_tracking); + + WARN_ON_ONCE(ct_dynticks_nmi_nesting() != DYNTICK_IRQ_NONIDLE); + WRITE_ONCE(ct->dynticks_nmi_nesting, 0); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && + ct_dynticks_nesting() == 0); + if (ct_dynticks_nesting() != 1) { + // RCU will still be watching, so just do accounting and leave. 
+ ct->dynticks_nesting--; + return; + } + + instrumentation_begin(); + lockdep_assert_irqs_disabled(); + trace_rcu_dyntick(TPS("Start"), ct_dynticks_nesting(), 0, ct_dynticks()); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); + rcu_preempt_deferred_qs(current); + + // instrumentation for the noinstr rcu_dynticks_eqs_enter() + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + + instrumentation_end(); + WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */ + // RCU is watching here ... + rcu_dynticks_eqs_enter(); + // ... but is no longer watching here. + rcu_dynticks_task_enter(); +} + +/* + * Exit an RCU extended quiescent state, which can be either the + * idle loop or adaptive-tickless usermode execution. + * + * We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to + * allow for the possibility of usermode upcalls messing up our count of + * interrupt nesting level during the busy period that is just now starting. + */ +static void noinstr rcu_eqs_exit(bool user) +{ + struct context_tracking *ct = this_cpu_ptr(&context_tracking); + long oldval; + + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); + oldval = ct_dynticks_nesting(); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); + if (oldval) { + // RCU was already watching, so just do accounting and leave. + ct->dynticks_nesting++; + return; + } + rcu_dynticks_task_exit(); + // RCU is not watching here ... + rcu_dynticks_eqs_exit(); + // ... but is watching here. + instrumentation_begin(); + + // instrumentation for the noinstr rcu_dynticks_eqs_exit() + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + + trace_rcu_dyntick(TPS("End"), ct_dynticks_nesting(), 1, ct_dynticks()); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); + WRITE_ONCE(ct->dynticks_nesting, 1); + WARN_ON_ONCE(ct_dynticks_nmi_nesting()); + WRITE_ONCE(ct->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE); + instrumentation_end(); +} + +/** + * rcu_nmi_exit - inform RCU of exit from NMI context + * + * If we are returning from the outermost NMI handler that interrupted an + * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting + * to let the RCU grace-period handling know that the CPU is back to + * being RCU-idle. + * + * If you add or remove a call to rcu_nmi_exit(), be sure to test + * with CONFIG_RCU_EQS_DEBUG=y. + */ +void noinstr rcu_nmi_exit(void) +{ + struct context_tracking *ct = this_cpu_ptr(&context_tracking); + + instrumentation_begin(); + /* + * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks. + * (We are exiting an NMI handler, so RCU better be paying attention + * to us!) + */ + WARN_ON_ONCE(ct_dynticks_nmi_nesting() <= 0); + WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs()); + + /* + * If the nesting level is not 1, the CPU wasn't RCU-idle, so + * leave it in non-RCU-idle state. + */ + if (ct_dynticks_nmi_nesting() != 1) { + trace_rcu_dyntick(TPS("--="), ct_dynticks_nmi_nesting(), ct_dynticks_nmi_nesting() - 2, + ct_dynticks()); + WRITE_ONCE(ct->dynticks_nmi_nesting, /* No store tearing. */ + ct_dynticks_nmi_nesting() - 2); + instrumentation_end(); + return; + } + + /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ + trace_rcu_dyntick(TPS("Startirq"), ct_dynticks_nmi_nesting(), 0, ct_dynticks()); + WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. 
*/ + + // instrumentation for the noinstr rcu_dynticks_eqs_enter() + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + instrumentation_end(); + + // RCU is watching here ... + rcu_dynticks_eqs_enter(); + // ... but is no longer watching here. + + if (!in_nmi()) + rcu_dynticks_task_enter(); +} + +/** + * rcu_nmi_enter - inform RCU of entry to NMI context + * + * If the CPU was idle from RCU's viewpoint, update ct->dynticks and + * ct->dynticks_nmi_nesting to let the RCU grace-period handling know + * that the CPU is active. This implementation permits nested NMIs, as + * long as the nesting level does not overflow an int. (You will probably + * run out of stack space first.) + * + * If you add or remove a call to rcu_nmi_enter(), be sure to test + * with CONFIG_RCU_EQS_DEBUG=y. + */ +void noinstr rcu_nmi_enter(void) +{ + long incby = 2; + struct context_tracking *ct = this_cpu_ptr(&context_tracking); + + /* Complain about underflow. */ + WARN_ON_ONCE(ct_dynticks_nmi_nesting() < 0); + + /* + * If idle from RCU viewpoint, atomically increment ->dynticks + * to mark non-idle and increment ->dynticks_nmi_nesting by one. + * Otherwise, increment ->dynticks_nmi_nesting by two. This means + * if ->dynticks_nmi_nesting is equal to one, we are guaranteed + * to be in the outermost NMI handler that interrupted an RCU-idle + * period (observation due to Andy Lutomirski). + */ + if (rcu_dynticks_curr_cpu_in_eqs()) { + + if (!in_nmi()) + rcu_dynticks_task_exit(); + + // RCU is not watching here ... + rcu_dynticks_eqs_exit(); + // ... but is watching here. + + instrumentation_begin(); + // instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs() + instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks)); + // instrumentation for the noinstr rcu_dynticks_eqs_exit() + instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + + incby = 1; + } else if (!in_nmi()) { + instrumentation_begin(); + rcu_irq_enter_check_tick(); + } else { + instrumentation_begin(); + } + + trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="), + ct_dynticks_nmi_nesting(), + ct_dynticks_nmi_nesting() + incby, ct_dynticks()); + instrumentation_end(); + WRITE_ONCE(ct->dynticks_nmi_nesting, /* Prevent store tearing. */ + ct_dynticks_nmi_nesting() + incby); + barrier(); +} + +/** + * rcu_idle_enter - inform RCU that current CPU is entering idle + * + * Enter idle mode, in other words, -leave- the mode in which RCU + * read-side critical sections can occur. (Though RCU read-side + * critical sections can occur in irq handlers in idle, a possibility + * handled by irq_enter() and irq_exit().) + * + * If you add or remove a call to rcu_idle_enter(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. + */ +void noinstr rcu_idle_enter(void) +{ + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); + rcu_eqs_enter(false); +} + +/** + * rcu_idle_exit - inform RCU that current CPU is leaving idle + * + * Exit idle mode, in other words, -enter- the mode in which RCU + * read-side critical sections can occur. + * + * If you add or remove a call to rcu_idle_exit(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. 
+ */ +void noinstr rcu_idle_exit(void) +{ + unsigned long flags; + + raw_local_irq_save(flags); + rcu_eqs_exit(false); + raw_local_irq_restore(flags); +} +EXPORT_SYMBOL_GPL(rcu_idle_exit); + noinstr void ct_idle_enter(void) { rcu_idle_enter(); @@ -139,6 +443,38 @@ noinstr void ct_nmi_exit(void) } #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ +#ifdef CONFIG_NO_HZ_FULL +/** + * rcu_user_enter - inform RCU that we are resuming userspace. + * + * Enter RCU idle mode right before resuming userspace. No use of RCU + * is permitted between this call and rcu_user_exit(). This way the + * CPU doesn't need to maintain the tick for RCU maintenance purposes + * when the CPU runs in userspace. + * + * If you add or remove a call to rcu_user_enter(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. + */ +noinstr void rcu_user_enter(void) +{ + rcu_eqs_enter(true); +} + +/** + * rcu_user_exit - inform RCU that we are exiting userspace. + * + * Exit RCU idle mode while entering the kernel because it can + * run a RCU read side critical section anytime. + * + * If you add or remove a call to rcu_user_exit(), be sure to test with + * CONFIG_RCU_EQS_DEBUG=y. + */ +void noinstr rcu_user_exit(void) +{ + rcu_eqs_exit(true); +} +#endif /* #ifdef CONFIG_NO_HZ_FULL */ + #ifdef CONFIG_CONTEXT_TRACKING_USER #define CREATE_TRACE_POINTS diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 8c0c3490532e3..e2a2083079a2c 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -62,6 +62,7 @@ #include #include #include +#include #include "../time/tick-internal.h" #include "tree.h" @@ -259,56 +260,6 @@ void rcu_softirq_qs(void) rcu_tasks_qs(current, false); } -/* - * Increment the current CPU's rcu_data structure's ->dynticks field - * with ordering. Return the new value. - */ -static noinline noinstr unsigned long rcu_dynticks_inc(int incby) -{ - return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks)); -} - -/* - * Record entry into an extended quiescent state. This is only to be - * called when not already in an extended quiescent state, that is, - * RCU is watching prior to the call to this function and is no longer - * watching upon return. - */ -static noinstr void rcu_dynticks_eqs_enter(void) -{ - int seq; - - /* - * CPUs seeing atomic_add_return() must see prior RCU read-side - * critical sections, and we also must force ordering with the - * next idle sojourn. - */ - rcu_dynticks_task_trace_enter(); // Before ->dynticks update! - seq = rcu_dynticks_inc(1); - // RCU is no longer watching. Better be in extended quiescent state! - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1)); -} - -/* - * Record exit from an extended quiescent state. This is only to be - * called from an extended quiescent state, that is, RCU is not watching - * prior to the call to this function and is watching upon return. - */ -static noinstr void rcu_dynticks_eqs_exit(void) -{ - int seq; - - /* - * CPUs seeing atomic_add_return() must see prior idle sojourns, - * and we also must force ordering with the next RCU read-side - * critical section. - */ - seq = rcu_dynticks_inc(1); - // RCU is now watching. Better not be in an extended quiescent state! - rcu_dynticks_task_trace_exit(); // After ->dynticks update! - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1)); -} - /* * Reset the current CPU's ->dynticks counter to indicate that the * newly onlined CPU is no longer in an extended quiescent state. 
@@ -326,16 +277,6 @@ static void rcu_dynticks_eqs_online(void) rcu_dynticks_inc(1); } -/* - * Is the current CPU in an extended quiescent state? - * - * No ordering, as we are sampling CPU-local information. - */ -static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) -{ - return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1); -} - /* * Snapshot the ->dynticks counter with full ordering so as to allow * stable comparison of this counter with past and future snapshots. @@ -601,65 +542,7 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, } EXPORT_SYMBOL_GPL(rcutorture_get_gp_data); -/* - * Enter an RCU extended quiescent state, which can be either the - * idle loop or adaptive-tickless usermode execution. - * - * We crowbar the ->dynticks_nmi_nesting field to zero to allow for - * the possibility of usermode upcalls having messed up our count - * of interrupt nesting level during the prior busy period. - */ -static noinstr void rcu_eqs_enter(bool user) -{ - struct context_tracking *ct = this_cpu_ptr(&context_tracking); - - WARN_ON_ONCE(ct_dynticks_nmi_nesting() != DYNTICK_IRQ_NONIDLE); - WRITE_ONCE(ct->dynticks_nmi_nesting, 0); - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && - ct_dynticks_nesting() == 0); - if (ct_dynticks_nesting() != 1) { - // RCU will still be watching, so just do accounting and leave. - ct->dynticks_nesting--; - return; - } - - instrumentation_begin(); - lockdep_assert_irqs_disabled(); - trace_rcu_dyntick(TPS("Start"), ct_dynticks_nesting(), 0, ct_dynticks()); - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); - rcu_preempt_deferred_qs(current); - - // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - - instrumentation_end(); - WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */ - // RCU is watching here ... - rcu_dynticks_eqs_enter(); - // ... but is no longer watching here. - rcu_dynticks_task_enter(); -} - -/** - * rcu_idle_enter - inform RCU that current CPU is entering idle - * - * Enter idle mode, in other words, -leave- the mode in which RCU - * read-side critical sections can occur. (Though RCU read-side - * critical sections can occur in irq handlers in idle, a possibility - * handled by irq_enter() and irq_exit().) - * - * If you add or remove a call to rcu_idle_enter(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -void noinstr rcu_idle_enter(void) -{ - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); - rcu_eqs_enter(false); -} - -#ifdef CONFIG_NO_HZ_FULL - -#if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK) +#if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)) /* * An empty function that will trigger a reschedule on * IRQ tail once IRQs get re-enabled on userspace/guest resume. @@ -697,78 +580,7 @@ noinstr void rcu_irq_work_resched(void) } instrumentation_end(); } -#endif /* #if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK) */ - -/** - * rcu_user_enter - inform RCU that we are resuming userspace. - * - * Enter RCU idle mode right before resuming userspace. No use of RCU - * is permitted between this call and rcu_user_exit(). This way the - * CPU doesn't need to maintain the tick for RCU maintenance purposes - * when the CPU runs in userspace. 
- * - * If you add or remove a call to rcu_user_enter(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -noinstr void rcu_user_enter(void) -{ - rcu_eqs_enter(true); -} - -#endif /* CONFIG_NO_HZ_FULL */ - -/** - * rcu_nmi_exit - inform RCU of exit from NMI context - * - * If we are returning from the outermost NMI handler that interrupted an - * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting - * to let the RCU grace-period handling know that the CPU is back to - * being RCU-idle. - * - * If you add or remove a call to rcu_nmi_exit(), be sure to test - * with CONFIG_RCU_EQS_DEBUG=y. - */ -noinstr void rcu_nmi_exit(void) -{ - struct context_tracking *ct = this_cpu_ptr(&context_tracking); - - instrumentation_begin(); - /* - * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks. - * (We are exiting an NMI handler, so RCU better be paying attention - * to us!) - */ - WARN_ON_ONCE(ct_dynticks_nmi_nesting() <= 0); - WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs()); - - /* - * If the nesting level is not 1, the CPU wasn't RCU-idle, so - * leave it in non-RCU-idle state. - */ - if (ct_dynticks_nmi_nesting() != 1) { - trace_rcu_dyntick(TPS("--="), ct_dynticks_nmi_nesting(), ct_dynticks_nmi_nesting() - 2, - ct_dynticks()); - WRITE_ONCE(ct->dynticks_nmi_nesting, /* No store tearing. */ - ct_dynticks_nmi_nesting() - 2); - instrumentation_end(); - return; - } - - /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ - trace_rcu_dyntick(TPS("Startirq"), ct_dynticks_nmi_nesting(), 0, ct_dynticks()); - WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. */ - - // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - instrumentation_end(); - - // RCU is watching here ... - rcu_dynticks_eqs_enter(); - // ... but is no longer watching here. - - if (!in_nmi()) - rcu_dynticks_task_enter(); -} +#endif /* #if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)) */ #ifdef CONFIG_PROVE_RCU /** @@ -788,77 +600,7 @@ void rcu_irq_exit_check_preempt(void) } #endif /* #ifdef CONFIG_PROVE_RCU */ -/* - * Exit an RCU extended quiescent state, which can be either the - * idle loop or adaptive-tickless usermode execution. - * - * We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to - * allow for the possibility of usermode upcalls messing up our count of - * interrupt nesting level during the busy period that is just now starting. - */ -static void noinstr rcu_eqs_exit(bool user) -{ - struct context_tracking *ct = this_cpu_ptr(&context_tracking); - long oldval; - - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); - oldval = ct_dynticks_nesting(); - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); - if (oldval) { - // RCU was already watching, so just do accounting and leave. - ct->dynticks_nesting++; - return; - } - rcu_dynticks_task_exit(); - // RCU is not watching here ... - rcu_dynticks_eqs_exit(); - // ... but is watching here. 
- instrumentation_begin(); - - // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - - trace_rcu_dyntick(TPS("End"), ct_dynticks_nesting(), 1, ct_dynticks()); - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); - WRITE_ONCE(ct->dynticks_nesting, 1); - WARN_ON_ONCE(ct_dynticks_nmi_nesting()); - WRITE_ONCE(ct->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE); - instrumentation_end(); -} - -/** - * rcu_idle_exit - inform RCU that current CPU is leaving idle - * - * Exit idle mode, in other words, -enter- the mode in which RCU - * read-side critical sections can occur. - * - * If you add or remove a call to rcu_idle_exit(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -void noinstr rcu_idle_exit(void) -{ - unsigned long flags; - - raw_local_irq_save(flags); - rcu_eqs_exit(false); - raw_local_irq_restore(flags); -} - #ifdef CONFIG_NO_HZ_FULL -/** - * rcu_user_exit - inform RCU that we are exiting userspace. - * - * Exit RCU idle mode while entering the kernel because it can - * run a RCU read side critical section anytime. - * - * If you add or remove a call to rcu_user_exit(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -void noinstr rcu_user_exit(void) -{ - rcu_eqs_exit(true); -} - /** * __rcu_irq_enter_check_tick - Enable scheduler tick on CPU if RCU needs it. * @@ -921,66 +663,6 @@ void __rcu_irq_enter_check_tick(void) } #endif /* CONFIG_NO_HZ_FULL */ -/** - * rcu_nmi_enter - inform RCU of entry to NMI context - * - * If the CPU was idle from RCU's viewpoint, update ct->dynticks and - * ct->dynticks_nmi_nesting to let the RCU grace-period handling know - * that the CPU is active. This implementation permits nested NMIs, as - * long as the nesting level does not overflow an int. (You will probably - * run out of stack space first.) - * - * If you add or remove a call to rcu_nmi_enter(), be sure to test - * with CONFIG_RCU_EQS_DEBUG=y. - */ -noinstr void rcu_nmi_enter(void) -{ - long incby = 2; - struct context_tracking *ct = this_cpu_ptr(&context_tracking); - - /* Complain about underflow. */ - WARN_ON_ONCE(ct_dynticks_nmi_nesting() < 0); - - /* - * If idle from RCU viewpoint, atomically increment ->dynticks - * to mark non-idle and increment ->dynticks_nmi_nesting by one. - * Otherwise, increment ->dynticks_nmi_nesting by two. This means - * if ->dynticks_nmi_nesting is equal to one, we are guaranteed - * to be in the outermost NMI handler that interrupted an RCU-idle - * period (observation due to Andy Lutomirski). - */ - if (rcu_dynticks_curr_cpu_in_eqs()) { - - if (!in_nmi()) - rcu_dynticks_task_exit(); - - // RCU is not watching here ... - rcu_dynticks_eqs_exit(); - // ... but is watching here. - - instrumentation_begin(); - // instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs() - instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks)); - // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); - - incby = 1; - } else if (!in_nmi()) { - instrumentation_begin(); - rcu_irq_enter_check_tick(); - } else { - instrumentation_begin(); - } - - trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="), - ct_dynticks_nmi_nesting(), - ct_dynticks_nmi_nesting() + incby, ct_dynticks()); - instrumentation_end(); - WRITE_ONCE(ct->dynticks_nmi_nesting, /* Prevent store tearing. 
*/ - ct_dynticks_nmi_nesting() + incby); - barrier(); -} - /* * Check to see if any future non-offloaded RCU-related work will need * to be done by the current CPU, even if none need be done immediately, diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h index 72dbf8512ce78..0d5d1de327e41 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h @@ -427,7 +427,6 @@ static void rcu_cpu_kthread_setup(unsigned int cpu); static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp); static bool rcu_preempt_has_tasks(struct rcu_node *rnp); static bool rcu_preempt_need_deferred_qs(struct task_struct *t); -static void rcu_preempt_deferred_qs(struct task_struct *t); static void zero_cpu_stall_ticks(struct rcu_data *rdp); static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq); @@ -467,10 +466,6 @@ do { \ static void rcu_bind_gp_kthread(void); static bool rcu_nohz_full_cpu(void); -static void rcu_dynticks_task_enter(void); -static void rcu_dynticks_task_exit(void); -static void rcu_dynticks_task_trace_enter(void); -static void rcu_dynticks_task_trace_exit(void); /* Forward declarations for tree_stall.h */ static void record_gp_stall_check_time(void); diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h index c8ba0fe17267c..c8c57e1a5dfd3 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -595,7 +595,7 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t) * evaluate safety in terms of interrupt, softirq, and preemption * disabling. */ -static void rcu_preempt_deferred_qs(struct task_struct *t) +notrace void rcu_preempt_deferred_qs(struct task_struct *t) { unsigned long flags; @@ -935,7 +935,7 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t) // period for a quiescent state from this CPU. Note that requests from // tasks are handled when removing the task from the blocked-tasks list // below. -static void rcu_preempt_deferred_qs(struct task_struct *t) +void rcu_preempt_deferred_qs(struct task_struct *t) { struct rcu_data *rdp = this_cpu_ptr(&rcu_data); @@ -1290,37 +1290,3 @@ static void rcu_bind_gp_kthread(void) return; housekeeping_affine(current, HK_TYPE_RCU); } - -/* Record the current task on dyntick-idle entry. */ -static __always_inline void rcu_dynticks_task_enter(void) -{ -#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) - WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id()); -#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */ -} - -/* Record no current task on dyntick-idle exit. */ -static __always_inline void rcu_dynticks_task_exit(void) -{ -#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) - WRITE_ONCE(current->rcu_tasks_idle_cpu, -1); -#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */ -} - -/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */ -static __always_inline void rcu_dynticks_task_trace_enter(void) -{ -#ifdef CONFIG_TASKS_TRACE_RCU - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) - current->trc_reader_special.b.need_mb = true; -#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ -} - -/* Turn off heavyweight RCU tasks trace readers on idle/user exit. 
*/ -static __always_inline void rcu_dynticks_task_trace_exit(void) -{ -#ifdef CONFIG_TASKS_TRACE_RCU - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) - current->trc_reader_special.b.need_mb = false; -#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ -} From patchwork Mon Jun 20 23:10:25 2022 X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 12888560 From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 19/23] rcu/context-tracking: Remove unused and/or unnecessary middle functions Date: Mon, 20 Jun 2022 16:10:25 -0700 Message-Id: <20220620231029.3844583-19-paulmck@kernel.org> In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Some eqs functions are now only used internally by context tracking, so their public declarations can be removed. Also, middle functions such as rcu_user_*() and rcu_idle_*(), which now call rcu_eqs_enter() and rcu_eqs_exit() directly, can be wiped out as well.
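To illustrate the simplification (a sketch condensed from the hunks below, not an additional change): the ct_*() entry points stop being thin trampolines and absorb the bodies of the removed rcu_*() middle functions, e.g. for idle entry:

	/* Before: ct_idle_enter() only forwarded to the RCU middle layer. */
	noinstr void ct_idle_enter(void)
	{
		rcu_idle_enter();	/* which in turn just called rcu_eqs_enter(false) */
	}

	/* After: the body lives in the ct_*() function itself. */
	void noinstr ct_idle_enter(void)
	{
		WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled());
		rcu_eqs_enter(false);
	}
	EXPORT_SYMBOL_GPL(ct_idle_enter);
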
Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- Documentation/RCU/stallwarn.rst | 2 +- include/linux/hardirq.h | 8 --- include/linux/rcupdate.h | 8 --- include/linux/rcutiny.h | 2 - include/linux/rcutree.h | 2 - kernel/context_tracking.c | 98 +++++++++------------------------ 6 files changed, 28 insertions(+), 92 deletions(-) diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst index ce1f58a9d954b..e38c587067fc8 100644 --- a/Documentation/RCU/stallwarn.rst +++ b/Documentation/RCU/stallwarn.rst @@ -97,7 +97,7 @@ warnings: which will include additional debugging information. - A low-level kernel issue that either fails to invoke one of the - variants of rcu_user_enter(), rcu_user_exit(), ct_idle_enter(), + variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(), ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one hand, or that invokes one of them too many times on the other. Historically, the most frequent issue has been an omission diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h index 345cdbe9c1b70..d57cab4d4c06f 100644 --- a/include/linux/hardirq.h +++ b/include/linux/hardirq.h @@ -92,14 +92,6 @@ void irq_exit_rcu(void); #define arch_nmi_exit() do { } while (0) #endif -#ifdef CONFIG_TINY_RCU -static inline void rcu_nmi_enter(void) { } -static inline void rcu_nmi_exit(void) { } -#else -extern void rcu_nmi_enter(void); -extern void rcu_nmi_exit(void); -#endif - /* * NMI vs Tracing * -------------- diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 3717cad983a67..434da1eb88cd9 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -104,14 +104,6 @@ static inline void rcu_sysrq_start(void) { } static inline void rcu_sysrq_end(void) { } #endif /* #else #ifdef CONFIG_RCU_STALL_COMMON */ -#ifdef CONFIG_NO_HZ_FULL -void rcu_user_enter(void); -void rcu_user_exit(void); -#else -static inline void rcu_user_enter(void) { } -static inline void rcu_user_exit(void) { } -#endif /* CONFIG_NO_HZ_FULL */ - #if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)) void rcu_irq_work_resched(void); #else diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h index 591119413cf1d..900ba35c3582d 100644 --- a/include/linux/rcutiny.h +++ b/include/linux/rcutiny.h @@ -76,8 +76,6 @@ static inline int rcu_needs_cpu(void) static inline void rcu_virt_note_context_switch(int cpu) { } static inline void rcu_cpu_stall_reset(void) { } static inline int rcu_jiffies_till_stall_check(void) { return 21 * HZ; } -static inline void rcu_idle_enter(void) { } -static inline void rcu_idle_exit(void) { } static inline void rcu_irq_exit_check_preempt(void) { } #define rcu_is_idle_cpu(cpu) \ (is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq()) diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 24db1e41695c8..9cca00ed9bc9f 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -45,8 +45,6 @@ unsigned long start_poll_synchronize_rcu(void); bool poll_state_synchronize_rcu(unsigned long oldstate); void cond_synchronize_rcu(unsigned long oldstate); -void rcu_idle_enter(void); -void rcu_idle_exit(void); bool rcu_is_idle_cpu(int cpu); #ifdef CONFIG_PROVE_RCU diff --git 
a/kernel/context_tracking.c b/kernel/context_tracking.c index f3e92705e0a89..8b0979412f755 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -189,17 +189,17 @@ static void noinstr rcu_eqs_exit(bool user) } /** - * rcu_nmi_exit - inform RCU of exit from NMI context + * ct_nmi_exit - inform RCU of exit from NMI context * * If we are returning from the outermost NMI handler that interrupted an * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting * to let the RCU grace-period handling know that the CPU is back to * being RCU-idle. * - * If you add or remove a call to rcu_nmi_exit(), be sure to test + * If you add or remove a call to ct_nmi_exit(), be sure to test * with CONFIG_RCU_EQS_DEBUG=y. */ -void noinstr rcu_nmi_exit(void) +void noinstr ct_nmi_exit(void) { struct context_tracking *ct = this_cpu_ptr(&context_tracking); @@ -242,7 +242,7 @@ void noinstr rcu_nmi_exit(void) } /** - * rcu_nmi_enter - inform RCU of entry to NMI context + * ct_nmi_enter - inform RCU of entry to NMI context * * If the CPU was idle from RCU's viewpoint, update ct->dynticks and * ct->dynticks_nmi_nesting to let the RCU grace-period handling know @@ -250,10 +250,10 @@ void noinstr rcu_nmi_exit(void) * long as the nesting level does not overflow an int. (You will probably * run out of stack space first.) * - * If you add or remove a call to rcu_nmi_enter(), be sure to test + * If you add or remove a call to ct_nmi_enter(), be sure to test * with CONFIG_RCU_EQS_DEBUG=y. */ -void noinstr rcu_nmi_enter(void) +void noinstr ct_nmi_enter(void) { long incby = 2; struct context_tracking *ct = this_cpu_ptr(&context_tracking); @@ -302,32 +302,33 @@ void noinstr rcu_nmi_enter(void) } /** - * rcu_idle_enter - inform RCU that current CPU is entering idle + * ct_idle_enter - inform RCU that current CPU is entering idle * * Enter idle mode, in other words, -leave- the mode in which RCU * read-side critical sections can occur. (Though RCU read-side * critical sections can occur in irq handlers in idle, a possibility * handled by irq_enter() and irq_exit().) * - * If you add or remove a call to rcu_idle_enter(), be sure to test with + * If you add or remove a call to ct_idle_enter(), be sure to test with * CONFIG_RCU_EQS_DEBUG=y. */ -void noinstr rcu_idle_enter(void) +void noinstr ct_idle_enter(void) { WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); rcu_eqs_enter(false); } +EXPORT_SYMBOL_GPL(ct_idle_enter); /** - * rcu_idle_exit - inform RCU that current CPU is leaving idle + * ct_idle_exit - inform RCU that current CPU is leaving idle * * Exit idle mode, in other words, -enter- the mode in which RCU * read-side critical sections can occur. * - * If you add or remove a call to rcu_idle_exit(), be sure to test with + * If you add or remove a call to ct_idle_exit(), be sure to test with * CONFIG_RCU_EQS_DEBUG=y. 
*/ -void noinstr rcu_idle_exit(void) +void noinstr ct_idle_exit(void) { unsigned long flags; @@ -335,18 +336,6 @@ void noinstr rcu_idle_exit(void) rcu_eqs_exit(false); raw_local_irq_restore(flags); } -EXPORT_SYMBOL_GPL(rcu_idle_exit); - -noinstr void ct_idle_enter(void) -{ - rcu_idle_enter(); -} -EXPORT_SYMBOL_GPL(ct_idle_enter); - -void ct_idle_exit(void) -{ - rcu_idle_exit(); -} EXPORT_SYMBOL_GPL(ct_idle_exit); /** @@ -431,50 +420,11 @@ void ct_irq_exit_irqson(void) ct_irq_exit(); local_irq_restore(flags); } - -noinstr void ct_nmi_enter(void) -{ - rcu_nmi_enter(); -} - -noinstr void ct_nmi_exit(void) -{ - rcu_nmi_exit(); -} +#else +static __always_inline void rcu_eqs_enter(bool user) { } +static __always_inline void rcu_eqs_exit(bool user) { } #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ -#ifdef CONFIG_NO_HZ_FULL -/** - * rcu_user_enter - inform RCU that we are resuming userspace. - * - * Enter RCU idle mode right before resuming userspace. No use of RCU - * is permitted between this call and rcu_user_exit(). This way the - * CPU doesn't need to maintain the tick for RCU maintenance purposes - * when the CPU runs in userspace. - * - * If you add or remove a call to rcu_user_enter(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -noinstr void rcu_user_enter(void) -{ - rcu_eqs_enter(true); -} - -/** - * rcu_user_exit - inform RCU that we are exiting userspace. - * - * Exit RCU idle mode while entering the kernel because it can - * run a RCU read side critical section anytime. - * - * If you add or remove a call to rcu_user_exit(), be sure to test with - * CONFIG_RCU_EQS_DEBUG=y. - */ -void noinstr rcu_user_exit(void) -{ - rcu_eqs_exit(true); -} -#endif /* #ifdef CONFIG_NO_HZ_FULL */ - #ifdef CONFIG_CONTEXT_TRACKING_USER #define CREATE_TRACE_POINTS @@ -542,7 +492,13 @@ void noinstr __ct_user_enter(enum ctx_state state) * that will fire and reschedule once we resume in user/guest mode. */ rcu_irq_work_resched(); - rcu_user_enter(); + /* + * Enter RCU idle mode right before resuming userspace. No use of RCU + * is permitted between this call and rcu_eqs_exit(). This way the + * CPU doesn't need to maintain the tick for RCU maintenance purposes + * when the CPU runs in userspace. + */ + rcu_eqs_enter(true); } /* * Even if context tracking is disabled on this CPU, because it's outside @@ -579,7 +535,7 @@ void ct_user_enter(enum ctx_state state) /* * Some contexts may involve an exception occuring in an irq, * leading to that nesting: - * ct_irq_enter() rcu_user_exit() rcu_user_exit() ct_irq_exit() + * ct_irq_enter() rcu_eqs_exit(true) rcu_eqs_enter(true) ct_irq_exit() * This would mess up the dyntick_nesting count though. And rcu_irq_*() * helpers are enough to protect RCU uses inside the exception. So * just return immediately if we detect we are in an IRQ. @@ -631,10 +587,10 @@ void noinstr __ct_user_exit(enum ctx_state state) if (__this_cpu_read(context_tracking.state) == state) { if (__this_cpu_read(context_tracking.active)) { /* - * We are going to run code that may use RCU. Inform - * RCU core about that (ie: we may need the tick again). + * Exit RCU idle mode while entering the kernel because it can + * run a RCU read side critical section anytime. */ - rcu_user_exit(); + rcu_eqs_exit(true); if (state == CONTEXT_USER) { instrumentation_begin(); vtime_user_exit(current); From patchwork Mon Jun 20 23:10:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888559 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25CD3CCA47C for ; Mon, 20 Jun 2022 23:15:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347190AbiFTXPS (ORCPT ); Mon, 20 Jun 2022 19:15:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53118 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347106AbiFTXMG (ORCPT ); Mon, 20 Jun 2022 19:12:06 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A92F022296; Mon, 20 Jun 2022 16:10:45 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 2F71961505; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5459BC341EE; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=yH3rcc6h4bxkbl4PBwXYAQclBPLiEgWRFDrMwOLKi5U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kTFwUd47L6m/G992pjcvZo2kRrAFTbAn8KMWAd3NI3qCX40S5Z9grrlYkv31W+20t hrW9PTOM5UjZnsXlfmy77Og3zu1gAfQQbm0CoFE0k1Tcn9po8QihEeJ17E92Cpqu7z H9TmEJ9eYMHVZ/dkBzOv9wQLRo58l4/3ahHX2TcgKCWgckt544luPpkHHUFxzSGkwA jFsYA2FdWEPYay3nIYJJpmiZmYlxGbk5QCFHARDQ/W36JeXIqKLjQMSVBAEZkah65q W2g+TdIhNgsi5ZzIWFZYaTJC2f+M7Z3hcmf0q48al3eBVJTRTTJMLXOltFzd7fRknV QWM+uypkMBvog== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 73E035C11DC; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 20/23] context_tracking: Convert state to atomic_t Date: Mon, 20 Jun 2022 16:10:26 -0700 Message-Id: <20220620231029.3844583-20-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Context tracking's state and dynticks counter are going to be merged in a single field so that both updates can happen atomically and at the same time. Prepare for that with converting the state into an atomic_t. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. 
McKenney --- include/linux/context_tracking.h | 24 +++++------------- include/linux/context_tracking_state.h | 34 +++++++++++++++++++++++--- kernel/context_tracking.c | 15 +++++++----- 3 files changed, 45 insertions(+), 28 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index 1f568676bc1d2..a8c1db0a3f65a 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -56,7 +56,7 @@ static inline enum ctx_state exception_enter(void) !context_tracking_enabled()) return 0; - prev_ctx = this_cpu_read(context_tracking.state); + prev_ctx = __ct_state(); if (prev_ctx != CONTEXT_KERNEL) ct_user_exit(prev_ctx); @@ -86,33 +86,21 @@ static __always_inline void context_tracking_guest_exit(void) __ct_user_exit(CONTEXT_GUEST); } -/** - * ct_state() - return the current context tracking state if known - * - * Returns the current cpu's context tracking state if context tracking - * is enabled. If context tracking is disabled, returns - * CONTEXT_DISABLED. This should be used primarily for debugging. - */ -static __always_inline enum ctx_state ct_state(void) -{ - return context_tracking_enabled() ? - this_cpu_read(context_tracking.state) : CONTEXT_DISABLED; -} +#define CT_WARN_ON(cond) WARN_ON(context_tracking_enabled() && (cond)) + #else static inline void user_enter(void) { } static inline void user_exit(void) { } static inline void user_enter_irqoff(void) { } static inline void user_exit_irqoff(void) { } -static inline enum ctx_state exception_enter(void) { return 0; } +static inline int exception_enter(void) { return 0; } static inline void exception_exit(enum ctx_state prev_ctx) { } -static inline enum ctx_state ct_state(void) { return CONTEXT_DISABLED; } +static inline int ct_state(void) { return -1; } static __always_inline bool context_tracking_guest_enter(void) { return false; } static inline void context_tracking_guest_exit(void) { } - +#define CT_WARN_ON(cond) #endif /* !CONFIG_CONTEXT_TRACKING_USER */ -#define CT_WARN_ON(cond) WARN_ON(context_tracking_enabled() && (cond)) - #ifdef CONFIG_CONTEXT_TRACKING_USER_FORCE extern void context_tracking_init(void); #else diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index 2f957b48e24f9..7d2dddf0da690 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -6,6 +6,9 @@ #include #include +/* Offset to allow distinguishing irq vs. task-based idle entry/exit. */ +#define DYNTICK_IRQ_NONIDLE ((LONG_MAX / 2) + 1) + enum ctx_state { CONTEXT_DISABLED = -1, /* returned by ct_state() if unknown */ CONTEXT_KERNEL = 0, @@ -13,9 +16,6 @@ enum ctx_state { CONTEXT_GUEST, }; -/* Offset to allow distinguishing irq vs. task-based idle entry/exit. */ -#define DYNTICK_IRQ_NONIDLE ((LONG_MAX / 2) + 1) - struct context_tracking { #ifdef CONFIG_CONTEXT_TRACKING_USER /* @@ -26,7 +26,7 @@ struct context_tracking { */ bool active; int recursion; - enum ctx_state state; + atomic_t state; #endif #ifdef CONFIG_CONTEXT_TRACKING_IDLE atomic_t dynticks; /* Even value for idle, else odd. */ @@ -98,6 +98,32 @@ static inline bool context_tracking_enabled_this_cpu(void) return context_tracking_enabled() && __this_cpu_read(context_tracking.active); } +static __always_inline int __ct_state(void) +{ + return atomic_read(this_cpu_ptr(&context_tracking.state)); +} + +/** + * ct_state() - return the current context tracking state if known + * + * Returns the current cpu's context tracking state if context tracking + * is enabled. 
If context tracking is disabled, returns + * CONTEXT_DISABLED. This should be used primarily for debugging. + */ +static __always_inline int ct_state(void) +{ + int ret; + + if (!context_tracking_enabled()) + return CONTEXT_DISABLED; + + preempt_disable(); + ret = __ct_state(); + preempt_enable(); + + return ret; +} + #else static __always_inline bool context_tracking_enabled(void) { return false; } static __always_inline bool context_tracking_enabled_cpu(int cpu) { return false; } diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 8b0979412f755..c477c7d696e0f 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -463,6 +463,7 @@ static __always_inline void context_tracking_recursion_exit(void) */ void noinstr __ct_user_enter(enum ctx_state state) { + struct context_tracking *ct = this_cpu_ptr(&context_tracking); lockdep_assert_irqs_disabled(); /* Kernel threads aren't supposed to go to userspace */ @@ -471,8 +472,8 @@ void noinstr __ct_user_enter(enum ctx_state state) if (!context_tracking_recursion_enter()) return; - if ( __this_cpu_read(context_tracking.state) != state) { - if (__this_cpu_read(context_tracking.active)) { + if (__ct_state() != state) { + if (ct->active) { /* * At this stage, only low level arch entry code remains and * then we'll run in userspace. We can assume there won't be @@ -513,7 +514,7 @@ void noinstr __ct_user_enter(enum ctx_state state) * OTOH we can spare the calls to vtime and RCU when context_tracking.active * is false because we know that CPU is not tickless. */ - __this_cpu_write(context_tracking.state, state); + atomic_set(&ct->state, state); } context_tracking_recursion_exit(); } @@ -581,11 +582,13 @@ NOKPROBE_SYMBOL(user_enter_callable); */ void noinstr __ct_user_exit(enum ctx_state state) { + struct context_tracking *ct = this_cpu_ptr(&context_tracking); + if (!context_tracking_recursion_enter()) return; - if (__this_cpu_read(context_tracking.state) == state) { - if (__this_cpu_read(context_tracking.active)) { + if (__ct_state() == state) { + if (ct->active) { /* * Exit RCU idle mode while entering the kernel because it can * run a RCU read side critical section anytime. @@ -598,7 +601,7 @@ void noinstr __ct_user_exit(enum ctx_state state) instrumentation_end(); } } - __this_cpu_write(context_tracking.state, CONTEXT_KERNEL); + atomic_set(&ct->state, CONTEXT_KERNEL); } context_tracking_recursion_exit(); } From patchwork Mon Jun 20 23:10:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E6D3C433EF for ; Mon, 20 Jun 2022 23:14:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346635AbiFTXOt (ORCPT ); Mon, 20 Jun 2022 19:14:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55228 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344038AbiFTXMK (ORCPT ); Mon, 20 Jun 2022 19:12:10 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F29E618B23; Mon, 20 Jun 2022 16:10:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 608CBB8164F; Mon, 20 Jun 2022 23:10:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5A1CAC341F2; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=2uR0wKzeI7K5nSnunNYR+dCNwwXl5D/cOVQ0kTXkN/U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=AYiCfNjhr5w9/QKb5cbIZTK0qDdDBwmnEAs59LyLQpyaEh1Nv3Lx86UBI5+3GfOxU j7mnCoDx4C5W7sfFAMKdlxESldCERzbJXPHxDkWvDqiuEIRzsloj5ILY8qlszFpsgg GCZhLpDTgu74+HFfsyIeFKTDyU1a/JAYcSHlAxKBPtDBvLXVSjGillw1exWfyNuzVp dVcMXknGfEcwkWo1aGJH809Oi+Iz10HL4N8LV/meuLB/iQj6prb6p78Np/GWiKw1h/ XX8gA/ZvW5NMrbHwqcaGzXY1eu5SDz7fvC1WrI1G72wNX3oncsE5kzwZWcU7oKsBWX jxaO9jiGW/kMw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 760825C12AC; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 21/23] rcu/context_tracking: Merge dynticks counter and context tracking states Date: Mon, 20 Jun 2022 16:10:27 -0700 Message-Id: <20220620231029.3844583-21-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Updating the context tracking state and the RCU dynticks counter atomically in a single operation is a first step towards improving CPU isolation. This makes the context tracking state updates fully ordered and therefore allow for later enhancements such as postponing some work while a task is running isolated in userspace until it ever comes back to the kernel. The state field becomes divided in two parts: 1) Two Lower bits for context tracking state: CONTEXT_KERNEL = 0 CONTEXT_IDLE = 1, CONTEXT_USER = 2, CONTEXT_GUEST = 3, 2) Higher bits for RCU eqs dynticks counting: RCU_DYNTICKS_IDX = 4 The dynticks counting is always incremented by this value. (state & RCU_DYNTICKS_IDX) means we are NOT in an extended quiescent state. 
This makes a collision between two RCU dynticks snapshots more likely, but wrapping 28 bits of eqs dynticks increments still takes some bad luck (also, rdp.dynticks_snap could be converted from int to long?). Some RCU eqs functions have been renamed to better reflect their broader scope, which now includes the context tracking state. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- include/linux/context_tracking.h | 8 +- include/linux/context_tracking_state.h | 35 ++++--- kernel/context_tracking.c | 132 ++++++++++++++++--------- kernel/rcu/tree.c | 13 ++- kernel/rcu/tree_stall.h | 4 +- 5 files changed, 121 insertions(+), 71 deletions(-) diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h index a8c1db0a3f65a..fd354eaea510a 100644 --- a/include/linux/context_tracking.h +++ b/include/linux/context_tracking.h @@ -118,16 +118,16 @@ extern void ct_idle_exit(void); */ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void) { - return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1); + return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX); } /* - * Increment the current CPU's context_tracking structure's ->dynticks field + * Increment the current CPU's context_tracking structure's ->state field * with ordering. Return the new value. */ -static __always_inline unsigned long rcu_dynticks_inc(int incby) +static __always_inline unsigned long ct_state_inc(int incby) { - return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks)); + return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state)); } #else diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h index 7d2dddf0da690..0aecc07fb4f50 100644 --- a/include/linux/context_tracking_state.h +++ b/include/linux/context_tracking_state.h @@ -10,12 +10,20 @@ #define DYNTICK_IRQ_NONIDLE ((LONG_MAX / 2) + 1) enum ctx_state { - CONTEXT_DISABLED = -1, /* returned by ct_state() if unknown */ - CONTEXT_KERNEL = 0, - CONTEXT_USER, - CONTEXT_GUEST, + CONTEXT_DISABLED = -1, /* returned by ct_state() if unknown */ + CONTEXT_KERNEL = 0, + CONTEXT_IDLE = 1, + CONTEXT_USER = 2, + CONTEXT_GUEST = 3, + CONTEXT_MAX = 4, }; +/* Even value for idle, else odd. */ +#define RCU_DYNTICKS_IDX CONTEXT_MAX + +#define CT_STATE_MASK (CONTEXT_MAX - 1) +#define CT_DYNTICKS_MASK (~CT_STATE_MASK) + struct context_tracking { #ifdef CONFIG_CONTEXT_TRACKING_USER /* @@ -26,10 +34,11 @@ struct context_tracking { */ bool active; int recursion; +#endif +#ifdef CONFIG_CONTEXT_TRACKING atomic_t state; #endif #ifdef CONFIG_CONTEXT_TRACKING_IDLE - atomic_t dynticks; /* Even value for idle, else odd. */ long dynticks_nesting; /* Track process nesting level. */ long dynticks_nmi_nesting; /* Track irq/NMI nesting level. 
*/ #endif @@ -37,24 +46,29 @@ struct context_tracking { #ifdef CONFIG_CONTEXT_TRACKING DECLARE_PER_CPU(struct context_tracking, context_tracking); + +static __always_inline int __ct_state(void) +{ + return atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK; +} #endif #ifdef CONFIG_CONTEXT_TRACKING_IDLE static __always_inline int ct_dynticks(void) { - return atomic_read(this_cpu_ptr(&context_tracking.dynticks)); + return atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_DYNTICKS_MASK; } static __always_inline int ct_dynticks_cpu(int cpu) { struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); - return atomic_read(&ct->dynticks); + return atomic_read(&ct->state) & CT_DYNTICKS_MASK; } static __always_inline int ct_dynticks_cpu_acquire(int cpu) { struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu); - return atomic_read_acquire(&ct->dynticks); + return atomic_read_acquire(&ct->state) & CT_DYNTICKS_MASK; } static __always_inline long ct_dynticks_nesting(void) @@ -98,11 +112,6 @@ static inline bool context_tracking_enabled_this_cpu(void) return context_tracking_enabled() && __this_cpu_read(context_tracking.active); } -static __always_inline int __ct_state(void) -{ - return atomic_read(this_cpu_ptr(&context_tracking.state)); -} - /** * ct_state() - return the current context tracking state if known * diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index c477c7d696e0f..d373fad7c0cba 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -28,8 +28,8 @@ DEFINE_PER_CPU(struct context_tracking, context_tracking) = { #ifdef CONFIG_CONTEXT_TRACKING_IDLE .dynticks_nesting = 1, .dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE, - .dynticks = ATOMIC_INIT(1), #endif + .state = ATOMIC_INIT(RCU_DYNTICKS_IDX), }; EXPORT_SYMBOL_GPL(context_tracking); @@ -76,7 +76,7 @@ static __always_inline void rcu_dynticks_task_trace_exit(void) * RCU is watching prior to the call to this function and is no longer * watching upon return. */ -static noinstr void rcu_dynticks_eqs_enter(void) +static noinstr void ct_kernel_exit_state(int offset) { int seq; @@ -86,9 +86,9 @@ static noinstr void rcu_dynticks_eqs_enter(void) * next idle sojourn. */ rcu_dynticks_task_trace_enter(); // Before ->dynticks update! - seq = rcu_dynticks_inc(1); + seq = ct_state_inc(offset); // RCU is no longer watching. Better be in extended quiescent state! - WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1)); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & RCU_DYNTICKS_IDX)); } /* @@ -96,7 +96,7 @@ static noinstr void rcu_dynticks_eqs_enter(void) * called from an extended quiescent state, that is, RCU is not watching * prior to the call to this function and is watching upon return. */ -static noinstr void rcu_dynticks_eqs_exit(void) +static noinstr void ct_kernel_enter_state(int offset) { int seq; @@ -105,10 +105,10 @@ static noinstr void rcu_dynticks_eqs_exit(void) * and we also must force ordering with the next RCU read-side * critical section. */ - seq = rcu_dynticks_inc(1); + seq = ct_state_inc(offset); // RCU is now watching. Better not be in an extended quiescent state! rcu_dynticks_task_trace_exit(); // After ->dynticks update! 
- WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1)); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & RCU_DYNTICKS_IDX)); } /* @@ -119,7 +119,7 @@ static noinstr void rcu_dynticks_eqs_exit(void) * the possibility of usermode upcalls having messed up our count * of interrupt nesting level during the prior busy period. */ -static void noinstr rcu_eqs_enter(bool user) +static void noinstr ct_kernel_exit(bool user, int offset) { struct context_tracking *ct = this_cpu_ptr(&context_tracking); @@ -139,13 +139,13 @@ static void noinstr rcu_eqs_enter(bool user) WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); rcu_preempt_deferred_qs(current); - // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + // instrumentation for the noinstr ct_kernel_exit_state() + instrument_atomic_write(&ct->state, sizeof(ct->state)); instrumentation_end(); WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */ // RCU is watching here ... - rcu_dynticks_eqs_enter(); + ct_kernel_exit_state(offset); // ... but is no longer watching here. rcu_dynticks_task_enter(); } @@ -158,7 +158,7 @@ static void noinstr rcu_eqs_enter(bool user) * allow for the possibility of usermode upcalls messing up our count of * interrupt nesting level during the busy period that is just now starting. */ -static void noinstr rcu_eqs_exit(bool user) +static void noinstr ct_kernel_enter(bool user, int offset) { struct context_tracking *ct = this_cpu_ptr(&context_tracking); long oldval; @@ -173,12 +173,12 @@ static void noinstr rcu_eqs_exit(bool user) } rcu_dynticks_task_exit(); // RCU is not watching here ... - rcu_dynticks_eqs_exit(); + ct_kernel_enter_state(offset); // ... but is watching here. instrumentation_begin(); - // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + // instrumentation for the noinstr ct_kernel_enter_state() + instrument_atomic_write(&ct->state, sizeof(ct->state)); trace_rcu_dyntick(TPS("End"), ct_dynticks_nesting(), 1, ct_dynticks()); WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); @@ -192,7 +192,7 @@ static void noinstr rcu_eqs_exit(bool user) * ct_nmi_exit - inform RCU of exit from NMI context * * If we are returning from the outermost NMI handler that interrupted an - * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting + * RCU-idle period, update ct->state and ct->dynticks_nmi_nesting * to let the RCU grace-period handling know that the CPU is back to * being RCU-idle. * @@ -229,12 +229,12 @@ void noinstr ct_nmi_exit(void) trace_rcu_dyntick(TPS("Startirq"), ct_dynticks_nmi_nesting(), 0, ct_dynticks()); WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. */ - // instrumentation for the noinstr rcu_dynticks_eqs_enter() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + // instrumentation for the noinstr ct_kernel_exit_state() + instrument_atomic_write(&ct->state, sizeof(ct->state)); instrumentation_end(); // RCU is watching here ... - rcu_dynticks_eqs_enter(); + ct_kernel_exit_state(RCU_DYNTICKS_IDX); // ... but is no longer watching here. 
if (!in_nmi()) @@ -244,7 +244,7 @@ void noinstr ct_nmi_exit(void) /** * ct_nmi_enter - inform RCU of entry to NMI context * - * If the CPU was idle from RCU's viewpoint, update ct->dynticks and + * If the CPU was idle from RCU's viewpoint, update ct->state and * ct->dynticks_nmi_nesting to let the RCU grace-period handling know * that the CPU is active. This implementation permits nested NMIs, as * long as the nesting level does not overflow an int. (You will probably @@ -275,14 +275,14 @@ void noinstr ct_nmi_enter(void) rcu_dynticks_task_exit(); // RCU is not watching here ... - rcu_dynticks_eqs_exit(); + ct_kernel_enter_state(RCU_DYNTICKS_IDX); // ... but is watching here. instrumentation_begin(); // instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs() - instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks)); - // instrumentation for the noinstr rcu_dynticks_eqs_exit() - instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks)); + instrument_atomic_read(&ct->state, sizeof(ct->state)); + // instrumentation for the noinstr ct_kernel_enter_state() + instrument_atomic_write(&ct->state, sizeof(ct->state)); incby = 1; } else if (!in_nmi()) { @@ -315,7 +315,7 @@ void noinstr ct_nmi_enter(void) void noinstr ct_idle_enter(void) { WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); - rcu_eqs_enter(false); + ct_kernel_exit(false, RCU_DYNTICKS_IDX + CONTEXT_IDLE); } EXPORT_SYMBOL_GPL(ct_idle_enter); @@ -333,7 +333,7 @@ void noinstr ct_idle_exit(void) unsigned long flags; raw_local_irq_save(flags); - rcu_eqs_exit(false); + ct_kernel_enter(false, RCU_DYNTICKS_IDX - CONTEXT_IDLE); raw_local_irq_restore(flags); } EXPORT_SYMBOL_GPL(ct_idle_exit); @@ -421,8 +421,8 @@ void ct_irq_exit_irqson(void) local_irq_restore(flags); } #else -static __always_inline void rcu_eqs_enter(bool user) { } -static __always_inline void rcu_eqs_exit(bool user) { } +static __always_inline void ct_kernel_exit(bool user, int offset) { } +static __always_inline void ct_kernel_enter(bool user, int offset) { } #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */ #ifdef CONFIG_CONTEXT_TRACKING_USER @@ -493,28 +493,49 @@ void noinstr __ct_user_enter(enum ctx_state state) * that will fire and reschedule once we resume in user/guest mode. */ rcu_irq_work_resched(); + /* * Enter RCU idle mode right before resuming userspace. No use of RCU * is permitted between this call and rcu_eqs_exit(). This way the * CPU doesn't need to maintain the tick for RCU maintenance purposes * when the CPU runs in userspace. */ - rcu_eqs_enter(true); + ct_kernel_exit(true, RCU_DYNTICKS_IDX + state); + + /* + * Special case if we only track user <-> kernel transitions for tickless + * cputime accounting but we don't support RCU extended quiescent state. + * In this case we don't care about any concurrency/ordering. + */ + if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) + atomic_set(&ct->state, state); + } else { + /* + * Even if context tracking is disabled on this CPU, because it's outside + * the full dynticks mask for example, we still have to keep track of the + * context transitions and states to prevent inconsistency on those of + * other CPUs. + * If a task triggers an exception in userspace, sleep on the exception + * handler and then migrate to another CPU, that new CPU must know where + * the exception returns by the time we call exception_exit(). + * This information can only be provided by the previous CPU when it called + * exception_enter().
+ * OTOH we can spare the calls to vtime and RCU when context_tracking.active + * is false because we know that CPU is not tickless. + */ + if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { + /* Tracking for vtime only, no concurrent RCU EQS accounting */ + atomic_set(&ct->state, state); + } else { + /* + * Tracking for vtime and RCU EQS. Make sure we don't race + * with NMIs. OTOH we don't care about ordering here since + * RCU only requires RCU_DYNTICKS_IDX increments to be fully + * ordered. + */ + atomic_add(state, &ct->state); + } } - /* - * Even if context tracking is disabled on this CPU, because it's outside - * the full dynticks mask for example, we still have to keep track of the - * context transitions and states to prevent inconsistency on those of - * other CPUs. - * If a task triggers an exception in userspace, sleep on the exception - * handler and then migrate to another CPU, that new CPU must know where - * the exception returns by the time we call exception_exit(). - * This information can only be provided by the previous CPU when it called - * exception_enter(). - * OTOH we can spare the calls to vtime and RCU when context_tracking.active - * is false because we know that CPU is not tickless. - */ - atomic_set(&ct->state, state); } context_tracking_recursion_exit(); } @@ -593,15 +614,36 @@ void noinstr __ct_user_exit(enum ctx_state state) * Exit RCU idle mode while entering the kernel because it can * run a RCU read side critical section anytime. */ - rcu_eqs_exit(true); + ct_kernel_enter(true, RCU_DYNTICKS_IDX - state); if (state == CONTEXT_USER) { instrumentation_begin(); vtime_user_exit(current); trace_user_exit(0); instrumentation_end(); } + + /* + * Special case if we only track user <-> kernel transitions for tickless + * cputime accounting but we don't support RCU extended quiescent state. + * In this case we don't care about any concurrency/ordering. + */ + if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) + atomic_set(&ct->state, CONTEXT_KERNEL); + + } else { + if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) { + /* Tracking for vtime only, no concurrent RCU EQS accounting */ + atomic_set(&ct->state, CONTEXT_KERNEL); + } else { + /* + * Tracking for vtime and RCU EQS. Make sure we don't race + * with NMIs. OTOH we don't care about ordering here since + * RCU only requires RCU_DYNTICKS_IDX increments to be fully + * ordered. + */ + atomic_sub(state, &ct->state); + } } - atomic_set(&ct->state, CONTEXT_KERNEL); } context_tracking_recursion_exit(); } diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index e2a2083079a2c..f9d20b40071f3 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -272,9 +272,9 @@ void rcu_softirq_qs(void) */ static void rcu_dynticks_eqs_online(void) { - if (ct_dynticks() & 0x1) + if (ct_dynticks() & RCU_DYNTICKS_IDX) return; - rcu_dynticks_inc(1); + ct_state_inc(RCU_DYNTICKS_IDX); } /* @@ -293,7 +293,7 @@ static int rcu_dynticks_snap(int cpu) */ static bool rcu_dynticks_in_eqs(int snap) { - return !(snap & 0x1); + return !(snap & RCU_DYNTICKS_IDX); } /* Return true if the specified CPU is currently idle from an RCU viewpoint. */ @@ -321,8 +321,7 @@ bool rcu_dynticks_zero_in_eqs(int cpu, int *vp) int snap; // If not quiescent, force back to earlier extended quiescent state. - snap = ct_dynticks_cpu(cpu) & ~0x1; - + snap = ct_dynticks_cpu(cpu) & ~RCU_DYNTICKS_IDX; smp_rmb(); // Order ->dynticks and *vp reads.
if (READ_ONCE(*vp)) return false; // Non-zero, so report failure; @@ -348,9 +347,9 @@ notrace void rcu_momentary_dyntick_idle(void) int seq; raw_cpu_write(rcu_data.rcu_need_heavy_qs, false); - seq = rcu_dynticks_inc(2); + seq = ct_state_inc(2 * RCU_DYNTICKS_IDX); /* It is illegal to call this from idle state. */ - WARN_ON_ONCE(!(seq & 0x1)); + WARN_ON_ONCE(!(seq & RCU_DYNTICKS_IDX)); rcu_preempt_deferred_qs(current); } EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle); diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h index 2683ce0a7c724..195cad14742dd 100644 --- a/kernel/rcu/tree_stall.h +++ b/kernel/rcu/tree_stall.h @@ -469,7 +469,7 @@ static void print_cpu_stall_info(int cpu) rcuc_starved = rcu_is_rcuc_kthread_starving(rdp, &j); if (rcuc_starved) sprintf(buf, " rcuc=%ld jiffies(starved)", j); - pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%03x/%ld/%#lx softirq=%u/%u fqs=%ld%s%s\n", + pr_err("\t%d-%c%c%c%c: (%lu %s) idle=%04x/%ld/%#lx softirq=%u/%u fqs=%ld%s%s\n", cpu, "O."[!!cpu_online(cpu)], "o."[!!(rdp->grpmask & rdp->mynode->qsmaskinit)], @@ -478,7 +478,7 @@ static void print_cpu_stall_info(int cpu) rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' : "!."[!delta], ticks_value, ticks_title, - rcu_dynticks_snap(cpu) & 0xfff, + rcu_dynticks_snap(cpu) & 0xffff, ct_dynticks_nesting_cpu(cpu), ct_dynticks_nmi_nesting_cpu(cpu), rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu), data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
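The merged encoding deserves a closer look. Below is a minimal user-space sketch of the arithmetic (plain C with ordinary ints instead of atomic_t; the enum and macro values are copied from the patch, while the demo function itself is this editor's illustration, not kernel code). It shows how a single add in ct_state_inc() updates the low context-state bits and the upper dynticks bits at once:

    #include <assert.h>
    #include <stdio.h>

    enum ctx_state {
        CONTEXT_KERNEL = 0,
        CONTEXT_IDLE   = 1,
        CONTEXT_USER   = 2,
        CONTEXT_GUEST  = 3,
        CONTEXT_MAX    = 4,
    };

    #define RCU_DYNTICKS_IDX CONTEXT_MAX        /* dynticks advances in steps of 4 */
    #define CT_STATE_MASK    (CONTEXT_MAX - 1)  /* low two bits: context state */
    #define CT_DYNTICKS_MASK (~CT_STATE_MASK)   /* remaining bits: dynticks counter */

    int main(void)
    {
        int state = RCU_DYNTICKS_IDX; /* boot value: kernel context, RCU watching */

        /* __ct_user_enter(): ct_kernel_exit(true, RCU_DYNTICKS_IDX + CONTEXT_USER) */
        state += RCU_DYNTICKS_IDX + CONTEXT_USER;
        assert((state & CT_STATE_MASK) == CONTEXT_USER);
        assert(!(state & RCU_DYNTICKS_IDX)); /* in EQS: RCU not watching */

        /* __ct_user_exit(): ct_kernel_enter(true, RCU_DYNTICKS_IDX - CONTEXT_USER) */
        state += RCU_DYNTICKS_IDX - CONTEXT_USER;
        assert((state & CT_STATE_MASK) == CONTEXT_KERNEL);
        assert(state & RCU_DYNTICKS_IDX); /* RCU watching again */

        /* rcu_momentary_dyntick_idle(): ct_state_inc(2 * RCU_DYNTICKS_IDX) */
        state += 2 * RCU_DYNTICKS_IDX;
        assert((state & CT_STATE_MASK) == CONTEXT_KERNEL); /* context unchanged */
        assert(state & RCU_DYNTICKS_IDX); /* still watching afterward */

        printf("state=%#x dynticks=%#x\n", state, state & CT_DYNTICKS_MASK);
        return 0;
    }

This also makes plain why the patch only needs the RCU_DYNTICKS_IDX increments to be fully ordered: the CT_STATE_MASK bits ride along in the same atomic_add_return(), so pure context-state updates require no separate ordering of their own.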
"Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Frederic Weisbecker , Peter Zijlstra , Thomas Gleixner , Neeraj Upadhyay , Uladzislau Rezki , Joel Fernandes , Boqun Feng , Nicolas Saenz Julienne , Marcelo Tosatti , Xiongfeng Wang , Yu Liao , Phil Auld , Paul Gortmaker , Alex Belits , "Paul E . McKenney" Subject: [PATCH rcu 22/23] MAINTAINERS: Add Paul as context tracking maintainer Date: Mon, 20 Jun 2022 16:10:28 -0700 Message-Id: <20220620231029.3844583-22-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Frederic Weisbecker Since most of the bits have been imported from kernel/rcu/tree.c and now that the context tracking code is tightly linked to RCU, add Paul as a context tracking maintainer. Also update the context tracking file header accordingly. Signed-off-by: Frederic Weisbecker Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Neeraj Upadhyay Cc: Uladzislau Rezki Cc: Joel Fernandes Cc: Boqun Feng Cc: Nicolas Saenz Julienne Cc: Marcelo Tosatti Cc: Xiongfeng Wang Cc: Yu Liao Cc: Phil Auld Cc: Paul Gortmaker Cc: Alex Belits Signed-off-by: Paul E. McKenney --- MAINTAINERS | 1 + kernel/context_tracking.c | 12 +++++++----- 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/MAINTAINERS b/MAINTAINERS index 3cf9842d9233c..4e38d7533cbe9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5039,6 +5039,7 @@ F: include/linux/console* CONTEXT TRACKING M: Frederic Weisbecker +M: "Paul E. McKenney" S: Maintained F: kernel/context_tracking.c F: include/linux/context_tracking* diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index d373fad7c0cba..1da44803fd319 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -1,18 +1,20 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * Context tracking: Probe on high level context boundaries such as kernel - * and userspace. This includes syscalls and exceptions entry/exit. + * Context tracking: Probe on high level context boundaries such as kernel, + * userspace, guest or idle. * * This is used by RCU to remove its dependency on the timer tick while a CPU - * runs in userspace. + * runs in idle, userspace or guest mode. * - * Started by Frederic Weisbecker: + * User/guest tracking started by Frederic Weisbecker: * - * Copyright (C) 2012 Red Hat, Inc., Frederic Weisbecker + * Copyright (C) 2012 Red Hat, Inc., Frederic Weisbecker * * Many thanks to Gilad Ben-Yossef, Paul McKenney, Ingo Molnar, Andrew Morton, * Steven Rostedt, Peter Zijlstra for suggestions and improvements. * + * RCU extended quiescent state bits imported from kernel/rcu/tree.c + * where the relevant authorship may be found. */ #include From patchwork Mon Jun 20 23:10:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 12888518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31BA0C433EF for ; Mon, 20 Jun 2022 23:12:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347008AbiFTXMv (ORCPT ); Mon, 20 Jun 2022 19:12:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347091AbiFTXMF (ORCPT ); Mon, 20 Jun 2022 19:12:05 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A99ED22505; Mon, 20 Jun 2022 16:10:45 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5BFD661502; Mon, 20 Jun 2022 23:10:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B4A3C341F4; Mon, 20 Jun 2022 23:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1655766632; bh=gT7MwZiGJkiPueXwvV+cr0NVj3jFSUZd3lPg3Go3xJ8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nNOgQPsGunQcJycedFWFIbU6kXAyWGvRcF/Ndv8oWcD/kYqdV7ezjASLrroYc/ybL AGuvzFjTr5WT6Y7ULbZ2gjschh/fQWNdtn8Q34OC4gihZ5SzJ5sYYhgblcBKWAagUj JCI28LUjuFb1GwXXOb36wcC1cSP5dAgBoHnLh4ky8ZylDA2e1fd9BvzRebQCvgHBN/ e15HpSJXk1ddP5SKt6AdNTEMgqReMN+e3O1wxlFhNqe/Dzh1qSa0C8GC8HPJpAE88K brPpTi0m7a8hYBvp/k9Nf+SwExQWetqmel0UYSbTLFaAB/Up3yTUp0YUwDbOmW2ga1 2BfQf3nzvJ9mg== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id 79F045C13B9; Mon, 20 Jun 2022 16:10:31 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, Peter Zijlstra , Frederic Weisbecker , "Paul E . McKenney" Subject: [PATCH rcu 23/23] context_tracking: Interrupts always disabled for ct_idle_exit() Date: Mon, 20 Jun 2022 16:10:29 -0700 Message-Id: <20220620231029.3844583-23-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> References: <20220620231027.GA3844372@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org From: Peter Zijlstra Now that the idle-loop cleanups have ensured that rcu_idle_exit() is always invoked with interrupts disabled, remove the interrupt disabling in favor of a debug check. Signed-off-by: Peter Zijlstra Cc: Frederic Weisbecker Signed-off-by: Paul E. McKenney --- kernel/context_tracking.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c index 1da44803fd319..99310cf5b0254 100644 --- a/kernel/context_tracking.c +++ b/kernel/context_tracking.c @@ -332,11 +332,8 @@ EXPORT_SYMBOL_GPL(ct_idle_enter); */ void noinstr ct_idle_exit(void) { - unsigned long flags; - - raw_local_irq_save(flags); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled()); ct_kernel_enter(false, RCU_DYNTICKS_IDX - CONTEXT_IDLE); - raw_local_irq_restore(flags); } EXPORT_SYMBOL_GPL(ct_idle_exit);