From patchwork Thu Jan  5 00:44:58 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13089324
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    Zqiang, "Paul E . McKenney"
Subject: [PATCH rcu 5/6] rcu-tasks: Make rude RCU-Tasks work well with CPU
 hotplug
Date: Wed, 4 Jan 2023 16:44:58 -0800
Message-Id: <20230105004501.1771332-10-paulmck@kernel.org>
In-Reply-To: <20230105004454.GA1771168@paulmck-ThinkPad-P17-Gen-1>
References: <20230105004454.GA1771168@paulmck-ThinkPad-P17-Gen-1>
List-ID: X-Mailing-List: rcu@vger.kernel.org

From: Zqiang

The synchronize_rcu_tasks_rude() function invokes
rcu_tasks_rude_wait_gp() to wait for one rude RCU-tasks grace period.
The rcu_tasks_rude_wait_gp() function in turn checks if there is only
a single online CPU.  If so, it will immediately return, because a
call to synchronize_rcu_tasks_rude() is by definition a grace period
on a single-CPU system.  (We could have blocked!)

Unfortunately, this check uses num_online_cpus() without
synchronization, which can result in too-short grace periods.  To see
this, consider the following scenario:

        CPU0                                  CPU1 (going offline)
                                        migration/1 task:
                                        cpu_stopper_thread
                                         -> take_cpu_down
                                            -> _cpu_disable
                                             (dec __num_online_cpus)
                                            -> cpuhp_invoke_callback
                                                  preempt_disable
                                                  access old_data0
        task1
        del old_data0                             .....
        synchronize_rcu_tasks_rude()
        task1 schedule out
        ....
        task2 schedule in
        rcu_tasks_rude_wait_gp()
            ->__num_online_cpus == 1
              ->return
        ....
        task1 schedule in
        ->free old_data0
                                                  preempt_enable

When CPU1 decrements __num_online_cpus, its value becomes 1.  However,
CPU1 has not finished going offline, and will take one last trip
through the scheduler and the idle loop before it actually stops
executing instructions.  Because synchronize_rcu_tasks_rude() is mostly
used for tracing, and because both the scheduler and the idle loop can
be traced, this means that CPU0's prematurely ended grace period might
disrupt the tracing on CPU1.  Given that this disruption might include
CPU1 executing instructions in memory that was just now freed (and
maybe reallocated), this is a matter of some concern.
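
To make the failure mode concrete, the updater ("task1" above) follows
the usual remove/wait/free pattern.  The following is a minimal
illustrative sketch, not code from this patch: old_data0 is the
hypothetical shared object from the diagram, and updater() is a
made-up name.

    #include <linux/rcupdate.h>	/* synchronize_rcu_tasks_rude() */
    #include <linux/slab.h>		/* kfree() */

    /* Hypothetical object read by code running with preemption
     * disabled (for example, a tracing trampoline). */
    static void *old_data0;

    /* Hypothetical updater corresponding to task1 in the diagram. */
    static void updater(void)
    {
            void *p = old_data0;

            WRITE_ONCE(old_data0, NULL);	/* "del old_data0" */

            /* Must wait for all preemption-disabled regions, including
             * any still running on a CPU that is on its way offline.
             * With the racy num_online_cpus() fastpath, this could
             * return before CPU1's preempt_disable section is done. */
            synchronize_rcu_tasks_rude();

            kfree(p);			/* "free old_data0" */
    }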

This commit therefore removes that problematic single-CPU check from
the rcu_tasks_rude_wait_gp() function.  This dispenses with the
single-CPU optimization, but there is no evidence indicating that this
optimization is important.  In addition, synchronize_rcu_tasks_generic()
contains a similar optimization (albeit only for early boot), which
also splats.  (As in exactly why are you invoking
synchronize_rcu_tasks_rude() so early in boot, anyway???)

It is OK for the synchronize_rcu_tasks_rude() function's check to be
unsynchronized because the only time that this check can evaluate to
true is when there is only a single CPU running with preemption
disabled.

While in the area, this commit also fixes a minor bug in which a call
to synchronize_rcu_tasks_rude() would instead be attributed to
synchronize_rcu_tasks().

[ paulmck: Add "synchronize_" prefix and "()" suffix. ]

Signed-off-by: Zqiang
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tasks.h | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 5de61f12a1645..eee38b0d362a8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -560,8 +560,9 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
 {
 	/* Complain if the scheduler has not started.  */
-	WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
-		  "synchronize_rcu_tasks called too soon");
+	if (WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+		      "synchronize_%s() called too soon", rtp->name))
+		return;
 
 	// If the grace-period kthread is running, use it.
 	if (READ_ONCE(rtp->kthread_ptr)) {
@@ -1064,9 +1065,6 @@ static void rcu_tasks_be_rude(struct work_struct *work)
 // Wait for one rude RCU-tasks grace period.
 static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
 {
-	if (num_online_cpus() <= 1)
-		return;	// Fastpath for only one CPU.
-
 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
 	schedule_on_each_cpu(rcu_tasks_be_rude);
 }
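
For completeness, the read side that the rude grace period must respect
is simply any region of code with preemption disabled.  A hypothetical
reader matching the preempt_disable section on CPU1 in the diagram
might look as follows; do_something_with() is a placeholder rather than
a kernel API, and old_data0 is the hypothetical object from the sketch
above.

    #include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */

    /* For rude RCU-tasks, a preemption-disabled region is a read-side
     * critical section: schedule_on_each_cpu(rcu_tasks_be_rude) cannot
     * complete until every CPU, including one on its way offline, has
     * context-switched out of any such region. */
    static void reader(void)
    {
            void *p;

            preempt_disable();
            p = READ_ONCE(old_data0);
            if (p)
                    do_something_with(p);	/* placeholder for real work */
            preempt_enable();
    }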