From patchwork Mon Jun 20 22:54:03 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12888392
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    "Paul E. McKenney", Neeraj Upadhyay, Eric Dumazet, Alexei Starovoitov,
    Andrii Nakryiko, Martin KaFai Lau, KP Singh
Subject: [PATCH rcu 24/32] rcu-tasks: Pull in tasks blocked within RCU Tasks Trace readers
Date: Mon, 20 Jun 2022 15:54:03 -0700
Message-Id: <20220620225411.3842519-24-paulmck@kernel.org>
In-Reply-To: <20220620225402.GA3842369@paulmck-ThinkPad-P17-Gen-1>
References: <20220620225402.GA3842369@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

This commit scans each CPU's ->rtp_blkd_tasks list, adding each task on
those lists to the list of holdout tasks. This causes the current RCU
Tasks Trace grace period to wait until these tasks exit their RCU Tasks
Trace read-side critical sections. This commit enables later work that
omits the scan of the full task list.

Signed-off-by: Paul E. McKenney
Cc: Neeraj Upadhyay
Cc: Eric Dumazet
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Martin KaFai Lau
Cc: KP Singh
---
 kernel/rcu/tasks.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index a8f95864c921a..d318cdfd2309c 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1492,7 +1492,11 @@ static void rcu_tasks_trace_pertask_handler(void *hop_in)
 /* Initialize for a new RCU-tasks-trace grace period. */
 static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 {
+	LIST_HEAD(blkd_tasks);
 	int cpu;
+	unsigned long flags;
+	struct rcu_tasks_percpu *rtpcp;
+	struct task_struct *t;
 
 	// There shouldn't be any old IPIs, but...
 	for_each_possible_cpu(cpu)
@@ -1506,6 +1510,26 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 	// allow safe access to the hop list.
 	for_each_online_cpu(cpu)
 		smp_call_function_single(cpu, rcu_tasks_trace_pertask_handler, hop, 1);
+
+	// Only after all running tasks have been accounted for is it
+	// safe to take care of the tasks that have blocked within their
+	// current RCU tasks trace read-side critical section.
+	for_each_possible_cpu(cpu) {
+		rtpcp = per_cpu_ptr(rcu_tasks_trace.rtpcpu, cpu);
+		raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+		list_splice_init(&rtpcp->rtp_blkd_tasks, &blkd_tasks);
+		while (!list_empty(&blkd_tasks)) {
+			rcu_read_lock();
+			t = list_first_entry(&blkd_tasks, struct task_struct, trc_blkd_node);
+			list_del_init(&t->trc_blkd_node);
+			list_add(&t->trc_blkd_node, &rtpcp->rtp_blkd_tasks);
+			raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+			rcu_tasks_trace_pertask(t, hop);
+			rcu_read_unlock();
+			raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+		}
+		raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+	}
 }
 
 /*