From patchwork Tue Oct 3 23:29:03 2023
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 13408095
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Frederic Weisbecker, Yong He, Neeraj Upadhyay, Joel Fernandes,
    Boqun Feng, Uladzislau Rezki, RCU
Subject: [PATCH 5/5] srcu: Explain why callbacks invocations can't run concurrently
Date: Wed, 4 Oct 2023 01:29:03 +0200
Message-ID: <20231003232903.7109-6-frederic@kernel.org>
In-Reply-To: <20231003232903.7109-1-frederic@kernel.org>
References: <20231003232903.7109-1-frederic@kernel.org>
X-Mailing-List: rcu@vger.kernel.org

If an SRCU barrier is queued while callbacks are running and a new
callback-invocation worker for the same sdp were to run concurrently, the
SRCU barrier might execute too early. As this requirement is non-obvious,
record it with a comment.

Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/srcutree.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 2bfc8ed1eed2..0351a4e83529 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1715,6 +1715,11 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
+	/*
+	 * Although this function is theoretically re-entrant, concurrent
+	 * callback invocation is disallowed to avoid executing an SRCU barrier
+	 * too early.
+	 */
 	if (sdp->srcu_cblist_invoking ||
 	    !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
 		spin_unlock_irq_rcu_node(sdp);
@@ -1745,6 +1750,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	sdp->srcu_cblist_invoking = false;
 	more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
 	spin_unlock_irq_rcu_node(sdp);
+	/* An SRCU barrier or callbacks from a previous nesting of this work may be pending. */
 	if (more)
 		srcu_schedule_cbs_sdp(sdp, 0);
 }
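
As an aside for readers unfamiliar with the pattern this comment documents,
below is a minimal user-space sketch of the same "invoking flag" technique.
It is not the kernel implementation: struct demo_sdp, demo_invoke_callbacks()
and the pthread mutex are invented stand-ins for sdp, srcu_invoke_callbacks()
and the rcu_node lock, chosen only to show why a second concurrent invoker
must bail out and let the current invoker re-schedule the work instead.

	/*
	 * Hypothetical user-space sketch of the "invoking" flag pattern.
	 * NOT kernel code; all names below are illustrative stand-ins.
	 * Build with: cc -pthread demo.c
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct demo_sdp {
		pthread_mutex_t lock;	/* stand-in for the rcu_node lock */
		bool invoking;		/* mirrors sdp->srcu_cblist_invoking */
		int ready_cbs;		/* stand-in for rcu_segcblist_ready_cbs() */
	};

	static void reschedule_work(struct demo_sdp *sdp)
	{
		/* In the kernel this role is played by srcu_schedule_cbs_sdp(). */
		printf("re-queueing worker for remaining callbacks\n");
	}

	static void demo_invoke_callbacks(struct demo_sdp *sdp)
	{
		int n;
		bool more;

		pthread_mutex_lock(&sdp->lock);
		/*
		 * The work item may be queued again while a previous instance
		 * is still running, but only one instance may invoke callbacks
		 * at a time: a concurrent invoker could run a barrier callback
		 * before the callbacks queued ahead of it have completed.
		 */
		if (sdp->invoking || !sdp->ready_cbs) {
			pthread_mutex_unlock(&sdp->lock);
			return;
		}
		sdp->invoking = true;
		n = sdp->ready_cbs;
		sdp->ready_cbs = 0;
		pthread_mutex_unlock(&sdp->lock);

		/* Callbacks run outside the lock, as in the kernel. */
		printf("invoking %d callback(s)\n", n);

		pthread_mutex_lock(&sdp->lock);
		sdp->invoking = false;
		/* Callbacks (or a barrier) queued meanwhile? Run again later. */
		more = sdp->ready_cbs > 0;
		pthread_mutex_unlock(&sdp->lock);
		if (more)
			reschedule_work(sdp);
	}

	int main(void)
	{
		struct demo_sdp sdp = {
			.lock = PTHREAD_MUTEX_INITIALIZER,
			.invoking = false,
			.ready_cbs = 3,
		};

		demo_invoke_callbacks(&sdp);
		return 0;
	}

The design point the sketch illustrates: rather than letting two workers race,
the second caller observes the flag and returns, and the first caller checks
for newly arrived work when clearing the flag, re-queueing itself if needed.
This serializes all callback invocation for a given sdp without holding the
lock across the callbacks themselves.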