From patchwork Tue Jan 3 17:53:21 2023
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 13087776
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Joel Fernandes (Google)", Frederic Weisbecker, Mathieu Desnoyers,
 Boqun Feng, Josh Triplett, Lai Jiangshan, "Paul E. McKenney",
 rcu@vger.kernel.org, Steven Rostedt, neeraj.iitr10@gmail.com
Subject: [PATCH v2] srcu: Remove memory barrier "E" as it does not do anything
Date: Tue, 3 Jan 2023 17:53:21 +0000
Message-Id: <20230103175321.1910864-1-joel@joelfernandes.org>
X-Mailing-List: rcu@vger.kernel.org

During a flip, we have a full memory barrier before srcu_idx is incremented.
The idea is to order the first-phase scan's read of the lock counters against
the flipping of the index. However, that ordering is already enforced by the
control dependency between the two scans: we flip the index only if the lock
and unlock counts match, and such a match cannot happen if there was a pending
reader before the flip in the first place (observation courtesy of Mathieu
Desnoyers). The litmus test below shows this (test courtesy of Frederic
Weisbecker, control-dependency changes by Boqun and me):

C srcu

(*
 * Bad condition: P0's first scan (SCAN1) saw P1's idx=1 LOCK count
 * increment, even though P1 saw the flip.
 *
 * So basically, the ->po ordering on both P0 and P1 is enforced via ->ppo
 * (control deps) on both sides, and P0 and P1 are interconnected by ->rf
 * relations. Combining the ->ppo with ->rf, a cycle is impossible.
 *)

{}

// updater
P0(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
	int lock1;
	int unlock1;
	int lock0;
	int unlock0;

	// SCAN1
	unlock1 = READ_ONCE(*UNLOCK1);
	smp_mb(); // A
	lock1 = READ_ONCE(*LOCK1);

	// FLIP
	if (lock1 == unlock1) {	// Control dep
		smp_mb(); // E	// Remove E and still passes.
		WRITE_ONCE(*IDX, 1);
		smp_mb(); // D

		// SCAN2
		unlock0 = READ_ONCE(*UNLOCK0);
		smp_mb(); // A
		lock0 = READ_ONCE(*LOCK0);
	}
}

// reader
P1(int *IDX, int *LOCK0, int *UNLOCK0, int *LOCK1, int *UNLOCK1)
{
	int tmp;
	int idx1;
	int idx2;

	// 1st reader
	idx1 = READ_ONCE(*IDX);
	if (idx1 == 0) { // Control dep
		tmp = READ_ONCE(*LOCK0);
		WRITE_ONCE(*LOCK0, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK0);
		WRITE_ONCE(*UNLOCK0, tmp + 1);
	} else {
		tmp = READ_ONCE(*LOCK1);
		WRITE_ONCE(*LOCK1, tmp + 1);
		smp_mb(); /* B and C */
		tmp = READ_ONCE(*UNLOCK1);
		WRITE_ONCE(*UNLOCK1, tmp + 1);
	}
}

exists (0:lock1=1 /\ 1:idx1=1)

Co-developed-by: Frederic Weisbecker
Co-developed-by: Mathieu Desnoyers
Co-developed-by: Boqun Feng
Signed-off-by: Joel Fernandes (Google)
---
v1->v2: Update changelog, keep old comments.

 kernel/rcu/srcutree.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 1c304fec89c0..0f9ba0f9fd12 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -983,15 +983,15 @@ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount)
 static void srcu_flip(struct srcu_struct *ssp)
 {
 	/*
-	 * Ensure that if this updater saw a given reader's increment
-	 * from __srcu_read_lock(), that reader was using an old value
-	 * of ->srcu_idx. Also ensure that if a given reader sees the
-	 * new value of ->srcu_idx, this updater's earlier scans cannot
-	 * have seen that reader's increments (which is OK, because this
-	 * grace period need not wait on that reader).
+	 * Control dependencies on both reader and updater side ensures that if
+	 * this updater saw a given reader's increment from __srcu_read_lock(),
+	 * that reader was using an old value of ->srcu_idx. Also ensures that
+	 * if a given reader sees the new value of ->srcu_idx, this updater's
+	 * earlier scans cannot have seen that reader's increments (which is
+	 * OK, because this grace period need not wait on that reader).
+	 *
+	 * So no need for an smp_mb() before incrementing srcu_idx.
 	 */
-	smp_mb(); /* E */  /* Pairs with B and C. */
-
 	WRITE_ONCE(ssp->srcu_idx, ssp->srcu_idx + 1);
 
 	/*
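
For reference, the litmus test above can be checked against the Linux-kernel
memory model shipped in tools/memory-model (this assumes herd7 is installed;
the srcu-flip.litmus file name is just a placeholder for wherever the test is
saved):

  $ cd tools/memory-model
  $ herd7 -conf linux-kernel.cfg /path/to/srcu-flip.litmus

herd7 should report the exists clause as unreachable ("Never"), and, per the
"Remove E and still passes" note in the test, it keeps doing so with barrier
E deleted.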