From patchwork Wed Oct 19 22:58:36 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 01/11] srcu: Convert ->srcu_lock_count and ->srcu_unlock_count to atomic
Date: Wed, 19 Oct 2022 15:58:36 -0700
Message-Id: <20221019225846.2501109-1-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012470

NMI-safe variants of srcu_read_lock() and srcu_read_unlock() are needed
by printk(), which on many architectures entails read-modify-write
atomic operations.  This commit prepares Tree SRCU for this change by
converting both ->srcu_lock_count and ->srcu_unlock_count to
atomic_long_t.

[ paulmck: Apply feedback from John Ogness. ]

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Reviewed-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 include/linux/srcutree.h |  4 ++--
 kernel/rcu/srcutree.c    | 24 ++++++++++++------------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index e3014319d1ade..0c4eca07d78d5 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -23,8 +23,8 @@ struct srcu_struct;
  */
 struct srcu_data {
     /* Read-side state. */
-    unsigned long srcu_lock_count[2];    /* Locks per CPU. */
-    unsigned long srcu_unlock_count[2];  /* Unlocks per CPU. */
+    atomic_long_t srcu_lock_count[2];    /* Locks per CPU. */
+    atomic_long_t srcu_unlock_count[2];  /* Unlocks per CPU. */
 
     /* Update-side state. */
     spinlock_t __private lock ____cacheline_internodealigned_in_smp;

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 1c304fec89c02..25e9458da6a26 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -417,7 +417,7 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
     for_each_possible_cpu(cpu) {
         struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-        sum += READ_ONCE(cpuc->srcu_lock_count[idx]);
+        sum += atomic_long_read(&cpuc->srcu_lock_count[idx]);
     }
     return sum;
 }
@@ -434,7 +434,7 @@ static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
     for_each_possible_cpu(cpu) {
         struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-        sum += READ_ONCE(cpuc->srcu_unlock_count[idx]);
+        sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
     }
     return sum;
 }
@@ -503,10 +503,10 @@ static bool srcu_readers_active(struct srcu_struct *ssp)
     for_each_possible_cpu(cpu) {
         struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
-        sum += READ_ONCE(cpuc->srcu_lock_count[0]);
-        sum += READ_ONCE(cpuc->srcu_lock_count[1]);
-        sum -= READ_ONCE(cpuc->srcu_unlock_count[0]);
-        sum -= READ_ONCE(cpuc->srcu_unlock_count[1]);
+        sum += atomic_long_read(&cpuc->srcu_lock_count[0]);
+        sum += atomic_long_read(&cpuc->srcu_lock_count[1]);
+        sum -= atomic_long_read(&cpuc->srcu_unlock_count[0]);
+        sum -= atomic_long_read(&cpuc->srcu_unlock_count[1]);
     }
     return sum;
 }
@@ -636,7 +636,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
     int idx;
 
     idx = READ_ONCE(ssp->srcu_idx) & 0x1;
-    this_cpu_inc(ssp->sda->srcu_lock_count[idx]);
+    this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
     smp_mb(); /* B */  /* Avoid leaking the critical section. */
     return idx;
 }
@@ -650,7 +650,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
 void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
     smp_mb(); /* C */  /* Avoid leaking the critical section. */
-    this_cpu_inc(ssp->sda->srcu_unlock_count[idx]);
+    this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -1687,8 +1687,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
             struct srcu_data *sdp;
 
             sdp = per_cpu_ptr(ssp->sda, cpu);
-            u0 = data_race(sdp->srcu_unlock_count[!idx]);
-            u1 = data_race(sdp->srcu_unlock_count[idx]);
+            u0 = data_race(atomic_long_read(&sdp->srcu_unlock_count[!idx]));
+            u1 = data_race(atomic_long_read(&sdp->srcu_unlock_count[idx]));
 
             /*
              * Make sure that a lock is always counted if the corresponding
@@ -1696,8 +1696,8 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
              */
             smp_rmb();
 
-            l0 = data_race(sdp->srcu_lock_count[!idx]);
-            l1 = data_race(sdp->srcu_lock_count[idx]);
+            l0 = data_race(atomic_long_read(&sdp->srcu_lock_count[!idx]));
+            l1 = data_race(atomic_long_read(&sdp->srcu_lock_count[idx]));
 
             c0 = l0 - u0;
             c1 = l1 - u1;
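[ ed: A sketch of why this conversion keeps the existing fast path
  cheap.  atomic_long_t is a wrapped long, so the NMI-unsafe reader can
  keep a plain percpu increment by naming the .counter field directly,
  while the grace-period code sums the very same storage through the
  atomic API.  Both lines are taken from the diff above; only the
  side-by-side pairing is editorial: ]

    this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);  /* reader fast path */
    sum += atomic_long_read(&cpuc->srcu_lock_count[idx]);  /* update-side scan */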
McKenney" , Randy Dunlap , Frederic Weisbecker , Thomas Gleixner , John Ogness , Petr Mladek Subject: [PATCH v3 rcu 02/11] srcu: Create an srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() Date: Wed, 19 Oct 2022 15:58:37 -0700 Message-Id: <20221019225846.2501109-2-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> References: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org On strict load-store architectures, the use of this_cpu_inc() by srcu_read_lock() and srcu_read_unlock() is not NMI-safe in TREE SRCU. To see this suppose that an NMI arrives in the middle of srcu_read_lock(), just after it has read ->srcu_lock_count, but before it has written the incremented value back to memory. If that NMI handler also does srcu_read_lock() and srcu_read_lock() on that same srcu_struct structure, then upon return from that NMI handler, the interrupted srcu_read_lock() will overwrite the NMI handler's update to ->srcu_lock_count, but leave unchanged the NMI handler's update by srcu_read_unlock() to ->srcu_unlock_count. This can result in a too-short SRCU grace period, which can in turn result in arbitrary memory corruption. If the NMI handler instead interrupts the srcu_read_unlock(), this can result in eternal SRCU grace periods, which is not much better. This commit therefore creates a pair of new srcu_read_lock_nmisafe() and srcu_read_unlock_nmisafe() functions, which allow SRCU readers in both NMI handlers and in process and IRQ context. It is bad practice to mix the existing and the new _nmisafe() primitives on the same srcu_struct structure. Use one set or the other, not both. Just to underline that "bad practice" point, using srcu_read_lock() at process level and srcu_read_lock_nmisafe() in your NMI handler will not, repeat NOT, work. If you do not immediately understand why this is the case, please review the earlier paragraphs in this commit log. [ paulmck: Apply kernel test robot feedback. ] [ paulmck: Apply feedback from Randy Dunlap. ] [ paulmck: Apply feedback from John Ogness. ] Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/ Signed-off-by: Paul E. McKenney Acked-by: Randy Dunlap # build-tested Reviewed-by: Frederic Weisbecker Cc: Thomas Gleixner Cc: John Ogness Cc: Petr Mladek --- arch/Kconfig | 3 +++ include/linux/srcu.h | 39 ++++++++++++++++++++++++++++++++++++ include/linux/srcutiny.h | 11 ++++++++++ include/linux/srcutree.h | 3 +++ kernel/rcu/Kconfig | 3 +++ kernel/rcu/rcutorture.c | 11 ++++++++-- kernel/rcu/srcutree.c | 43 ++++++++++++++++++++++++++++++++++++---- 7 files changed, 107 insertions(+), 6 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 8f138e580d1ae..6b95244c3057d 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -468,6 +468,9 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM config ARCH_HAVE_NMI_SAFE_CMPXCHG bool +config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS + bool + config HAVE_ALIGNED_STRUCT_PAGE bool help diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 01226e4d960a0..2cc8321c0c86a 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -52,6 +52,8 @@ int init_srcu_struct(struct srcu_struct *ssp); #else /* Dummy definition for things like notifiers. Actual use gets link error. 
---
 arch/Kconfig             |  3 +++
 include/linux/srcu.h     | 39 ++++++++++++++++++++++++++++++++++++
 include/linux/srcutiny.h | 11 ++++++++++
 include/linux/srcutree.h |  3 +++
 kernel/rcu/Kconfig       |  3 +++
 kernel/rcu/rcutorture.c  | 11 ++++++++--
 kernel/rcu/srcutree.c    | 43 ++++++++++++++++++++++++++++++++++++----
 7 files changed, 107 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8f138e580d1ae..6b95244c3057d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -468,6 +468,9 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
     bool
 
+config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
+    bool
+
 config HAVE_ALIGNED_STRUCT_PAGE
     bool
     help

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 01226e4d960a0..2cc8321c0c86a 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -52,6 +52,8 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #else
 /* Dummy definition for things like notifiers.  Actual use gets link error. */
 struct srcu_struct { };
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 #endif
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
@@ -166,6 +168,25 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
     return retval;
 }
 
+/**
+ * srcu_read_lock_nmisafe - register a new reader for an SRCU-protected structure.
+ * @ssp: srcu_struct in which to register the new reader.
+ *
+ * Enter an SRCU read-side critical section, but in an NMI-safe manner.
+ * See srcu_read_lock() for more information.
+ */
+static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
+{
+    int retval;
+
+    if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+        retval = __srcu_read_lock_nmisafe(ssp);
+    else
+        retval = __srcu_read_lock(ssp);
+    rcu_lock_acquire(&(ssp)->dep_map);
+    return retval;
+}
+
 /* Used by tracing, cannot be traced and cannot invoke lockdep. */
 static inline notrace int
 srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
@@ -191,6 +212,24 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
     __srcu_read_unlock(ssp, idx);
 }
 
+/**
+ * srcu_read_unlock_nmisafe - unregister an old reader from an SRCU-protected structure.
+ * @ssp: srcu_struct in which to unregister the old reader.
+ * @idx: return value from corresponding srcu_read_lock().
+ *
+ * Exit an SRCU read-side critical section, but in an NMI-safe manner.
+ */
+static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+    __releases(ssp)
+{
+    WARN_ON_ONCE(idx & ~0x1);
+    rcu_lock_release(&(ssp)->dep_map);
+    if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
+        __srcu_read_unlock_nmisafe(ssp, idx);
+    else
+        __srcu_read_unlock(ssp, idx);
+}
+
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
 srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)

diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 5aa5e0faf6a12..278331bd77660 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -90,4 +90,15 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
          data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
 
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+    BUG();
+    return 0;
+}
+
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+    BUG();
+}
+
 #endif

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 0c4eca07d78d5..d45dd507f4a56 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -154,4 +154,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+
 #endif

diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index d471d22a5e21b..f53ad63b2bc63 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -72,6 +72,9 @@ config TREE_SRCU
     help
       This option selects the full-fledged version of SRCU.
 
+config NEED_SRCU_NMI_SAFE
+    def_bool HAVE_NMI && !ARCH_HAS_NMI_SAFE_THIS_CPU_OPS && !TINY_SRCU
+
 config TASKS_RCU_GENERIC
     def_bool TASKS_RCU || TASKS_RUDE_RCU || TASKS_TRACE_RCU
     select SRCU

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 503c2aa845a4a..b4c74ce102256 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -615,10 +615,14 @@ static struct rcu_torture_ops rcu_busted_ops = {
 DEFINE_STATIC_SRCU(srcu_ctl);
 static struct srcu_struct srcu_ctld;
 static struct srcu_struct *srcu_ctlp = &srcu_ctl;
+static struct rcu_torture_ops srcud_ops;
 
 static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
 {
-    return srcu_read_lock(srcu_ctlp);
+    if (cur_ops == &srcud_ops)
+        return srcu_read_lock_nmisafe(srcu_ctlp);
+    else
+        return srcu_read_lock(srcu_ctlp);
 }
 
 static void
@@ -642,7 +646,10 @@ srcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
 
 static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
 {
-    srcu_read_unlock(srcu_ctlp, idx);
+    if (cur_ops == &srcud_ops)
+        srcu_read_unlock_nmisafe(srcu_ctlp, idx);
+    else
+        srcu_read_unlock(srcu_ctlp, idx);
 }
 
 static int torture_srcu_read_lock_held(void)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 25e9458da6a26..32a94b254d29f 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -654,6 +654,41 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
+#ifdef CONFIG_NEED_SRCU_NMI_SAFE
+
+/*
+ * Counts the new reader in the appropriate per-CPU element of the
+ * srcu_struct, but in an NMI-safe manner using RMW atomics.
+ * Returns an index that must be passed to the matching srcu_read_unlock().
+ */
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+    int idx;
+    struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+    idx = READ_ONCE(ssp->srcu_idx) & 0x1;
+    atomic_long_inc(&sdp->srcu_lock_count[idx]);
+    smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+    return idx;
+}
+EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
+
+/*
+ * Removes the count for the old reader from the appropriate per-CPU
+ * element of the srcu_struct.  Note that this may well be a different
+ * CPU than that which was incremented by the corresponding srcu_read_lock().
+ */
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+    struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
+
+    smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
+    atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+}
+EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
+
+#endif // CONFIG_NEED_SRCU_NMI_SAFE
+
 /*
  * Start an SRCU grace period.
  */
@@ -1090,7 +1125,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
     int ss_state;
 
     check_init_srcu_struct(ssp);
-    idx = srcu_read_lock(ssp);
+    idx = __srcu_read_lock_nmisafe(ssp);
     ss_state = smp_load_acquire(&ssp->srcu_size_state);
     if (ss_state < SRCU_SIZE_WAIT_CALL)
         sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1123,7 +1158,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
         srcu_funnel_gp_start(ssp, sdp, s, do_norm);
     else if (needexp)
         srcu_funnel_exp_start(ssp, sdp_mynode, s);
-    srcu_read_unlock(ssp, idx);
+    __srcu_read_unlock_nmisafe(ssp, idx);
     return s;
 }
 
@@ -1427,13 +1462,13 @@ void srcu_barrier(struct srcu_struct *ssp)
     /* Initial count prevents reaching zero until all CBs are posted. */
     atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-    idx = srcu_read_lock(ssp);
+    idx = __srcu_read_lock_nmisafe(ssp);
     if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
         srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
     else
         for_each_possible_cpu(cpu)
             srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-    srcu_read_unlock(ssp, idx);
+    __srcu_read_unlock_nmisafe(ssp, idx);
 
     /* Remove the initial count, at which point reaching zero can happen. */
     if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
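[ ed: A hedged usage sketch of the new API; my_srcu and my_nmi_handler
  are hypothetical and not from the patch.  Note the NEED_SRCU_NMI_SAFE
  logic above: on architectures declaring
  ARCH_HAS_NMI_SAFE_THIS_CPU_OPS, or with TINY_SRCU, the _nmisafe()
  wrappers simply fall back to the ordinary implementations. ]

    DEFINE_SRCU(my_srcu);                /* hypothetical */

    static void my_nmi_handler(void)     /* hypothetical */
    {
        int idx;

        idx = srcu_read_lock_nmisafe(&my_srcu);
        /* ... dereference my_srcu-protected data ... */
        srcu_read_unlock_nmisafe(&my_srcu, idx);
    }

    /* Per the commit log, every other reader of my_srcu, including
     * process-context readers, must then also use the _nmisafe()
     * flavor. */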
From patchwork Wed Oct 19 22:58:38 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 03/11] srcu: Check for consistent per-CPU per-srcu_struct NMI safety
Date: Wed, 19 Oct 2022 15:58:38 -0700
Message-Id: <20221019225846.2501109-3-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012478

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives on a per-CPU basis.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Reviewed-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 include/linux/srcu.h     |  4 ++--
 include/linux/srcutiny.h |  4 ++--
 include/linux/srcutree.h |  9 +++++++--
 kernel/rcu/srcutree.c    | 38 ++++++++++++++++++++++++++++++------
 4 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 2cc8321c0c86a..565f60d574847 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -180,7 +180,7 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
     int retval;
 
     if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-        retval = __srcu_read_lock_nmisafe(ssp);
+        retval = __srcu_read_lock_nmisafe(ssp, true);
     else
         retval = __srcu_read_lock(ssp);
     rcu_lock_acquire(&(ssp)->dep_map);
@@ -225,7 +225,7 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
     WARN_ON_ONCE(idx & ~0x1);
     rcu_lock_release(&(ssp)->dep_map);
     if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-        __srcu_read_unlock_nmisafe(ssp, idx);
+        __srcu_read_unlock_nmisafe(ssp, idx, true);
     else
         __srcu_read_unlock(ssp, idx);
 }

diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 278331bd77660..f890301f123df 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -90,13 +90,13 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
          data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
 
-static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
     BUG();
     return 0;
 }
 
-static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
     BUG();
 }

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index d45dd507f4a56..35ffdedf86ccb 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -25,6 +25,7 @@ struct srcu_data {
     /* Read-side state. */
     atomic_long_t srcu_lock_count[2];    /* Locks per CPU. */
     atomic_long_t srcu_unlock_count[2];  /* Unlocks per CPU. */
+    int srcu_nmi_safety;                 /* NMI-safe srcu_struct structure? */
 
     /* Update-side state. */
     spinlock_t __private lock ____cacheline_internodealigned_in_smp;
@@ -42,6 +43,10 @@ struct srcu_data {
     struct srcu_struct *ssp;
 };
 
+#define SRCU_NMI_UNKNOWN    0x0
+#define SRCU_NMI_NMI_UNSAFE 0x1
+#define SRCU_NMI_NMI_SAFE   0x2
+
 /*
  * Node in SRCU combining tree, similar in function to rcu_data.
  */
@@ -154,7 +159,7 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 
 #endif

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 32a94b254d29f..30575864fcfa3 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -626,6 +626,26 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
+/*
+ * Check for consistent NMI safety.
+ */
+static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
+{
+    int nmi_safe_mask = 1 << nmi_safe;
+    int old_nmi_safe_mask;
+    struct srcu_data *sdp;
+
+    if (!IS_ENABLED(CONFIG_PROVE_RCU))
+        return;
+    sdp = raw_cpu_ptr(ssp->sda);
+    old_nmi_safe_mask = READ_ONCE(sdp->srcu_nmi_safety);
+    if (!old_nmi_safe_mask) {
+        WRITE_ONCE(sdp->srcu_nmi_safety, nmi_safe_mask);
+        return;
+    }
+    WARN_ONCE(old_nmi_safe_mask != nmi_safe_mask, "CPU %d old state %d new state %d\n", sdp->cpu, old_nmi_safe_mask, nmi_safe_mask);
+}
+
 /*
  * Counts the new reader in the appropriate per-CPU element of the
  * srcu_struct.
@@ -638,6 +658,7 @@ int __srcu_read_lock(struct srcu_struct *ssp)
     idx = READ_ONCE(ssp->srcu_idx) & 0x1;
     this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
     smp_mb(); /* B */  /* Avoid leaking the critical section. */
+    srcu_check_nmi_safety(ssp, false);
     return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock);
@@ -651,6 +672,7 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
     smp_mb(); /* C */  /* Avoid leaking the critical section. */
     this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
+    srcu_check_nmi_safety(ssp, false);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -661,7 +683,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
  * srcu_struct, but in an NMI-safe manner using RMW atomics.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 {
     int idx;
     struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
@@ -669,6 +691,8 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
     idx = READ_ONCE(ssp->srcu_idx) & 0x1;
     atomic_long_inc(&sdp->srcu_lock_count[idx]);
     smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
+    if (chknmisafe)
+        srcu_check_nmi_safety(ssp, true);
     return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
@@ -678,12 +702,14 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
  * element of the srcu_struct.  Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
  */
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
 {
     struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
 
     smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
     atomic_long_inc(&sdp->srcu_unlock_count[idx]);
+    if (chknmisafe)
+        srcu_check_nmi_safety(ssp, true);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
 
@@ -1125,7 +1151,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
     int ss_state;
 
     check_init_srcu_struct(ssp);
-    idx = __srcu_read_lock_nmisafe(ssp);
+    idx = __srcu_read_lock_nmisafe(ssp, false);
     ss_state = smp_load_acquire(&ssp->srcu_size_state);
     if (ss_state < SRCU_SIZE_WAIT_CALL)
         sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1158,7 +1184,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
         srcu_funnel_gp_start(ssp, sdp, s, do_norm);
     else if (needexp)
         srcu_funnel_exp_start(ssp, sdp_mynode, s);
-    __srcu_read_unlock_nmisafe(ssp, idx);
+    __srcu_read_unlock_nmisafe(ssp, idx, false);
     return s;
 }
 
@@ -1462,13 +1488,13 @@ void srcu_barrier(struct srcu_struct *ssp)
     /* Initial count prevents reaching zero until all CBs are posted. */
     atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-    idx = __srcu_read_lock_nmisafe(ssp);
+    idx = __srcu_read_lock_nmisafe(ssp, false);
     if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
         srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
     else
         for_each_possible_cpu(cpu)
             srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-    __srcu_read_unlock_nmisafe(ssp, idx);
+    __srcu_read_unlock_nmisafe(ssp, idx, false);
 
     /* Remove the initial count, at which point reaching zero can happen. */
     if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))
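[ ed: An editorial sketch of the misuse this per-CPU state machine
  catches; my_srcu is hypothetical.  On a kernel with
  CONFIG_NEED_SRCU_NMI_SAFE=y and CONFIG_PROVE_RCU=y (a later patch in
  this series extends the checking to the remaining architectures): ]

    idx = srcu_read_lock(&my_srcu);          /* this CPU records
                                              * SRCU_NMI_NMI_UNSAFE   */
    srcu_read_unlock(&my_srcu, idx);

    idx = srcu_read_lock_nmisafe(&my_srcu);  /* same CPU, other flavor:
                                              * WARN_ONCE("CPU %d old
                                              * state 1 new state 2") */
    srcu_read_unlock_nmisafe(&my_srcu, idx);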
From patchwork Wed Oct 19 22:58:39 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 04/11] srcu: Check for consistent global per-srcu_struct NMI safety
Date: Wed, 19 Oct 2022 15:58:39 -0700
Message-Id: <20221019225846.2501109-4-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012476

This commit adds runtime checks to verify that a given srcu_struct uses
consistent NMI-safe (or not) read-side primitives globally, but based
on the per-CPU data.  These global checks are made by the grace-period
code that must scan the srcu_data structures anyway, and are done only
in kernels built with CONFIG_PROVE_RCU=y.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Signed-off-by: Paul E. McKenney
Reviewed-by: Frederic Weisbecker
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 kernel/rcu/srcutree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 30575864fcfa3..87ae6f5c1edae 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -429,13 +429,18 @@ static unsigned long srcu_readers_lock_idx(struct srcu_struct *ssp, int idx)
 static unsigned long srcu_readers_unlock_idx(struct srcu_struct *ssp, int idx)
 {
     int cpu;
+    unsigned long mask = 0;
     unsigned long sum = 0;
 
     for_each_possible_cpu(cpu) {
         struct srcu_data *cpuc = per_cpu_ptr(ssp->sda, cpu);
 
         sum += atomic_long_read(&cpuc->srcu_unlock_count[idx]);
+        if (IS_ENABLED(CONFIG_PROVE_RCU))
+            mask = mask | READ_ONCE(cpuc->srcu_nmi_safety);
     }
+    WARN_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && (mask & (mask >> 1)),
+          "Mixed NMI-safe readers for srcu_struct at %ps.\n", ssp);
     return sum;
 }
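[ ed: A worked example of the mask test.  Each CPU contributes 0x1
  (SRCU_NMI_NMI_UNSAFE) or 0x2 (SRCU_NMI_NMI_SAFE), so after OR-ing
  across CPUs, (mask & (mask >> 1)) is nonzero exactly when both
  flavors were seen: ]

    mask == 0x1 (all unsafe):  0x1 & (0x1 >> 1) == 0x1 & 0x0 == 0  /* quiet */
    mask == 0x2 (all safe):    0x2 & (0x2 >> 1) == 0x2 & 0x1 == 0  /* quiet */
    mask == 0x3 (mixed):       0x3 & (0x3 >> 1) == 0x3 & 0x1 == 1  /* WARN  */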
Peter Anvin" , John Ogness , Petr Mladek , x86@kernel.org Subject: [PATCH v3 rcu 05/11] arch/x86: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Date: Wed, 19 Oct 2022 15:58:40 -0700 Message-Id: <20221019225846.2501109-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> References: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The x86 architecture uses an add-to-memory instruction to implement this_cpu_add(), which is NMI safe. This means that the old and more-efficient srcu_read_lock() may be used in NMI context, without the need for srcu_read_lock_nmisafe(). Therefore, add the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/x86/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current srcu_read_lock() behavior. Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/ Signed-off-by: Paul E. McKenney Reviewed-by: Frederic Weisbecker Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: John Ogness Cc: Petr Mladek Cc: --- arch/x86/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 6d1879ef933a2..bcb3190eaa266 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -81,6 +81,7 @@ config X86 select ARCH_HAS_KCOV if X86_64 select ARCH_HAS_MEM_ENCRYPT select ARCH_HAS_MEMBARRIER_SYNC_CORE + select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PMEM_API if X86_64 select ARCH_HAS_PTE_DEVMAP if X86_64 From patchwork Wed Oct 19 22:58:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 13012471 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6511CC43219 for ; Wed, 19 Oct 2022 22:58:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231299AbiJSW6x (ORCPT ); Wed, 19 Oct 2022 18:58:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44494 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231209AbiJSW6w (ORCPT ); Wed, 19 Oct 2022 18:58:52 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 134321CC768; Wed, 19 Oct 2022 15:58:50 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 6074F619E8; Wed, 19 Oct 2022 22:58:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4F393C4315C; Wed, 19 Oct 2022 22:58:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1666220328; bh=Nb+xsXAeDl9rjVwYT8+uivwa9boPW8pnKYqH3COPbBY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VfqMvXcayMmhZ5/iuy3j3lXwyVT0aBX9tfr5xmh7SpBuHwKwazuBBRp/wJlePRVBr mh7NW+8FEl+rP7Bu4leZvBTLjzJzpKMMWOXoDn+LTxRpuVzWLVJqAJOl8nWaopjnzv kARyhKaFZJSQ5r3CgujnfJnwIQ0uUgPV5WkMHfLOU2CBMkjxREKJe4nEZFBwp7ODwa AbCK5NvLIINjI72QR/DrBWPIO2DVw4C2QCMshI6JjQoFMDTAsyFvx4/5SoEMrgeGUa J2t32wNwZJr07kzRfhQM3X4LuQ61Z0SrNeNFHE0hp2ba/rUMHOtnlm1OXlh51UFI0q EQtpf4HBUuGIQ== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id B59075C0A40; Wed, 19 Oct 2022 15:58:47 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" , Neeraj Upadhyay , Frederic Weisbecker , Boqun Feng , Catalin Marinas , Will Deacon , Thomas Gleixner , John Ogness , Petr Mladek , linux-arm-kernel@lists.infradead.org Subject: [PATCH v3 rcu 06/11] arch/arm64: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Date: Wed, 19 Oct 2022 15:58:41 -0700 Message-Id: <20221019225846.2501109-6-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> References: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The arm64 architecture uses either an LL/SC loop (old systems) or an LSE stadd instruction (new systems) to implement this_cpu_add(), both of which are NMI safe. This means that the old and more-efficient srcu_read_lock() may be used in NMI context, without the need for srcu_read_lock_nmisafe(). Therefore, add the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/arm64/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current srcu_read_lock() behavior. Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/ Suggested-by: Neeraj Upadhyay Suggested-by: Frederic Weisbecker Suggested-by: Boqun Feng Signed-off-by: Paul E. 
McKenney Cc: Catalin Marinas Cc: Will Deacon Cc: Thomas Gleixner Cc: John Ogness Cc: Petr Mladek Cc: --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 505c8a1ccbe0c..099ee812f3f18 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -31,6 +31,7 @@ config ARM64 select ARCH_HAS_KCOV select ARCH_HAS_KEEPINITRD select ARCH_HAS_MEMBARRIER_SYNC_CORE + select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE select ARCH_HAS_PTE_DEVMAP select ARCH_HAS_PTE_SPECIAL From patchwork Wed Oct 19 22:58:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 13012480 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3DFE4C04A95 for ; Wed, 19 Oct 2022 22:59:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230357AbiJSW70 (ORCPT ); Wed, 19 Oct 2022 18:59:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44510 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231392AbiJSW65 (ORCPT ); Wed, 19 Oct 2022 18:58:57 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A9821CC762; Wed, 19 Oct 2022 15:58:54 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 7E0CAB82625; Wed, 19 Oct 2022 22:58:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4F2CFC43158; Wed, 19 Oct 2022 22:58:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1666220328; bh=q+Nh2vqtEVu5d5wW0RvHG/vkrFq6MOVxBevrDf1eSrY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uN2Y657PIuHGDtEfThRsABBfDWmMY2nPkfEzviDg/z53XUHh45lsuuSSilpTZJSpz 3NHYNYSreQ/zQyMhpaYFRd684T2E8r+C4cuCUGN8buVHW+bgLaKsVO5niw4BheQgfv oI5CKDeCHTWd2QrYmZc51KrYDFFP33b8kAwHeXne1pIAaKFfwlxGCJq7Uzo7gTSBRC lXUSAqY6VVWYmet5AZvswNsazH0XCr6W2/y4lIkcQ/XP8gY6a5SvaeLcOPBmnkRYcq /qOtDVcglyW/+UqM11sLKetAjdUp4xjzvBGSdoE0dPfB32hce9PNR70MKY34hCpHgA t5I0+l6rN+eLw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id B781A5C0AC5; Wed, 19 Oct 2022 15:58:47 -0700 (PDT) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org, "Paul E. McKenney" , Neeraj Upadhyay , Frederic Weisbecker , Boqun Feng , Huacai Chen , WANG Xuerui , Thomas Gleixner , John Ogness , Petr Mladek , loongarch@lists.linux.dev Subject: [PATCH v3 rcu 07/11] arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option Date: Wed, 19 Oct 2022 15:58:42 -0700 Message-Id: <20221019225846.2501109-7-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> References: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The loongarch architecture uses the atomic read-modify-write amadd instruction to implement this_cpu_add(), which is NMI safe. 
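[ ed: The LL/SC case is NMI safe for a different reason than the
  single-instruction case; a sketch with illustrative assembly, not
  from the patch: the store-exclusive fails if anything touched the
  location since the load-exclusive, so a racing NMI merely forces a
  retry instead of losing an update: ]

    /* Old-style arm64 percpu increment, roughly:
     *
     * 1:  ldxr  x0, [x1]      // load-exclusive
     *     add   x0, x0, #1
     *     stxr  w2, x0, [x1]  // fails if [x1] changed since ldxr
     *     cbnz  w2, 1b        // retry on failure
     *
     * Either way -- retry loop or LSE stadd -- no increment is lost. */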
From patchwork Wed Oct 19 22:58:42 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 07/11] arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
Date: Wed, 19 Oct 2022 15:58:42 -0700
Message-Id: <20221019225846.2501109-7-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012480

The loongarch architecture uses the atomic read-modify-write amadd
instruction to implement this_cpu_add(), which is NMI safe.  This means
that the old and more-efficient srcu_read_lock() may be used in NMI
context, without the need for srcu_read_lock_nmisafe().  Therefore, add
the new Kconfig option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to
arch/loongarch/Kconfig, which will cause NEED_SRCU_NMI_SAFE to be
deselected, thus preserving the current srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Suggested-by: Neeraj Upadhyay
Suggested-by: Frederic Weisbecker
Suggested-by: Boqun Feng
Signed-off-by: Paul E. McKenney
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
---
 arch/loongarch/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 903096bd87f88..386adde2feffb 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -10,6 +10,7 @@ config LOONGARCH
     select ARCH_ENABLE_MEMORY_HOTPLUG
     select ARCH_ENABLE_MEMORY_HOTREMOVE
     select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
+    select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
     select ARCH_HAS_PTE_SPECIAL
     select ARCH_HAS_TICK_BROADCAST  if GENERIC_CLOCKEVENTS_BROADCAST
     select ARCH_INLINE_READ_LOCK    if !PREEMPTION

From patchwork Wed Oct 19 22:58:43 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 08/11] arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
Date: Wed, 19 Oct 2022 15:58:43 -0700
Message-Id: <20221019225846.2501109-8-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012474

The s390 architecture uses either a cmpxchg loop (old systems) or the
laa add-to-memory instruction (new systems) to implement
this_cpu_add(), both of which are NMI safe.  This means that the old
and more-efficient srcu_read_lock() may be used in NMI context, without
the need for srcu_read_lock_nmisafe().  Therefore, add the new Kconfig
option ARCH_HAS_NMI_SAFE_THIS_CPU_OPS to arch/s390/Kconfig, which will
cause NEED_SRCU_NMI_SAFE to be deselected, thus preserving the current
srcu_read_lock() behavior.

Link: https://lore.kernel.org/all/20220910221947.171557773@linutronix.de/
Suggested-by: Neeraj Upadhyay
Suggested-by: Frederic Weisbecker
Suggested-by: Boqun Feng
Signed-off-by: Paul E. McKenney
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: Thomas Gleixner
Cc: John Ogness
Cc: Petr Mladek
Acked-by: Heiko Carstens
---
 arch/s390/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 318fce77601d3..0acdfda332908 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -73,6 +73,7 @@ config S390
     select ARCH_HAS_GIGANTIC_PAGE
     select ARCH_HAS_KCOV
     select ARCH_HAS_MEM_ENCRYPT
+    select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
     select ARCH_HAS_PTE_SPECIAL
     select ARCH_HAS_SCALED_CPUTIME
     select ARCH_HAS_SET_MEMORY

From patchwork Wed Oct 19 22:58:44 2022
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Subject: [PATCH v3 rcu 09/11] srcu: Warn when NMI-unsafe API is used in NMI
Date: Wed, 19 Oct 2022 15:58:44 -0700
Message-Id: <20221019225846.2501109-9-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Patchwork-Id: 13012472

From: Frederic Weisbecker

Using the NMI-unsafe reader API from within an NMI handler is very
likely to be buggy for three reasons:

1) NMIs aren't strictly re-entrant (a pending nested NMI will execute
   at the end of the current one) so it should be fine to use a
   non-atomic increment here.  However, breakpoints can still interrupt
   NMIs and if a breakpoint callback has a reader on that same ssp, a
   racy increment can happen.

2) If the only reader site for a given srcu_struct structure is in an
   NMI handler, then RCU should be used instead of SRCU.

3) Because of the previous reason (2), an srcu_struct structure having
   an SRCU read side critical section in an NMI handler is likely to
   have another one from a task context.

For all these reasons, warn if an NMI-unsafe reader API is used from an
NMI handler.

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/srcutree.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 87ae6f5c1edae..18bb696cff8ca 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -642,6 +642,8 @@ static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 
     if (!IS_ENABLED(CONFIG_PROVE_RCU))
         return;
+    /* NMI-unsafe use in NMI is a bad sign */
+    WARN_ON_ONCE(!nmi_safe && in_nmi());
     sdp = raw_cpu_ptr(ssp->sda);
     old_nmi_safe_mask = READ_ONCE(sdp->srcu_nmi_safety);
     if (!old_nmi_safe_mask) {
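[ ed: A hedged sketch of what the new assertion rejects; my_srcu and
  my_nmi_handler are hypothetical.  Under CONFIG_PROVE_RCU, an
  NMI-unsafe reader running in NMI context now splats immediately at
  the call site rather than corrupting counters at some hard-to-debug
  later time: ]

    static void my_nmi_handler(void)
    {
        int idx;

        idx = srcu_read_lock(&my_srcu);  /* !nmi_safe && in_nmi():
                                          * WARN_ON_ONCE fires here. */
        srcu_read_unlock(&my_srcu, idx);
    }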
 kernel/rcu/srcutree.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 18bb696cff8ca..272830a87e566 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1158,6 +1158,11 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	int ss_state;
 
 	check_init_srcu_struct(ssp);
+	/*
+	 * While starting a new grace period, make sure we are in an
+	 * SRCU read-side critical section so that the grace-period
+	 * sequence number cannot wrap around in the meantime.
+	 */
 	idx = __srcu_read_lock_nmisafe(ssp, false);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)

From patchwork Wed Oct 19 22:58:46 2022
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 13012473
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, rostedt@goodmis.org,
    Frederic Weisbecker, "Paul E. McKenney"
Subject: [PATCH v3 rcu 11/11] srcu: Debug NMI safety even on archs that don't require it
Date: Wed, 19 Oct 2022 15:58:46 -0700
Message-Id: <20221019225846.2501109-11-paulmck@kernel.org>
In-Reply-To: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
References: <20221019225838.GA2500612@paulmck-ThinkPad-P17-Gen-1>
X-Mailing-List: rcu@vger.kernel.org

From: Frederic Weisbecker

Currently, NMI-safety debugging is performed only on architectures that
don't support an NMI-safe this_cpu_inc(). Reorder the code so that
other architectures, such as x86, also detect bad uses.

[ paulmck: Apply kernel test robot, Stephen Rothwell, and Zqiang feedback. ]

Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
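To make the effect concrete (illustrative sketch only, with a made-up
demo_srcu): once the check moves into the common srcu_read_lock*()
wrappers, mixing the two reader flavors on one srcu_struct triggers the
consistency warning on every architecture, x86 included, not only on
those selecting CONFIG_NEED_SRCU_NMI_SAFE:

    #include <linux/srcu.h>

    DEFINE_STATIC_SRCU(demo_srcu);

    static void demo_mixed_readers(void)
    {
            int idx;

            idx = srcu_read_lock(&demo_srcu);  /* records NMI-unsafe use */
            srcu_read_unlock(&demo_srcu, idx);

            /*
             * A different flavor on the same srcu_struct: under
             * CONFIG_PROVE_RCU and CONFIG_TREE_SRCU this now fires the
             * "old state %d new state %d" WARN_ONCE() on all architectures.
             */
            idx = srcu_read_lock_nmisafe(&demo_srcu);
            srcu_read_unlock_nmisafe(&demo_srcu, idx);
    }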
 include/linux/srcu.h     | 44 +++++++++++++++++++++++++++++++---------
 include/linux/srcutiny.h | 12 -----------
 include/linux/srcutree.h |  7 -------
 kernel/rcu/srcutree.c    | 25 +++++++++--------------
 4 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 565f60d574847..f0814ffca34bb 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -52,8 +52,6 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #else
 /* Dummy definition for things like notifiers. Actual use gets link error. */
 struct srcu_struct { };
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
 #endif
 
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
@@ -66,6 +64,20 @@ unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp);
 unsigned long start_poll_synchronize_srcu(struct srcu_struct *ssp);
 bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie);
 
+#ifdef CONFIG_NEED_SRCU_NMI_SAFE
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp);
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp);
+#else
+static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
+{
+	return __srcu_read_lock(ssp);
+}
+static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
+{
+	__srcu_read_unlock(ssp, idx);
+}
+#endif /* CONFIG_NEED_SRCU_NMI_SAFE */
+
 #ifdef CONFIG_SRCU
 void srcu_init(void);
 #else /* #ifdef CONFIG_SRCU */
@@ -106,6 +118,18 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
+#define SRCU_NMI_UNKNOWN	0x0
+#define SRCU_NMI_UNSAFE		0x1
+#define SRCU_NMI_SAFE		0x2
+
+#if defined(CONFIG_PROVE_RCU) && defined(CONFIG_TREE_SRCU)
+void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe);
+#else
+static inline void srcu_check_nmi_safety(struct srcu_struct *ssp,
+					 bool nmi_safe) { }
+#endif
+
+
 /**
  * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing
  * @p: the pointer to fetch and protect for later dereferencing
@@ -163,6 +187,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
+	srcu_check_nmi_safety(ssp, false);
 	retval = __srcu_read_lock(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
 	return retval;
@@ -179,10 +204,8 @@ static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
-	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		retval = __srcu_read_lock_nmisafe(ssp, true);
-	else
-		retval = __srcu_read_lock(ssp);
+	srcu_check_nmi_safety(ssp, true);
+	retval = __srcu_read_lock_nmisafe(ssp);
 	rcu_lock_acquire(&(ssp)->dep_map);
 	return retval;
 }
@@ -193,6 +216,7 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
+	srcu_check_nmi_safety(ssp, false);
 	retval = __srcu_read_lock(ssp);
 	return retval;
 }
@@ -208,6 +232,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 	__releases(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
+	srcu_check_nmi_safety(ssp, false);
 	rcu_lock_release(&(ssp)->dep_map);
 	__srcu_read_unlock(ssp, idx);
 }
@@ -223,17 +248,16 @@ static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 	__releases(ssp)
 {
 	WARN_ON_ONCE(idx & ~0x1);
+	srcu_check_nmi_safety(ssp, true);
 	rcu_lock_release(&(ssp)->dep_map);
-	if (IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
-		__srcu_read_unlock_nmisafe(ssp, idx, true);
-	else
-		__srcu_read_unlock(ssp, idx);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 }
 
 /* Used by tracing, cannot be traced and cannot call lockdep. */
 static inline notrace void
 srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 {
+	srcu_check_nmi_safety(ssp, false);
 	__srcu_read_unlock(ssp, idx);
 }
 
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index f890301f123df..f3a4d65b91efd 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -89,16 +89,4 @@ static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 		 data_race(READ_ONCE(ssp->srcu_idx)),
 		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 }
-
-static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
-{
-	BUG();
-	return 0;
-}
-
-static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
-{
-	BUG();
-}
-
 #endif
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 35ffdedf86ccb..c689a81752c9a 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -43,10 +43,6 @@ struct srcu_data {
 	struct srcu_struct *ssp;
 };
 
-#define SRCU_NMI_UNKNOWN	0x0
-#define SRCU_NMI_NMI_UNSAFE	0x1
-#define SRCU_NMI_NMI_SAFE	0x2
-
 /*
  * Node in SRCU combining tree, similar in function to rcu_data.
  */
@@ -159,7 +155,4 @@ void synchronize_srcu_expedited(struct srcu_struct *ssp);
 void srcu_barrier(struct srcu_struct *ssp);
 void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf);
 
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe) __acquires(ssp);
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe) __releases(ssp);
-
 #endif
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 272830a87e566..ca4b5dcec675b 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -631,17 +631,16 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
 
+#ifdef CONFIG_PROVE_RCU
 /*
  * Check for consistent NMI safety.
  */
-static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
+void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 {
 	int nmi_safe_mask = 1 << nmi_safe;
 	int old_nmi_safe_mask;
 	struct srcu_data *sdp;
 
-	if (!IS_ENABLED(CONFIG_PROVE_RCU))
-		return;
 	/* NMI-unsafe use in NMI is a bad sign */
 	WARN_ON_ONCE(!nmi_safe && in_nmi());
 	sdp = raw_cpu_ptr(ssp->sda);
@@ -652,6 +651,8 @@ static void srcu_check_nmi_safety(struct srcu_struct *ssp, bool nmi_safe)
 	}
 	WARN_ONCE(old_nmi_safe_mask != nmi_safe_mask, "CPU %d old state %d new state %d\n", sdp->cpu, old_nmi_safe_mask, nmi_safe_mask);
 }
+EXPORT_SYMBOL_GPL(srcu_check_nmi_safety);
+#endif /* CONFIG_PROVE_RCU */
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
@@ -665,7 +666,6 @@ int __srcu_read_lock(struct srcu_struct *ssp)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	this_cpu_inc(ssp->sda->srcu_lock_count[idx].counter);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
-	srcu_check_nmi_safety(ssp, false);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock);
@@ -679,7 +679,6 @@ void __srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	smp_mb(); /* C */  /* Avoid leaking the critical section. */
 	this_cpu_inc(ssp->sda->srcu_unlock_count[idx].counter);
-	srcu_check_nmi_safety(ssp, false);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock);
 
@@ -690,7 +689,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
  * srcu_struct, but in an NMI-safe manner using RMW atomics.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
-int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
+int __srcu_read_lock_nmisafe(struct srcu_struct *ssp)
 {
 	int idx;
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
@@ -698,8 +697,6 @@ int __srcu_read_lock_nmisafe(struct srcu_struct *ssp, bool chknmisafe)
 	idx = READ_ONCE(ssp->srcu_idx) & 0x1;
 	atomic_long_inc(&sdp->srcu_lock_count[idx]);
 	smp_mb__after_atomic(); /* B */  /* Avoid leaking the critical section. */
-	if (chknmisafe)
-		srcu_check_nmi_safety(ssp, true);
 	return idx;
 }
 EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
@@ -709,14 +706,12 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock_nmisafe);
  * element of the srcu_struct. Note that this may well be a different
 * CPU than that which was incremented by the corresponding srcu_read_lock().
 */
-void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx, bool chknmisafe)
+void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx)
 {
 	struct srcu_data *sdp = raw_cpu_ptr(ssp->sda);
 
 	smp_mb__before_atomic(); /* C */  /* Avoid leaking the critical section. */
 	atomic_long_inc(&sdp->srcu_unlock_count[idx]);
-	if (chknmisafe)
-		srcu_check_nmi_safety(ssp, true);
 }
 EXPORT_SYMBOL_GPL(__srcu_read_unlock_nmisafe);
 
@@ -1163,7 +1158,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	 * SRCU read-side critical section so that the grace-period
 	 * sequence number cannot wrap around in the meantime.
 	 */
-	idx = __srcu_read_lock_nmisafe(ssp, false);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	ss_state = smp_load_acquire(&ssp->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, 0);
@@ -1196,7 +1191,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
 		srcu_funnel_exp_start(ssp, sdp_mynode, s);
-	__srcu_read_unlock_nmisafe(ssp, idx, false);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 
 	return s;
 }
@@ -1500,13 +1495,13 @@ void srcu_barrier(struct srcu_struct *ssp)
 
 	/* Initial count prevents reaching zero until all CBs are posted. */
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
-	idx = __srcu_read_lock_nmisafe(ssp, false);
+	idx = __srcu_read_lock_nmisafe(ssp);
 	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
 	else
 		for_each_possible_cpu(cpu)
 			srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
-	__srcu_read_unlock_nmisafe(ssp, idx, false);
+	__srcu_read_unlock_nmisafe(ssp, idx);
 
 	/* Remove the initial count, at which point reaching zero can happen. */
 	if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt))